00:00:00.001 Started by upstream project "autotest-spdk-v24.09-vs-dpdk-v23.11" build number 98 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3599 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.087 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.087 The recommended git tool is: git 00:00:00.088 using credential 00000000-0000-0000-0000-000000000002 00:00:00.089 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.134 Fetching changes from the remote Git repository 00:00:00.136 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.178 Using shallow fetch with depth 1 00:00:00.178 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.178 > git --version # timeout=10 00:00:00.212 > git --version # 'git version 2.39.2' 00:00:00.212 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.234 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.234 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.528 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.540 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.553 Checking out Revision 44e7d6069a399ee2647233b387d68a938882e7b7 (FETCH_HEAD) 00:00:06.553 > git config core.sparsecheckout # timeout=10 00:00:06.566 > git read-tree -mu HEAD # timeout=10 00:00:06.583 > git checkout -f 44e7d6069a399ee2647233b387d68a938882e7b7 # timeout=5 00:00:06.603 Commit message: "scripts/bmc: Rework Get NIC Info cmd parser" 00:00:06.603 > git rev-list --no-walk 44e7d6069a399ee2647233b387d68a938882e7b7 # timeout=10 00:00:06.719 [Pipeline] Start of Pipeline 00:00:06.749 [Pipeline] library 00:00:06.750 Loading library shm_lib@master 00:00:06.751 Library shm_lib@master is cached. Copying from home. 00:00:06.765 [Pipeline] node 00:00:06.775 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:06.776 [Pipeline] { 00:00:06.787 [Pipeline] catchError 00:00:06.788 [Pipeline] { 00:00:06.801 [Pipeline] wrap 00:00:06.810 [Pipeline] { 00:00:06.816 [Pipeline] stage 00:00:06.818 [Pipeline] { (Prologue) 00:00:07.030 [Pipeline] sh 00:00:07.309 + logger -p user.info -t JENKINS-CI 00:00:07.326 [Pipeline] echo 00:00:07.328 Node: GP11 00:00:07.337 [Pipeline] sh 00:00:07.635 [Pipeline] setCustomBuildProperty 00:00:07.645 [Pipeline] echo 00:00:07.646 Cleanup processes 00:00:07.651 [Pipeline] sh 00:00:07.931 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.931 1138463 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.945 [Pipeline] sh 00:00:08.228 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.228 ++ grep -v 'sudo pgrep' 00:00:08.228 ++ awk '{print $1}' 00:00:08.228 + sudo kill -9 00:00:08.228 + true 00:00:08.245 [Pipeline] cleanWs 00:00:08.258 [WS-CLEANUP] Deleting project workspace... 00:00:08.258 [WS-CLEANUP] Deferred wipeout is used... 
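Before the build starts, the pipeline kills any stale test processes left in this workspace by a previous run. A minimal consolidated sketch of the pgrep/grep/awk/kill sequence traced above, assuming the same workspace path shown in the log:

# Sketch: stale-process cleanup as traced above (workspace path taken from this log).
WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest
pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
# An empty pid list must not fail the job, mirroring the "+ true" in the trace.
[ -n "$pids" ] && sudo kill -9 $pids || true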
00:00:08.266 [WS-CLEANUP] done 00:00:08.269 [Pipeline] setCustomBuildProperty 00:00:08.282 [Pipeline] sh 00:00:08.565 + sudo git config --global --replace-all safe.directory '*' 00:00:08.630 [Pipeline] httpRequest 00:00:09.373 [Pipeline] echo 00:00:09.374 Sorcerer 10.211.164.101 is alive 00:00:09.384 [Pipeline] retry 00:00:09.386 [Pipeline] { 00:00:09.398 [Pipeline] httpRequest 00:00:09.402 HttpMethod: GET 00:00:09.403 URL: http://10.211.164.101/packages/jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:09.403 Sending request to url: http://10.211.164.101/packages/jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:09.406 Response Code: HTTP/1.1 200 OK 00:00:09.406 Success: Status code 200 is in the accepted range: 200,404 00:00:09.407 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:10.516 [Pipeline] } 00:00:10.531 [Pipeline] // retry 00:00:10.538 [Pipeline] sh 00:00:10.821 + tar --no-same-owner -xf jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:10.837 [Pipeline] httpRequest 00:00:11.189 [Pipeline] echo 00:00:11.191 Sorcerer 10.211.164.101 is alive 00:00:11.202 [Pipeline] retry 00:00:11.204 [Pipeline] { 00:00:11.222 [Pipeline] httpRequest 00:00:11.226 HttpMethod: GET 00:00:11.226 URL: http://10.211.164.101/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:11.227 Sending request to url: http://10.211.164.101/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:11.242 Response Code: HTTP/1.1 200 OK 00:00:11.242 Success: Status code 200 is in the accepted range: 200,404 00:00:11.242 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:01:38.045 [Pipeline] } 00:01:38.062 [Pipeline] // retry 00:01:38.069 [Pipeline] sh 00:01:38.354 + tar --no-same-owner -xf spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:01:40.901 [Pipeline] sh 00:01:41.188 + git -C spdk log --oneline -n5 00:01:41.188 b18e1bd62 version: v24.09.1-pre 00:01:41.188 19524ad45 version: v24.09 00:01:41.188 9756b40a3 dpdk: update submodule to include alarm_cancel fix 00:01:41.188 a808500d2 test/nvmf: disable nvmf_shutdown_tc4 on e810 00:01:41.188 3024272c6 bdev/nvme: take nvme_ctrlr.mutex when setting keys 00:01:41.207 [Pipeline] withCredentials 00:01:41.220 > git --version # timeout=10 00:01:41.233 > git --version # 'git version 2.39.2' 00:01:41.253 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:41.255 [Pipeline] { 00:01:41.263 [Pipeline] retry 00:01:41.265 [Pipeline] { 00:01:41.280 [Pipeline] sh 00:01:41.566 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:01:41.839 [Pipeline] } 00:01:41.856 [Pipeline] // retry 00:01:41.861 [Pipeline] } 00:01:41.877 [Pipeline] // withCredentials 00:01:41.887 [Pipeline] httpRequest 00:01:42.306 [Pipeline] echo 00:01:42.308 Sorcerer 10.211.164.101 is alive 00:01:42.318 [Pipeline] retry 00:01:42.320 [Pipeline] { 00:01:42.334 [Pipeline] httpRequest 00:01:42.339 HttpMethod: GET 00:01:42.339 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:42.340 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:42.352 Response Code: HTTP/1.1 200 OK 00:01:42.353 Success: Status code 200 is in the accepted range: 200,404 00:01:42.353 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:52.045 
[Pipeline] } 00:01:52.061 [Pipeline] // retry 00:01:52.067 [Pipeline] sh 00:01:52.350 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:54.267 [Pipeline] sh 00:01:54.555 + git -C dpdk log --oneline -n5 00:01:54.555 eeb0605f11 version: 23.11.0 00:01:54.555 238778122a doc: update release notes for 23.11 00:01:54.555 46aa6b3cfc doc: fix description of RSS features 00:01:54.555 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:54.555 7e421ae345 devtools: support skipping forbid rule check 00:01:54.566 [Pipeline] } 00:01:54.580 [Pipeline] // stage 00:01:54.589 [Pipeline] stage 00:01:54.591 [Pipeline] { (Prepare) 00:01:54.611 [Pipeline] writeFile 00:01:54.624 [Pipeline] sh 00:01:54.907 + logger -p user.info -t JENKINS-CI 00:01:54.921 [Pipeline] sh 00:01:55.206 + logger -p user.info -t JENKINS-CI 00:01:55.219 [Pipeline] sh 00:01:55.507 + cat autorun-spdk.conf 00:01:55.507 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:55.507 SPDK_TEST_NVMF=1 00:01:55.507 SPDK_TEST_NVME_CLI=1 00:01:55.507 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:55.507 SPDK_TEST_NVMF_NICS=e810 00:01:55.507 SPDK_TEST_VFIOUSER=1 00:01:55.507 SPDK_RUN_UBSAN=1 00:01:55.507 NET_TYPE=phy 00:01:55.507 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:55.507 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:55.516 RUN_NIGHTLY=1 00:01:55.521 [Pipeline] readFile 00:01:55.544 [Pipeline] withEnv 00:01:55.546 [Pipeline] { 00:01:55.558 [Pipeline] sh 00:01:55.846 + set -ex 00:01:55.846 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:55.846 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:55.846 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:55.846 ++ SPDK_TEST_NVMF=1 00:01:55.846 ++ SPDK_TEST_NVME_CLI=1 00:01:55.846 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:55.846 ++ SPDK_TEST_NVMF_NICS=e810 00:01:55.846 ++ SPDK_TEST_VFIOUSER=1 00:01:55.846 ++ SPDK_RUN_UBSAN=1 00:01:55.846 ++ NET_TYPE=phy 00:01:55.846 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:55.846 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:55.846 ++ RUN_NIGHTLY=1 00:01:55.846 + case $SPDK_TEST_NVMF_NICS in 00:01:55.846 + DRIVERS=ice 00:01:55.846 + [[ tcp == \r\d\m\a ]] 00:01:55.846 + [[ -n ice ]] 00:01:55.846 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:55.846 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:55.846 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:55.846 rmmod: ERROR: Module irdma is not currently loaded 00:01:55.846 rmmod: ERROR: Module i40iw is not currently loaded 00:01:55.846 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:55.846 + true 00:01:55.846 + for D in $DRIVERS 00:01:55.846 + sudo modprobe ice 00:01:55.846 + exit 0 00:01:55.857 [Pipeline] } 00:01:55.873 [Pipeline] // withEnv 00:01:55.878 [Pipeline] } 00:01:55.892 [Pipeline] // stage 00:01:55.901 [Pipeline] catchError 00:01:55.903 [Pipeline] { 00:01:55.916 [Pipeline] timeout 00:01:55.917 Timeout set to expire in 1 hr 0 min 00:01:55.918 [Pipeline] { 00:01:55.933 [Pipeline] stage 00:01:55.935 [Pipeline] { (Tests) 00:01:55.950 [Pipeline] sh 00:01:56.236 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:56.236 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:56.236 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:56.236 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:56.236 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:56.236 + 
DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:56.236 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:56.236 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:56.236 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:56.236 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:56.236 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:56.236 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:56.237 + source /etc/os-release 00:01:56.237 ++ NAME='Fedora Linux' 00:01:56.237 ++ VERSION='39 (Cloud Edition)' 00:01:56.237 ++ ID=fedora 00:01:56.237 ++ VERSION_ID=39 00:01:56.237 ++ VERSION_CODENAME= 00:01:56.237 ++ PLATFORM_ID=platform:f39 00:01:56.237 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:56.237 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:56.237 ++ LOGO=fedora-logo-icon 00:01:56.237 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:56.237 ++ HOME_URL=https://fedoraproject.org/ 00:01:56.237 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:56.237 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:56.237 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:56.237 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:56.237 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:56.237 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:56.237 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:56.237 ++ SUPPORT_END=2024-11-12 00:01:56.237 ++ VARIANT='Cloud Edition' 00:01:56.237 ++ VARIANT_ID=cloud 00:01:56.237 + uname -a 00:01:56.237 Linux spdk-gp-11 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:56.237 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:57.177 Hugepages 00:01:57.177 node hugesize free / total 00:01:57.177 node0 1048576kB 0 / 0 00:01:57.177 node0 2048kB 0 / 0 00:01:57.177 node1 1048576kB 0 / 0 00:01:57.177 node1 2048kB 0 / 0 00:01:57.177 00:01:57.177 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:57.177 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:01:57.177 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:01:57.177 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:01:57.177 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:01:57.177 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:57.177 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:57.177 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:01:57.177 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:57.177 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:57.177 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:57.177 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:57.177 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:57.177 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:57.177 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:57.178 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:57.178 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:57.178 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:57.178 + rm -f /tmp/spdk-ld-path 00:01:57.178 + source autorun-spdk.conf 00:01:57.178 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:57.178 ++ SPDK_TEST_NVMF=1 00:01:57.178 ++ SPDK_TEST_NVME_CLI=1 00:01:57.178 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:57.178 ++ SPDK_TEST_NVMF_NICS=e810 00:01:57.178 ++ SPDK_TEST_VFIOUSER=1 00:01:57.178 ++ SPDK_RUN_UBSAN=1 00:01:57.178 ++ NET_TYPE=phy 00:01:57.178 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:57.178 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:57.178 ++ RUN_NIGHTLY=1 
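The whole run is driven by the autorun-spdk.conf written and sourced above. A condensed sketch of the NIC preparation those traced steps perform for this configuration (values copied from the log; the rmmod failures are expected when no RDMA modules are loaded):

# Condensed from the trace above: prepare the E810 NIC for the tcp/phy run.
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
DRIVERS=ice                                              # because SPDK_TEST_NVMF_NICS=e810
sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 || true  # "not currently loaded" errors are tolerated
for D in $DRIVERS; do
    sudo modprobe "$D"
done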
00:01:57.178 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:57.178 + [[ -n '' ]] 00:01:57.178 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:57.178 + for M in /var/spdk/build-*-manifest.txt 00:01:57.178 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:57.178 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:57.178 + for M in /var/spdk/build-*-manifest.txt 00:01:57.178 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:57.178 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:57.178 + for M in /var/spdk/build-*-manifest.txt 00:01:57.178 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:57.178 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:57.178 ++ uname 00:01:57.178 + [[ Linux == \L\i\n\u\x ]] 00:01:57.178 + sudo dmesg -T 00:01:57.437 + sudo dmesg --clear 00:01:57.437 + dmesg_pid=1139180 00:01:57.437 + [[ Fedora Linux == FreeBSD ]] 00:01:57.437 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:57.437 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:57.437 + sudo dmesg -Tw 00:01:57.437 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:57.437 + [[ -x /usr/src/fio-static/fio ]] 00:01:57.437 + export FIO_BIN=/usr/src/fio-static/fio 00:01:57.437 + FIO_BIN=/usr/src/fio-static/fio 00:01:57.437 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:57.437 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:57.437 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:57.437 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:57.437 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:57.437 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:57.437 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:57.437 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:57.437 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:57.437 Test configuration: 00:01:57.437 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:57.437 SPDK_TEST_NVMF=1 00:01:57.437 SPDK_TEST_NVME_CLI=1 00:01:57.437 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:57.437 SPDK_TEST_NVMF_NICS=e810 00:01:57.437 SPDK_TEST_VFIOUSER=1 00:01:57.437 SPDK_RUN_UBSAN=1 00:01:57.437 NET_TYPE=phy 00:01:57.437 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:57.437 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:57.437 RUN_NIGHTLY=1 14:17:49 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:01:57.437 14:17:49 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:57.437 14:17:49 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:57.437 14:17:49 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:57.437 14:17:49 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:57.437 14:17:49 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:57.437 14:17:49 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:01:57.437 14:17:49 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:57.437 14:17:49 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:57.437 14:17:49 -- paths/export.sh@5 -- $ export PATH 00:01:57.437 14:17:49 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:57.437 14:17:49 -- common/autobuild_common.sh@478 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:57.437 14:17:49 -- common/autobuild_common.sh@479 -- $ date +%s 00:01:57.437 14:17:49 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1730553469.XXXXXX 00:01:57.437 14:17:49 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1730553469.SjcP9f 00:01:57.437 14:17:49 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:01:57.437 14:17:49 -- common/autobuild_common.sh@485 -- $ '[' -n v23.11 ']' 00:01:57.437 14:17:49 -- common/autobuild_common.sh@486 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:57.437 14:17:49 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:01:57.437 14:17:49 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:57.437 14:17:49 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:57.437 14:17:49 -- common/autobuild_common.sh@495 -- $ get_config_params 00:01:57.437 14:17:49 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:01:57.437 14:17:49 -- common/autotest_common.sh@10 -- $ set +x 00:01:57.437 14:17:49 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:01:57.437 14:17:49 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:01:57.437 14:17:49 -- pm/common@17 -- $ local monitor 00:01:57.437 14:17:49 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:57.437 14:17:49 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:57.437 14:17:49 -- pm/common@19 -- $ for 
monitor in "${MONITOR_RESOURCES[@]}" 00:01:57.437 14:17:49 -- pm/common@21 -- $ date +%s 00:01:57.437 14:17:49 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:57.437 14:17:49 -- pm/common@21 -- $ date +%s 00:01:57.437 14:17:49 -- pm/common@25 -- $ sleep 1 00:01:57.437 14:17:49 -- pm/common@21 -- $ date +%s 00:01:57.437 14:17:49 -- pm/common@21 -- $ date +%s 00:01:57.437 14:17:49 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730553469 00:01:57.437 14:17:49 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730553469 00:01:57.437 14:17:49 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730553469 00:01:57.437 14:17:49 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730553469 00:01:57.437 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730553469_collect-cpu-load.pm.log 00:01:57.437 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730553469_collect-vmstat.pm.log 00:01:57.437 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730553469_collect-cpu-temp.pm.log 00:01:57.437 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730553469_collect-bmc-pm.bmc.pm.log 00:01:58.377 14:17:50 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:01:58.377 14:17:50 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:58.377 14:17:50 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:58.377 14:17:50 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:58.377 14:17:50 -- spdk/autobuild.sh@16 -- $ date -u 00:01:58.377 Sat Nov 2 01:17:50 PM UTC 2024 00:01:58.377 14:17:50 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:58.377 v24.09-rc1-9-gb18e1bd62 00:01:58.377 14:17:50 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:58.377 14:17:50 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:58.377 14:17:50 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:58.377 14:17:50 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:58.377 14:17:50 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:58.377 14:17:50 -- common/autotest_common.sh@10 -- $ set +x 00:01:58.377 ************************************ 00:01:58.377 START TEST ubsan 00:01:58.377 ************************************ 00:01:58.377 14:17:50 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:01:58.377 using ubsan 00:01:58.377 00:01:58.377 real 0m0.000s 00:01:58.377 user 0m0.000s 00:01:58.377 sys 0m0.000s 00:01:58.377 14:17:50 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:58.377 14:17:50 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:58.377 ************************************ 00:01:58.377 END TEST ubsan 00:01:58.377 ************************************ 00:01:58.377 14:17:50 -- 
spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:01:58.377 14:17:50 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:58.377 14:17:50 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:58.377 14:17:50 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:01:58.377 14:17:50 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:58.377 14:17:50 -- common/autotest_common.sh@10 -- $ set +x 00:01:58.636 ************************************ 00:01:58.636 START TEST build_native_dpdk 00:01:58.636 ************************************ 00:01:58.636 14:17:50 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk 00:01:58.636 14:17:50 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:58.636 14:17:50 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:58.636 14:17:50 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:58.636 14:17:50 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:01:58.636 14:17:50 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:58.636 14:17:50 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:58.636 14:17:50 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:58.636 14:17:50 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:58.636 14:17:50 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:58.636 14:17:50 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:58.636 14:17:50 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:58.636 14:17:50 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:58.636 14:17:50 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:58.636 14:17:50 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:58.636 14:17:50 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:58.636 14:17:50 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:58.636 14:17:50 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:58.636 14:17:50 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:01:58.636 14:17:50 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:58.636 14:17:50 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:01:58.636 eeb0605f11 version: 23.11.0 00:01:58.636 238778122a doc: update release notes for 23.11 00:01:58.636 46aa6b3cfc doc: fix description of RSS features 00:01:58.636 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:58.636 7e421ae345 devtools: support skipping forbid rule check 00:01:58.636 14:17:50 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:58.636 14:17:50 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:58.636 14:17:50 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:01:58.636 14:17:50 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:58.636 14:17:50 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:58.636 14:17:50 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:58.636 14:17:50 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:58.636 14:17:50 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:58.636 14:17:50 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:58.636 14:17:50 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:58.636 14:17:50 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:58.636 14:17:50 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:58.636 14:17:50 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:58.636 14:17:50 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:58.636 14:17:50 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:58.636 14:17:50 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:01:58.636 14:17:50 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:58.636 14:17:50 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:01:58.636 14:17:50 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:01:58.636 14:17:50 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:58.636 14:17:50 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:58.636 14:17:50 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:58.636 14:17:50 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:58.636 14:17:50 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:58.636 14:17:50 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:58.636 14:17:50 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:01:58.636 14:17:50 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:58.636 14:17:50 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:58.636 14:17:50 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:58.636 14:17:50 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:58.636 14:17:50 
build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:01:58.636 14:17:50 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:58.636 14:17:50 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:58.636 14:17:50 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:01:58.636 14:17:50 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:01:58.636 14:17:50 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:01:58.636 14:17:50 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:01:58.636 14:17:50 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:01:58.636 14:17:50 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:01:58.636 14:17:50 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:01:58.636 14:17:50 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:58.636 14:17:50 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:01:58.637 14:17:50 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:01:58.637 14:17:50 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:58.637 14:17:50 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:01:58.637 14:17:50 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:58.637 patching file config/rte_config.h 00:01:58.637 Hunk #1 succeeded at 60 (offset 1 line). 00:01:58.637 14:17:50 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0 00:01:58.637 14:17:50 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:01:58.637 14:17:50 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:58.637 14:17:50 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:58.637 14:17:50 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:58.637 14:17:50 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:58.637 14:17:50 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:58.637 14:17:50 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:58.637 14:17:50 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:01:58.637 14:17:50 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:58.637 14:17:50 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:58.637 14:17:50 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:58.637 14:17:50 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:58.637 14:17:50 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:01:58.637 14:17:50 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:58.637 14:17:50 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:58.637 14:17:50 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:01:58.637 14:17:50 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:01:58.637 14:17:50 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:01:58.637 14:17:50 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:01:58.637 14:17:50 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:01:58.637 14:17:50 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:01:58.637 14:17:50 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:01:58.637 14:17:50 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:58.637 14:17:50 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:01:58.637 14:17:50 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:01:58.637 14:17:50 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:58.637 14:17:50 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:01:58.637 14:17:50 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:01:58.637 14:17:50 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:01:58.637 patching file lib/pcapng/rte_pcapng.c 00:01:58.637 14:17:50 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 23.11.0 24.07.0 00:01:58.637 14:17:50 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 23.11.0 '>=' 24.07.0 00:01:58.637 14:17:50 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:58.637 14:17:50 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:58.637 14:17:50 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:58.637 14:17:50 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:58.637 14:17:50 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:58.637 14:17:50 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:58.637 14:17:50 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:01:58.637 14:17:50 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:58.637 14:17:50 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:58.637 14:17:50 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:58.637 14:17:50 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:58.637 14:17:50 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:01:58.637 14:17:50 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:58.637 14:17:50 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:58.637 14:17:50 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:01:58.637 14:17:50 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:01:58.637 14:17:50 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:01:58.637 14:17:50 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:01:58.637 14:17:50 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:01:58.637 14:17:50 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:01:58.637 14:17:50 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:01:58.637 14:17:50 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:58.637 14:17:50 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:01:58.637 14:17:50 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:01:58.637 14:17:50 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:58.637 14:17:50 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:01:58.637 14:17:50 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:01:58.637 14:17:50 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:01:58.637 14:17:50 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:01:58.637 14:17:50 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']' 00:01:58.637 14:17:50 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:58.637 14:17:50 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:02.828 The Meson build system 00:02:02.828 Version: 1.5.0 00:02:02.828 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:02:02.828 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:02:02.828 Build type: native build 00:02:02.828 Program cat found: YES (/usr/bin/cat) 00:02:02.828 Project name: DPDK 00:02:02.828 Project version: 23.11.0 00:02:02.828 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:02.828 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:02.828 Host machine cpu family: x86_64 00:02:02.828 Host machine cpu: x86_64 00:02:02.828 Message: ## Building in Developer Mode ## 00:02:02.828 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:02.828 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:02:02.828 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:02:02.828 Program python3 found: YES (/usr/bin/python3) 00:02:02.828 Program cat found: YES (/usr/bin/cat) 00:02:02.828 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
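The lt/ge checks traced above (for example `lt 23.11.0 21.11.0` and `ge 23.11.0 24.07.0`) decide which backported patches apply to this DPDK version before meson is configured. A simplified sketch of the component-wise comparison that cmp_versions performs, reconstructed from the trace rather than copied from scripts/common.sh:

# Simplified reconstruction of the cmp_versions logic traced above.
# Versions are split on ".-:" and compared component by component.
ver_lt() {    # returns 0 (true) when $1 < $2
    local -a v1 v2
    IFS=.-: read -ra v1 <<< "$1"
    IFS=.-: read -ra v2 <<< "$2"
    local i
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        (( 10#${v1[i]:-0} < 10#${v2[i]:-0} )) && return 0   # 10# forces base-10
        (( 10#${v1[i]:-0} > 10#${v2[i]:-0} )) && return 1
    done
    return 1    # equal versions are not less-than
}

ver_lt 23.11.0 24.07.0 && echo "pcapng patch applies"   # 23 < 24, as in the trace above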
00:02:02.828 Compiler for C supports arguments -march=native: YES 00:02:02.828 Checking for size of "void *" : 8 00:02:02.828 Checking for size of "void *" : 8 (cached) 00:02:02.828 Library m found: YES 00:02:02.828 Library numa found: YES 00:02:02.828 Has header "numaif.h" : YES 00:02:02.828 Library fdt found: NO 00:02:02.828 Library execinfo found: NO 00:02:02.828 Has header "execinfo.h" : YES 00:02:02.828 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:02.828 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:02.828 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:02.828 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:02.828 Run-time dependency openssl found: YES 3.1.1 00:02:02.828 Run-time dependency libpcap found: YES 1.10.4 00:02:02.828 Has header "pcap.h" with dependency libpcap: YES 00:02:02.828 Compiler for C supports arguments -Wcast-qual: YES 00:02:02.828 Compiler for C supports arguments -Wdeprecated: YES 00:02:02.828 Compiler for C supports arguments -Wformat: YES 00:02:02.828 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:02.828 Compiler for C supports arguments -Wformat-security: NO 00:02:02.828 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:02.828 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:02.828 Compiler for C supports arguments -Wnested-externs: YES 00:02:02.828 Compiler for C supports arguments -Wold-style-definition: YES 00:02:02.828 Compiler for C supports arguments -Wpointer-arith: YES 00:02:02.829 Compiler for C supports arguments -Wsign-compare: YES 00:02:02.829 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:02.829 Compiler for C supports arguments -Wundef: YES 00:02:02.829 Compiler for C supports arguments -Wwrite-strings: YES 00:02:02.829 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:02.829 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:02.829 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:02.829 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:02.829 Program objdump found: YES (/usr/bin/objdump) 00:02:02.829 Compiler for C supports arguments -mavx512f: YES 00:02:02.829 Checking if "AVX512 checking" compiles: YES 00:02:02.829 Fetching value of define "__SSE4_2__" : 1 00:02:02.829 Fetching value of define "__AES__" : 1 00:02:02.829 Fetching value of define "__AVX__" : 1 00:02:02.829 Fetching value of define "__AVX2__" : (undefined) 00:02:02.829 Fetching value of define "__AVX512BW__" : (undefined) 00:02:02.829 Fetching value of define "__AVX512CD__" : (undefined) 00:02:02.829 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:02.829 Fetching value of define "__AVX512F__" : (undefined) 00:02:02.829 Fetching value of define "__AVX512VL__" : (undefined) 00:02:02.829 Fetching value of define "__PCLMUL__" : 1 00:02:02.829 Fetching value of define "__RDRND__" : 1 00:02:02.829 Fetching value of define "__RDSEED__" : (undefined) 00:02:02.829 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:02.829 Fetching value of define "__znver1__" : (undefined) 00:02:02.829 Fetching value of define "__znver2__" : (undefined) 00:02:02.829 Fetching value of define "__znver3__" : (undefined) 00:02:02.829 Fetching value of define "__znver4__" : (undefined) 00:02:02.829 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:02.829 Message: lib/log: Defining dependency "log" 00:02:02.829 Message: lib/kvargs: Defining dependency 
"kvargs" 00:02:02.829 Message: lib/telemetry: Defining dependency "telemetry" 00:02:02.829 Checking for function "getentropy" : NO 00:02:02.829 Message: lib/eal: Defining dependency "eal" 00:02:02.829 Message: lib/ring: Defining dependency "ring" 00:02:02.829 Message: lib/rcu: Defining dependency "rcu" 00:02:02.829 Message: lib/mempool: Defining dependency "mempool" 00:02:02.829 Message: lib/mbuf: Defining dependency "mbuf" 00:02:02.829 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:02.829 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:02.829 Compiler for C supports arguments -mpclmul: YES 00:02:02.829 Compiler for C supports arguments -maes: YES 00:02:02.829 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:02.829 Compiler for C supports arguments -mavx512bw: YES 00:02:02.829 Compiler for C supports arguments -mavx512dq: YES 00:02:02.829 Compiler for C supports arguments -mavx512vl: YES 00:02:02.829 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:02.829 Compiler for C supports arguments -mavx2: YES 00:02:02.829 Compiler for C supports arguments -mavx: YES 00:02:02.829 Message: lib/net: Defining dependency "net" 00:02:02.829 Message: lib/meter: Defining dependency "meter" 00:02:02.829 Message: lib/ethdev: Defining dependency "ethdev" 00:02:02.829 Message: lib/pci: Defining dependency "pci" 00:02:02.829 Message: lib/cmdline: Defining dependency "cmdline" 00:02:02.829 Message: lib/metrics: Defining dependency "metrics" 00:02:02.829 Message: lib/hash: Defining dependency "hash" 00:02:02.829 Message: lib/timer: Defining dependency "timer" 00:02:02.829 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:02.829 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:02:02.829 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:02:02.829 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:02:02.829 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:02:02.829 Message: lib/acl: Defining dependency "acl" 00:02:02.829 Message: lib/bbdev: Defining dependency "bbdev" 00:02:02.829 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:02.829 Run-time dependency libelf found: YES 0.191 00:02:02.829 Message: lib/bpf: Defining dependency "bpf" 00:02:02.829 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:02.829 Message: lib/compressdev: Defining dependency "compressdev" 00:02:02.829 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:02.829 Message: lib/distributor: Defining dependency "distributor" 00:02:02.829 Message: lib/dmadev: Defining dependency "dmadev" 00:02:02.829 Message: lib/efd: Defining dependency "efd" 00:02:02.829 Message: lib/eventdev: Defining dependency "eventdev" 00:02:02.829 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:02.829 Message: lib/gpudev: Defining dependency "gpudev" 00:02:02.829 Message: lib/gro: Defining dependency "gro" 00:02:02.829 Message: lib/gso: Defining dependency "gso" 00:02:02.829 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:02.829 Message: lib/jobstats: Defining dependency "jobstats" 00:02:02.829 Message: lib/latencystats: Defining dependency "latencystats" 00:02:02.829 Message: lib/lpm: Defining dependency "lpm" 00:02:02.829 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:02.829 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:02.829 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:02.829 Compiler for C 
supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:02.829 Message: lib/member: Defining dependency "member" 00:02:02.829 Message: lib/pcapng: Defining dependency "pcapng" 00:02:02.829 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:02.829 Message: lib/power: Defining dependency "power" 00:02:02.829 Message: lib/rawdev: Defining dependency "rawdev" 00:02:02.829 Message: lib/regexdev: Defining dependency "regexdev" 00:02:02.829 Message: lib/mldev: Defining dependency "mldev" 00:02:02.829 Message: lib/rib: Defining dependency "rib" 00:02:02.829 Message: lib/reorder: Defining dependency "reorder" 00:02:02.829 Message: lib/sched: Defining dependency "sched" 00:02:02.829 Message: lib/security: Defining dependency "security" 00:02:02.829 Message: lib/stack: Defining dependency "stack" 00:02:02.829 Has header "linux/userfaultfd.h" : YES 00:02:02.829 Has header "linux/vduse.h" : YES 00:02:02.829 Message: lib/vhost: Defining dependency "vhost" 00:02:02.829 Message: lib/ipsec: Defining dependency "ipsec" 00:02:02.829 Message: lib/pdcp: Defining dependency "pdcp" 00:02:02.829 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:02.829 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:02.829 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:02:02.829 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:02.829 Message: lib/fib: Defining dependency "fib" 00:02:02.829 Message: lib/port: Defining dependency "port" 00:02:02.829 Message: lib/pdump: Defining dependency "pdump" 00:02:02.829 Message: lib/table: Defining dependency "table" 00:02:02.829 Message: lib/pipeline: Defining dependency "pipeline" 00:02:02.829 Message: lib/graph: Defining dependency "graph" 00:02:02.829 Message: lib/node: Defining dependency "node" 00:02:04.780 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:04.780 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:04.780 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:04.780 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:04.780 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:04.780 Compiler for C supports arguments -Wno-unused-value: YES 00:02:04.780 Compiler for C supports arguments -Wno-format: YES 00:02:04.780 Compiler for C supports arguments -Wno-format-security: YES 00:02:04.780 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:04.780 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:04.780 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:04.780 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:04.780 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:04.780 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:04.780 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:04.780 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:04.780 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:04.780 Has header "sys/epoll.h" : YES 00:02:04.780 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:04.780 Configuring doxy-api-html.conf using configuration 00:02:04.780 Configuring doxy-api-man.conf using configuration 00:02:04.780 Program mandb found: YES (/usr/bin/mandb) 00:02:04.780 Program sphinx-build found: NO 00:02:04.780 Configuring rte_build_config.h using configuration 00:02:04.780 Message: 00:02:04.780 ================= 00:02:04.780 Applications Enabled 
00:02:04.780 ================= 00:02:04.780 00:02:04.780 apps: 00:02:04.780 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:02:04.780 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:02:04.780 test-pmd, test-regex, test-sad, test-security-perf, 00:02:04.780 00:02:04.780 Message: 00:02:04.780 ================= 00:02:04.780 Libraries Enabled 00:02:04.780 ================= 00:02:04.780 00:02:04.780 libs: 00:02:04.780 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:04.780 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:02:04.780 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:02:04.780 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:02:04.780 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:02:04.780 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:02:04.780 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:02:04.780 00:02:04.780 00:02:04.780 Message: 00:02:04.780 =============== 00:02:04.780 Drivers Enabled 00:02:04.780 =============== 00:02:04.780 00:02:04.780 common: 00:02:04.780 00:02:04.780 bus: 00:02:04.780 pci, vdev, 00:02:04.780 mempool: 00:02:04.780 ring, 00:02:04.780 dma: 00:02:04.780 00:02:04.780 net: 00:02:04.780 i40e, 00:02:04.780 raw: 00:02:04.780 00:02:04.780 crypto: 00:02:04.780 00:02:04.780 compress: 00:02:04.780 00:02:04.780 regex: 00:02:04.780 00:02:04.780 ml: 00:02:04.780 00:02:04.780 vdpa: 00:02:04.780 00:02:04.780 event: 00:02:04.780 00:02:04.780 baseband: 00:02:04.780 00:02:04.780 gpu: 00:02:04.780 00:02:04.780 00:02:04.780 Message: 00:02:04.780 ================= 00:02:04.780 Content Skipped 00:02:04.780 ================= 00:02:04.780 00:02:04.780 apps: 00:02:04.780 00:02:04.780 libs: 00:02:04.780 00:02:04.780 drivers: 00:02:04.780 common/cpt: not in enabled drivers build config 00:02:04.780 common/dpaax: not in enabled drivers build config 00:02:04.780 common/iavf: not in enabled drivers build config 00:02:04.780 common/idpf: not in enabled drivers build config 00:02:04.780 common/mvep: not in enabled drivers build config 00:02:04.780 common/octeontx: not in enabled drivers build config 00:02:04.780 bus/auxiliary: not in enabled drivers build config 00:02:04.780 bus/cdx: not in enabled drivers build config 00:02:04.780 bus/dpaa: not in enabled drivers build config 00:02:04.780 bus/fslmc: not in enabled drivers build config 00:02:04.780 bus/ifpga: not in enabled drivers build config 00:02:04.780 bus/platform: not in enabled drivers build config 00:02:04.780 bus/vmbus: not in enabled drivers build config 00:02:04.780 common/cnxk: not in enabled drivers build config 00:02:04.780 common/mlx5: not in enabled drivers build config 00:02:04.780 common/nfp: not in enabled drivers build config 00:02:04.780 common/qat: not in enabled drivers build config 00:02:04.780 common/sfc_efx: not in enabled drivers build config 00:02:04.780 mempool/bucket: not in enabled drivers build config 00:02:04.780 mempool/cnxk: not in enabled drivers build config 00:02:04.780 mempool/dpaa: not in enabled drivers build config 00:02:04.780 mempool/dpaa2: not in enabled drivers build config 00:02:04.780 mempool/octeontx: not in enabled drivers build config 00:02:04.780 mempool/stack: not in enabled drivers build config 00:02:04.780 dma/cnxk: not in enabled drivers build config 00:02:04.780 dma/dpaa: not in enabled drivers build config 00:02:04.780 dma/dpaa2: not in enabled 
drivers build config 00:02:04.780 dma/hisilicon: not in enabled drivers build config 00:02:04.780 dma/idxd: not in enabled drivers build config 00:02:04.780 dma/ioat: not in enabled drivers build config 00:02:04.780 dma/skeleton: not in enabled drivers build config 00:02:04.780 net/af_packet: not in enabled drivers build config 00:02:04.780 net/af_xdp: not in enabled drivers build config 00:02:04.780 net/ark: not in enabled drivers build config 00:02:04.780 net/atlantic: not in enabled drivers build config 00:02:04.780 net/avp: not in enabled drivers build config 00:02:04.780 net/axgbe: not in enabled drivers build config 00:02:04.780 net/bnx2x: not in enabled drivers build config 00:02:04.780 net/bnxt: not in enabled drivers build config 00:02:04.780 net/bonding: not in enabled drivers build config 00:02:04.780 net/cnxk: not in enabled drivers build config 00:02:04.780 net/cpfl: not in enabled drivers build config 00:02:04.780 net/cxgbe: not in enabled drivers build config 00:02:04.780 net/dpaa: not in enabled drivers build config 00:02:04.780 net/dpaa2: not in enabled drivers build config 00:02:04.780 net/e1000: not in enabled drivers build config 00:02:04.780 net/ena: not in enabled drivers build config 00:02:04.780 net/enetc: not in enabled drivers build config 00:02:04.780 net/enetfec: not in enabled drivers build config 00:02:04.780 net/enic: not in enabled drivers build config 00:02:04.780 net/failsafe: not in enabled drivers build config 00:02:04.780 net/fm10k: not in enabled drivers build config 00:02:04.780 net/gve: not in enabled drivers build config 00:02:04.780 net/hinic: not in enabled drivers build config 00:02:04.780 net/hns3: not in enabled drivers build config 00:02:04.780 net/iavf: not in enabled drivers build config 00:02:04.780 net/ice: not in enabled drivers build config 00:02:04.780 net/idpf: not in enabled drivers build config 00:02:04.780 net/igc: not in enabled drivers build config 00:02:04.780 net/ionic: not in enabled drivers build config 00:02:04.780 net/ipn3ke: not in enabled drivers build config 00:02:04.780 net/ixgbe: not in enabled drivers build config 00:02:04.780 net/mana: not in enabled drivers build config 00:02:04.780 net/memif: not in enabled drivers build config 00:02:04.780 net/mlx4: not in enabled drivers build config 00:02:04.780 net/mlx5: not in enabled drivers build config 00:02:04.780 net/mvneta: not in enabled drivers build config 00:02:04.780 net/mvpp2: not in enabled drivers build config 00:02:04.780 net/netvsc: not in enabled drivers build config 00:02:04.780 net/nfb: not in enabled drivers build config 00:02:04.780 net/nfp: not in enabled drivers build config 00:02:04.780 net/ngbe: not in enabled drivers build config 00:02:04.780 net/null: not in enabled drivers build config 00:02:04.780 net/octeontx: not in enabled drivers build config 00:02:04.780 net/octeon_ep: not in enabled drivers build config 00:02:04.780 net/pcap: not in enabled drivers build config 00:02:04.780 net/pfe: not in enabled drivers build config 00:02:04.780 net/qede: not in enabled drivers build config 00:02:04.780 net/ring: not in enabled drivers build config 00:02:04.780 net/sfc: not in enabled drivers build config 00:02:04.780 net/softnic: not in enabled drivers build config 00:02:04.780 net/tap: not in enabled drivers build config 00:02:04.780 net/thunderx: not in enabled drivers build config 00:02:04.781 net/txgbe: not in enabled drivers build config 00:02:04.781 net/vdev_netvsc: not in enabled drivers build config 00:02:04.781 net/vhost: not in enabled drivers 
build config 00:02:04.781 net/virtio: not in enabled drivers build config 00:02:04.781 net/vmxnet3: not in enabled drivers build config 00:02:04.781 raw/cnxk_bphy: not in enabled drivers build config 00:02:04.781 raw/cnxk_gpio: not in enabled drivers build config 00:02:04.781 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:04.781 raw/ifpga: not in enabled drivers build config 00:02:04.781 raw/ntb: not in enabled drivers build config 00:02:04.781 raw/skeleton: not in enabled drivers build config 00:02:04.781 crypto/armv8: not in enabled drivers build config 00:02:04.781 crypto/bcmfs: not in enabled drivers build config 00:02:04.781 crypto/caam_jr: not in enabled drivers build config 00:02:04.781 crypto/ccp: not in enabled drivers build config 00:02:04.781 crypto/cnxk: not in enabled drivers build config 00:02:04.781 crypto/dpaa_sec: not in enabled drivers build config 00:02:04.781 crypto/dpaa2_sec: not in enabled drivers build config 00:02:04.781 crypto/ipsec_mb: not in enabled drivers build config 00:02:04.781 crypto/mlx5: not in enabled drivers build config 00:02:04.781 crypto/mvsam: not in enabled drivers build config 00:02:04.781 crypto/nitrox: not in enabled drivers build config 00:02:04.781 crypto/null: not in enabled drivers build config 00:02:04.781 crypto/octeontx: not in enabled drivers build config 00:02:04.781 crypto/openssl: not in enabled drivers build config 00:02:04.781 crypto/scheduler: not in enabled drivers build config 00:02:04.781 crypto/uadk: not in enabled drivers build config 00:02:04.781 crypto/virtio: not in enabled drivers build config 00:02:04.781 compress/isal: not in enabled drivers build config 00:02:04.781 compress/mlx5: not in enabled drivers build config 00:02:04.781 compress/octeontx: not in enabled drivers build config 00:02:04.781 compress/zlib: not in enabled drivers build config 00:02:04.781 regex/mlx5: not in enabled drivers build config 00:02:04.781 regex/cn9k: not in enabled drivers build config 00:02:04.781 ml/cnxk: not in enabled drivers build config 00:02:04.781 vdpa/ifc: not in enabled drivers build config 00:02:04.781 vdpa/mlx5: not in enabled drivers build config 00:02:04.781 vdpa/nfp: not in enabled drivers build config 00:02:04.781 vdpa/sfc: not in enabled drivers build config 00:02:04.781 event/cnxk: not in enabled drivers build config 00:02:04.781 event/dlb2: not in enabled drivers build config 00:02:04.781 event/dpaa: not in enabled drivers build config 00:02:04.781 event/dpaa2: not in enabled drivers build config 00:02:04.781 event/dsw: not in enabled drivers build config 00:02:04.781 event/opdl: not in enabled drivers build config 00:02:04.781 event/skeleton: not in enabled drivers build config 00:02:04.781 event/sw: not in enabled drivers build config 00:02:04.781 event/octeontx: not in enabled drivers build config 00:02:04.781 baseband/acc: not in enabled drivers build config 00:02:04.781 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:04.781 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:04.781 baseband/la12xx: not in enabled drivers build config 00:02:04.781 baseband/null: not in enabled drivers build config 00:02:04.781 baseband/turbo_sw: not in enabled drivers build config 00:02:04.781 gpu/cuda: not in enabled drivers build config 00:02:04.781 00:02:04.781 00:02:04.781 Build targets in project: 220 00:02:04.781 00:02:04.781 DPDK 23.11.0 00:02:04.781 00:02:04.781 User defined options 00:02:04.781 libdir : lib 00:02:04.781 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 
00:02:04.781 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:04.781 c_link_args : 00:02:04.781 enable_docs : false 00:02:04.781 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:04.781 enable_kmods : false 00:02:04.781 machine : native 00:02:04.781 tests : false 00:02:04.781 00:02:04.781 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:04.781 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:02:05.047 14:17:56 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 00:02:05.047 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:05.047 [1/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:05.047 [2/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:05.047 [3/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:05.047 [4/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:05.047 [5/710] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:05.047 [6/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:05.047 [7/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:05.048 [8/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:05.312 [9/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:05.312 [10/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:05.312 [11/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:05.312 [12/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:05.312 [13/710] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:05.312 [14/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:05.312 [15/710] Linking static target lib/librte_kvargs.a 00:02:05.312 [16/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:05.312 [17/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:05.312 [18/710] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:05.312 [19/710] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:05.312 [20/710] Linking static target lib/librte_log.a 00:02:05.572 [21/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:05.572 [22/710] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.149 [23/710] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.149 [24/710] Linking target lib/librte_log.so.24.0 00:02:06.149 [25/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:06.149 [26/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:06.149 [27/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:06.149 [28/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:06.149 [29/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:06.149 [30/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:06.149 [31/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:06.149 [32/710] Compiling C object 
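[Editor's note] The "User defined options" summary printed above corresponds, approximately, to the meson/ninja invocation sketched below. This is a reconstruction for reference only, using the option values and paths exactly as they appear in the log; the CI job actually drives the build through common/autobuild_common.sh, whose wrapper logic is not shown here, and the deprecation warning above indicates it invoked meson without the explicit "setup" subcommand.

    # Sketch of the DPDK configuration implied by the log (values copied from the
    # "User defined options" block); the real CI wrapper script may differ.
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
    meson setup build-tmp \
      --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build \
      --libdir=lib \
      -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
      -Denable_docs=false \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base \
      -Denable_kmods=false \
      -Dtests=false \
      -Dmachine=native
    # Build and install steps as run later in this log (710 build targets, 48 jobs).
    ninja -C build-tmp -j48
    ninja -C build-tmp -j48 install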
lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:06.149 [33/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:06.149 [34/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:06.149 [35/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:06.149 [36/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:06.149 [37/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:06.149 [38/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:06.412 [39/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:06.412 [40/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:06.412 [41/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:06.412 [42/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:06.412 [43/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:06.412 [44/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:06.412 [45/710] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:06.412 [46/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:06.412 [47/710] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:06.412 [48/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:06.412 [49/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:06.412 [50/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:06.412 [51/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:06.412 [52/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:06.412 [53/710] Linking target lib/librte_kvargs.so.24.0 00:02:06.412 [54/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:06.412 [55/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:06.412 [56/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:06.412 [57/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:06.412 [58/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:06.412 [59/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:06.412 [60/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:06.412 [61/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:06.412 [62/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:06.680 [63/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:06.680 [64/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:06.680 [65/710] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:06.680 [66/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:06.940 [67/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:06.941 [68/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:06.941 [69/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:06.941 [70/710] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:06.941 [71/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:06.941 [72/710] 
Linking static target lib/librte_pci.a 00:02:07.203 [73/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:07.203 [74/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:07.203 [75/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:07.203 [76/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:07.203 [77/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:07.203 [78/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:07.203 [79/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:07.203 [80/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:07.463 [81/710] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.463 [82/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:07.463 [83/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:07.463 [84/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:07.464 [85/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:07.464 [86/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:07.464 [87/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:07.464 [88/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:07.464 [89/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:07.464 [90/710] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:07.464 [91/710] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:07.464 [92/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:07.464 [93/710] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:07.464 [94/710] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:07.464 [95/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:07.464 [96/710] Linking static target lib/librte_ring.a 00:02:07.727 [97/710] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:07.727 [98/710] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:07.727 [99/710] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:07.727 [100/710] Linking static target lib/librte_meter.a 00:02:07.727 [101/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:07.727 [102/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:07.727 [103/710] Linking static target lib/librte_telemetry.a 00:02:07.727 [104/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:07.727 [105/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:07.727 [106/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:07.727 [107/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:07.727 [108/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:07.727 [109/710] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:07.727 [110/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:07.987 [111/710] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:07.987 [112/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:07.987 [113/710] Compiling C 
object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:07.987 [114/710] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.987 [115/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:07.987 [116/710] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.987 [117/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:07.987 [118/710] Linking static target lib/librte_eal.a 00:02:07.987 [119/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:07.987 [120/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:08.250 [121/710] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:08.250 [122/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:08.250 [123/710] Linking static target lib/librte_net.a 00:02:08.250 [124/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:08.250 [125/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:08.250 [126/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:08.250 [127/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:08.515 [128/710] Linking static target lib/librte_mempool.a 00:02:08.515 [129/710] Linking static target lib/librte_cmdline.a 00:02:08.515 [130/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:08.515 [131/710] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.515 [132/710] Linking target lib/librte_telemetry.so.24.0 00:02:08.516 [133/710] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:08.516 [134/710] Linking static target lib/librte_cfgfile.a 00:02:08.516 [135/710] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:08.516 [136/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:08.516 [137/710] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.781 [138/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:08.781 [139/710] Linking static target lib/librte_metrics.a 00:02:08.781 [140/710] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:08.781 [141/710] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:08.781 [142/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:08.781 [143/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:08.781 [144/710] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:09.047 [145/710] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:09.047 [146/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:09.047 [147/710] Linking static target lib/librte_rcu.a 00:02:09.047 [148/710] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:09.047 [149/710] Linking static target lib/librte_bitratestats.a 00:02:09.047 [150/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:09.047 [151/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:09.047 [152/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:09.047 [153/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:09.047 [154/710] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.315 
[155/710] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:09.315 [156/710] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:09.315 [157/710] Linking static target lib/librte_timer.a 00:02:09.315 [158/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:09.315 [159/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:09.315 [160/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:09.315 [161/710] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.315 [162/710] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.579 [163/710] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.579 [164/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:09.579 [165/710] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.579 [166/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:09.579 [167/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:09.579 [168/710] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:09.579 [169/710] Linking static target lib/librte_bbdev.a 00:02:09.579 [170/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:09.845 [171/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:09.845 [172/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:09.845 [173/710] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.845 [174/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:09.845 [175/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:09.845 [176/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:09.845 [177/710] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.845 [178/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:10.105 [179/710] Linking static target lib/librte_compressdev.a 00:02:10.105 [180/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:10.105 [181/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:10.367 [182/710] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:10.367 [183/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:10.367 [184/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:10.367 [185/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:10.367 [186/710] Linking static target lib/librte_distributor.a 00:02:10.629 [187/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:10.629 [188/710] Linking static target lib/librte_dmadev.a 00:02:10.629 [189/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:10.629 [190/710] Linking static target lib/librte_bpf.a 00:02:10.629 [191/710] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.629 [192/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:10.902 [193/710] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 
00:02:10.902 [194/710] Linking static target lib/librte_dispatcher.a 00:02:10.902 [195/710] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:10.902 [196/710] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:10.902 [197/710] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.902 [198/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:10.902 [199/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:02:10.902 [200/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:10.902 [201/710] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.902 [202/710] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:10.902 [203/710] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:10.902 [204/710] Linking static target lib/librte_gpudev.a 00:02:11.165 [205/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:11.165 [206/710] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:11.165 [207/710] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:11.165 [208/710] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:11.165 [209/710] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:11.165 [210/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:11.165 [211/710] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.165 [212/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:11.165 [213/710] Linking static target lib/librte_gro.a 00:02:11.165 [214/710] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:11.165 [215/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:11.165 [216/710] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.165 [217/710] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:11.165 [218/710] Linking static target lib/librte_jobstats.a 00:02:11.457 [219/710] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:11.457 [220/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:11.457 [221/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:02:11.457 [222/710] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.457 [223/710] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.740 [224/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:11.740 [225/710] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:11.740 [226/710] Linking static target lib/librte_latencystats.a 00:02:11.740 [227/710] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.740 [228/710] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:11.740 [229/710] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:11.741 [230/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:12.022 [231/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:12.022 [232/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:12.022 [233/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 
00:02:12.022 [234/710] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:12.022 [235/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:12.022 [236/710] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:12.022 [237/710] Linking static target lib/librte_ip_frag.a 00:02:12.291 [238/710] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.291 [239/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:12.291 [240/710] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:12.291 [241/710] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:12.552 [242/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:02:12.552 [243/710] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:12.552 [244/710] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.552 [245/710] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:12.552 [246/710] Linking static target lib/librte_gso.a 00:02:12.553 [247/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:12.553 [248/710] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.553 [249/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:02:12.823 [250/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:12.823 [251/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:02:12.823 [252/710] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:12.823 [253/710] Linking static target lib/librte_regexdev.a 00:02:12.823 [254/710] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:12.823 [255/710] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:12.823 [256/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:02:12.823 [257/710] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:12.823 [258/710] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.823 [259/710] Linking static target lib/librte_rawdev.a 00:02:13.087 [260/710] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:13.087 [261/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:02:13.087 [262/710] Linking static target lib/librte_mldev.a 00:02:13.087 [263/710] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:13.087 [264/710] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:13.087 [265/710] Linking static target lib/librte_efd.a 00:02:13.087 [266/710] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:13.087 [267/710] Linking static target lib/acl/libavx2_tmp.a 00:02:13.087 [268/710] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:13.087 [269/710] Linking static target lib/librte_pcapng.a 00:02:13.087 [270/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:13.087 [271/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:13.348 [272/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:13.349 [273/710] Linking static target lib/librte_stack.a 00:02:13.349 [274/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:13.349 [275/710] Linking static target lib/librte_lpm.a 
00:02:13.349 [276/710] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:13.349 [277/710] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:13.614 [278/710] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:13.614 [279/710] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.614 [280/710] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:13.614 [281/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:13.614 [282/710] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:13.614 [283/710] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:13.614 [284/710] Linking static target lib/librte_hash.a 00:02:13.614 [285/710] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.614 [286/710] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.614 [287/710] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.614 [288/710] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:13.614 [289/710] Linking static target lib/librte_reorder.a 00:02:13.880 [290/710] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:02:13.880 [291/710] Linking static target lib/acl/libavx512_tmp.a 00:02:13.880 [292/710] Linking static target lib/librte_acl.a 00:02:13.880 [293/710] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:13.880 [294/710] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.880 [295/710] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:13.880 [296/710] Linking static target lib/librte_power.a 00:02:13.880 [297/710] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:13.880 [298/710] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:13.880 [299/710] Linking static target lib/librte_security.a 00:02:14.147 [300/710] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.147 [301/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:14.147 [302/710] Linking static target lib/librte_mbuf.a 00:02:14.147 [303/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:14.147 [304/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:14.147 [305/710] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:14.147 [306/710] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.147 [307/710] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:14.147 [308/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:14.410 [309/710] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.410 [310/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:14.410 [311/710] Linking static target lib/librte_rib.a 00:02:14.410 [312/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:02:14.411 [313/710] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.411 [314/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:02:14.411 [315/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:02:14.411 [316/710] Compiling C object 
lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:14.411 [317/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:02:14.679 [318/710] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.679 [319/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:14.679 [320/710] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:14.679 [321/710] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:02:14.679 [322/710] Linking static target lib/fib/libtrie_avx512_tmp.a 00:02:14.679 [323/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:14.679 [324/710] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:02:14.679 [325/710] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:02:14.939 [326/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:14.939 [327/710] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.939 [328/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:14.939 [329/710] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.939 [330/710] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.939 [331/710] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.202 [332/710] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:15.466 [333/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:15.466 [334/710] Linking static target lib/librte_eventdev.a 00:02:15.466 [335/710] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:15.466 [336/710] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:15.466 [337/710] Linking static target lib/librte_member.a 00:02:15.466 [338/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:15.729 [339/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:15.729 [340/710] Linking static target lib/librte_cryptodev.a 00:02:15.729 [341/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:15.729 [342/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:15.729 [343/710] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:15.991 [344/710] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:15.991 [345/710] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:15.992 [346/710] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:15.992 [347/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:15.992 [348/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:15.992 [349/710] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.992 [350/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:15.992 [351/710] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:15.992 [352/710] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:15.992 [353/710] Linking static target lib/librte_ethdev.a 00:02:15.992 [354/710] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:15.992 [355/710] Linking static target lib/librte_sched.a 00:02:15.992 [356/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:15.992 
[357/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:16.254 [358/710] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:16.254 [359/710] Linking static target lib/librte_fib.a 00:02:16.254 [360/710] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:16.254 [361/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:16.254 [362/710] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:16.254 [363/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:16.518 [364/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:16.518 [365/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:16.518 [366/710] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:16.518 [367/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:16.518 [368/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:16.518 [369/710] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.518 [370/710] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:16.782 [371/710] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:16.782 [372/710] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.782 [373/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:17.044 [374/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:17.044 [375/710] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:17.044 [376/710] Linking static target lib/librte_pdump.a 00:02:17.044 [377/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:17.044 [378/710] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:17.044 [379/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:17.307 [380/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:02:17.308 [381/710] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:17.308 [382/710] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:17.308 [383/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:17.308 [384/710] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:17.308 [385/710] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:02:17.308 [386/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:17.308 [387/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:17.308 [388/710] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:17.568 [389/710] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.568 [390/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:17.568 [391/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:17.568 [392/710] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:17.568 [393/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:17.568 [394/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:17.568 [395/710] Linking static target lib/librte_table.a 00:02:17.568 [396/710] Linking static target lib/librte_ipsec.a 00:02:17.838 [397/710] Generating lib/cryptodev.sym_chk with a custom command (wrapped 
by meson to capture output) 00:02:17.838 [398/710] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:17.838 [399/710] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:18.103 [400/710] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:02:18.103 [401/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:18.364 [402/710] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.364 [403/710] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:18.364 [404/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:18.626 [405/710] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:18.626 [406/710] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:18.626 [407/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:18.626 [408/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:18.626 [409/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:18.626 [410/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:18.626 [411/710] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:18.889 [412/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:18.889 [413/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:02:18.889 [414/710] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:18.889 [415/710] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.889 [416/710] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.153 [417/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:19.153 [418/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:19.153 [419/710] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:19.153 [420/710] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:19.153 [421/710] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:19.153 [422/710] Linking static target drivers/librte_bus_vdev.a 00:02:19.153 [423/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:19.153 [424/710] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:19.414 [425/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:19.414 [426/710] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:19.414 [427/710] Linking static target lib/librte_port.a 00:02:19.414 [428/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:02:19.414 [429/710] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:19.679 [430/710] Linking static target lib/librte_graph.a 00:02:19.679 [431/710] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:19.679 [432/710] Linking static target drivers/librte_bus_pci.a 00:02:19.679 [433/710] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:02:19.679 [434/710] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.679 [435/710] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:02:19.679 [436/710] Generating drivers/rte_bus_vdev.sym_chk with a custom command 
(wrapped by meson to capture output) 00:02:19.679 [437/710] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:19.679 [438/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:19.679 [439/710] Linking target lib/librte_eal.so.24.0 00:02:19.951 [440/710] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:02:19.951 [441/710] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:19.951 [442/710] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:19.951 [443/710] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:19.951 [444/710] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:02:20.220 [445/710] Linking target lib/librte_ring.so.24.0 00:02:20.220 [446/710] Linking target lib/librte_meter.so.24.0 00:02:20.220 [447/710] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:02:20.220 [448/710] Linking target lib/librte_pci.so.24.0 00:02:20.220 [449/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:20.220 [450/710] Linking target lib/librte_timer.so.24.0 00:02:20.483 [451/710] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:02:20.483 [452/710] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:20.483 [453/710] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:20.483 [454/710] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:20.483 [455/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:20.483 [456/710] Linking target lib/librte_rcu.so.24.0 00:02:20.483 [457/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:20.483 [458/710] Linking target lib/librte_acl.so.24.0 00:02:20.483 [459/710] Linking target lib/librte_mempool.so.24.0 00:02:20.483 [460/710] Linking target lib/librte_cfgfile.so.24.0 00:02:20.483 [461/710] Linking target lib/librte_dmadev.so.24.0 00:02:20.483 [462/710] Linking target lib/librte_jobstats.so.24.0 00:02:20.483 [463/710] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:20.483 [464/710] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.483 [465/710] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.483 [466/710] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:02:20.483 [467/710] Linking target lib/librte_rawdev.so.24.0 00:02:20.483 [468/710] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:20.483 [469/710] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:02:20.484 [470/710] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:20.484 [471/710] Linking static target drivers/librte_mempool_ring.a 00:02:20.484 [472/710] Linking target lib/librte_stack.so.24.0 00:02:20.484 [473/710] Linking target drivers/librte_bus_vdev.so.24.0 00:02:20.748 [474/710] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:02:20.748 [475/710] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:20.748 [476/710] Linking target drivers/librte_bus_pci.so.24.0 00:02:20.748 [477/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:20.748 [478/710] Compiling C object 
drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:02:20.748 [479/710] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:02:20.748 [480/710] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:20.748 [481/710] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:20.748 [482/710] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.748 [483/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:20.748 [484/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:20.748 [485/710] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:20.748 [486/710] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:20.748 [487/710] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:02:20.748 [488/710] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:02:20.748 [489/710] Linking target lib/librte_mbuf.so.24.0 00:02:20.748 [490/710] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:02:20.748 [491/710] Linking target drivers/librte_mempool_ring.so.24.0 00:02:21.011 [492/710] Linking target lib/librte_rib.so.24.0 00:02:21.011 [493/710] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:02:21.011 [494/710] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:02:21.011 [495/710] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:02:21.011 [496/710] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:02:21.011 [497/710] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:21.011 [498/710] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:02:21.011 [499/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:21.273 [500/710] Linking target lib/librte_net.so.24.0 00:02:21.273 [501/710] Linking target lib/librte_bbdev.so.24.0 00:02:21.273 [502/710] Linking target lib/librte_compressdev.so.24.0 00:02:21.273 [503/710] Linking target lib/librte_cryptodev.so.24.0 00:02:21.273 [504/710] Linking target lib/librte_distributor.so.24.0 00:02:21.273 [505/710] Linking target lib/librte_gpudev.so.24.0 00:02:21.273 [506/710] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:02:21.534 [507/710] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:21.534 [508/710] Linking target lib/librte_mldev.so.24.0 00:02:21.534 [509/710] Linking target lib/librte_regexdev.so.24.0 00:02:21.534 [510/710] Linking target lib/librte_reorder.so.24.0 00:02:21.534 [511/710] Linking target lib/librte_hash.so.24.0 00:02:21.534 [512/710] Linking target lib/librte_cmdline.so.24.0 00:02:21.534 [513/710] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:21.534 [514/710] Linking target lib/librte_sched.so.24.0 00:02:21.534 [515/710] Linking target lib/librte_security.so.24.0 00:02:21.534 [516/710] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:02:21.534 [517/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:21.796 [518/710] Linking target lib/librte_fib.so.24.0 00:02:21.796 [519/710] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:02:21.796 [520/710] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:21.796 [521/710] 
Linking target lib/librte_efd.so.24.0 00:02:21.796 [522/710] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:02:21.796 [523/710] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:02:21.796 [524/710] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:21.796 [525/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:21.796 [526/710] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:21.796 [527/710] Linking target lib/librte_lpm.so.24.0 00:02:21.796 [528/710] Linking target lib/librte_member.so.24.0 00:02:22.059 [529/710] Linking target lib/librte_ipsec.so.24.0 00:02:22.059 [530/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:22.059 [531/710] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:02:22.059 [532/710] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:02:22.059 [533/710] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:02:22.059 [534/710] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:22.323 [535/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:22.323 [536/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:22.323 [537/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:22.323 [538/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:22.323 [539/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:22.323 [540/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:22.323 [541/710] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:22.585 [542/710] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:22.585 [543/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:22.585 [544/710] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:22.585 [545/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:22.849 [546/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:22.849 [547/710] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:02:22.849 [548/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:22.849 [549/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:22.849 [550/710] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:23.111 [551/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:23.111 [552/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:02:23.374 [553/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:02:23.374 [554/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:23.374 [555/710] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:23.374 [556/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:23.374 [557/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:02:23.637 [558/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:23.637 [559/710] Compiling C object 
app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:23.899 [560/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:23.899 [561/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:24.163 [562/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:02:24.163 [563/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:24.163 [564/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:24.163 [565/710] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:24.426 [566/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:24.426 [567/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:02:24.426 [568/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:24.426 [569/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:24.685 [570/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:02:24.685 [571/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:02:24.685 [572/710] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.685 [573/710] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:24.685 [574/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:24.685 [575/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:02:24.685 [576/710] Linking target lib/librte_ethdev.so.24.0 00:02:24.685 [577/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:02:24.953 [578/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:02:24.953 [579/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:24.953 [580/710] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:24.953 [581/710] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:24.953 [582/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:02:24.953 [583/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:24.953 [584/710] Linking target lib/librte_metrics.so.24.0 00:02:25.211 [585/710] Linking target lib/librte_bpf.so.24.0 00:02:25.211 [586/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:25.211 [587/710] Linking target lib/librte_eventdev.so.24.0 00:02:25.212 [588/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:02:25.212 [589/710] Linking target lib/librte_gro.so.24.0 00:02:25.212 [590/710] Linking target lib/librte_gso.so.24.0 00:02:25.212 [591/710] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:02:25.212 [592/710] Linking target lib/librte_ip_frag.so.24.0 00:02:25.471 [593/710] Linking target lib/librte_pcapng.so.24.0 00:02:25.471 [594/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:25.471 [595/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:25.471 [596/710] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:02:25.471 [597/710] Linking static target lib/librte_pdcp.a 00:02:25.471 [598/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:25.471 [599/710] 
Linking target lib/librte_bitratestats.so.24.0 00:02:25.471 [600/710] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:02:25.471 [601/710] Linking target lib/librte_latencystats.so.24.0 00:02:25.471 [602/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:25.471 [603/710] Linking target lib/librte_power.so.24.0 00:02:25.472 [604/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:02:25.472 [605/710] Linking target lib/librte_dispatcher.so.24.0 00:02:25.472 [606/710] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:02:25.472 [607/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:25.736 [608/710] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:02:25.736 [609/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:25.736 [610/710] Linking target lib/librte_port.so.24.0 00:02:25.736 [611/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:25.736 [612/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:25.736 [613/710] Linking target lib/librte_pdump.so.24.0 00:02:25.736 [614/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:25.736 [615/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:25.736 [616/710] Linking target lib/librte_graph.so.24.0 00:02:25.996 [617/710] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:02:25.996 [618/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:25.997 [619/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:25.997 [620/710] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.997 [621/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:25.997 [622/710] Linking target lib/librte_table.so.24.0 00:02:25.997 [623/710] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:02:25.997 [624/710] Linking target lib/librte_pdcp.so.24.0 00:02:25.997 [625/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:02:26.257 [626/710] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:26.257 [627/710] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:02:26.257 [628/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:26.257 [629/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:26.257 [630/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:26.257 [631/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:26.824 [632/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:26.824 [633/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:26.824 [634/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:26.824 [635/710] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:27.083 [636/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:27.083 [637/710] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:02:27.083 [638/710] Compiling C object 
app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:27.083 [639/710] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:27.083 [640/710] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:27.083 [641/710] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:27.343 [642/710] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:27.343 [643/710] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:27.343 [644/710] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:27.343 [645/710] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:27.603 [646/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:27.603 [647/710] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:27.603 [648/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:27.603 [649/710] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:27.603 [650/710] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:27.861 [651/710] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:27.861 [652/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:27.861 [653/710] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:28.119 [654/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:28.119 [655/710] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:28.119 [656/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:28.377 [657/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:28.377 [658/710] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:28.377 [659/710] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:28.636 [660/710] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:28.636 [661/710] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:28.636 [662/710] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:28.636 [663/710] Linking static target drivers/librte_net_i40e.a 00:02:28.636 [664/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:28.894 [665/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:28.894 [666/710] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:28.894 [667/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:29.153 [668/710] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.153 [669/710] Linking target drivers/librte_net_i40e.so.24.0 00:02:29.153 [670/710] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:29.411 [671/710] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:02:29.670 [672/710] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:29.670 [673/710] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:29.670 [674/710] Linking static target lib/librte_node.a 00:02:29.928 [675/710] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.186 [676/710] Linking target lib/librte_node.so.24.0 00:02:31.120 [677/710] Compiling C 
object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:31.378 [678/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:31.378 [679/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:02:32.752 [680/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:33.687 [681/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:38.952 [682/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:11.046 [683/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:11.046 [684/710] Linking static target lib/librte_vhost.a 00:03:11.046 [685/710] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.046 [686/710] Linking target lib/librte_vhost.so.24.0 00:03:21.043 [687/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:21.043 [688/710] Linking static target lib/librte_pipeline.a 00:03:21.043 [689/710] Linking target app/dpdk-dumpcap 00:03:21.043 [690/710] Linking target app/dpdk-pdump 00:03:21.043 [691/710] Linking target app/dpdk-test-dma-perf 00:03:21.043 [692/710] Linking target app/dpdk-test-acl 00:03:21.043 [693/710] Linking target app/dpdk-graph 00:03:21.043 [694/710] Linking target app/dpdk-test-bbdev 00:03:21.043 [695/710] Linking target app/dpdk-test-mldev 00:03:21.043 [696/710] Linking target app/dpdk-test-eventdev 00:03:21.043 [697/710] Linking target app/dpdk-test-compress-perf 00:03:21.043 [698/710] Linking target app/dpdk-test-cmdline 00:03:21.043 [699/710] Linking target app/dpdk-proc-info 00:03:21.043 [700/710] Linking target app/dpdk-test-regex 00:03:21.043 [701/710] Linking target app/dpdk-test-pipeline 00:03:21.043 [702/710] Linking target app/dpdk-test-sad 00:03:21.043 [703/710] Linking target app/dpdk-test-fib 00:03:21.043 [704/710] Linking target app/dpdk-test-flow-perf 00:03:21.043 [705/710] Linking target app/dpdk-test-gpudev 00:03:21.043 [706/710] Linking target app/dpdk-test-security-perf 00:03:21.043 [707/710] Linking target app/dpdk-test-crypto-perf 00:03:21.043 [708/710] Linking target app/dpdk-testpmd 00:03:22.945 [709/710] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.204 [710/710] Linking target lib/librte_pipeline.so.24.0 00:03:23.204 14:19:15 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s 00:03:23.204 14:19:15 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:23.204 14:19:15 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install 00:03:23.204 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:03:23.204 [0/1] Installing files. 
00:03:23.467 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:03:23.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 
00:03:23.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:23.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:23.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:23.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:23.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:23.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:23.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:23.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:23.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:23.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:23.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:23.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:23.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:23.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:23.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:23.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:23.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:23.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:23.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:23.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:23.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:23.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:03:23.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:03:23.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:03:23.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:03:23.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:03:23.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:03:23.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:03:23.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:03:23.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:03:23.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:03:23.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:23.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:23.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:23.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:23.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:23.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:23.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:23.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:23.468 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:23.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:23.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:23.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:23.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:23.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:23.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:23.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:23.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:23.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:23.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:23.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:23.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:23.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:23.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:23.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:23.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:23.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:23.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:23.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:23.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:23.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:23.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:23.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:23.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:23.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:23.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:23.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:23.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:23.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:23.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:23.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:23.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:23.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:23.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 
00:03:23.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:23.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:23.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:23.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:23.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:23.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:23.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:23.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:23.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:23.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:23.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:23.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:23.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:23.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:23.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:23.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:23.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:23.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:23.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:23.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:23.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:23.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:23.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:23.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:23.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:23.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:23.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:23.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:23.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:23.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:23.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:23.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:23.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:23.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:23.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:23.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:23.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:03:23.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:03:23.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:03:23.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:23.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:23.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:23.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:23.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:23.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:23.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:23.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:23.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:23.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:23.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:23.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:23.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:23.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:23.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:23.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:23.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:23.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:23.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:23.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:23.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:23.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:23.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:23.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:23.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:23.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:23.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:23.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:23.469 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:23.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:23.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:23.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:23.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:23.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:23.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:23.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:23.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:23.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:03:23.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:03:23.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:23.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:23.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:03:23.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:03:23.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:03:23.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:03:23.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:23.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:23.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:23.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:23.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:23.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.470 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:23.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:23.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:03:23.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:23.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:23.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:23.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:23.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:23.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:23.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:23.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:23.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:23.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.471 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:23.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:23.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:23.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:23.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:23.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:23.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:23.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:23.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:23.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:23.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:23.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:23.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:23.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:03:23.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:03:23.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:23.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:23.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:23.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:23.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:03:23.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:03:23.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:23.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:23.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:23.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:23.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:23.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:23.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:23.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:23.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:23.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:23.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.472 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.472 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:23.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:23.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:23.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:23.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:23.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:03:23.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:23.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:23.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:23.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:23.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:23.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:23.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:23.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:23.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:23.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:23.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:23.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:23.472 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:03:23.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:03:23.472 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.732 Installing lib/librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.732 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.732 Installing lib/librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.732 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.732 Installing lib/librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.732 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.732 Installing lib/librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.732 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.732 Installing lib/librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.732 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.732 Installing lib/librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.732 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.732 Installing lib/librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.732 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.732 Installing lib/librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.732 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.732 Installing lib/librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.732 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.732 Installing lib/librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.732 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.732 Installing lib/librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.732 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.732 Installing lib/librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.732 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.732 Installing lib/librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.732 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.732 Installing lib/librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.732 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.732 Installing lib/librte_hash.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.732 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.732 Installing lib/librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.732 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.732 Installing lib/librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.732 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.732 Installing lib/librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.732 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.732 Installing lib/librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.732 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.732 Installing lib/librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.732 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.732 Installing lib/librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.732 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.732 Installing lib/librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.732 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.732 Installing lib/librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.732 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.732 Installing lib/librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.732 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.732 Installing lib/librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.733 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.733 Installing lib/librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.733 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.733 Installing lib/librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.733 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.733 Installing lib/librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.733 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.733 Installing lib/librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.733 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.733 Installing lib/librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.733 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.733 Installing lib/librte_gso.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.733 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.733 Installing lib/librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.733 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.733 Installing lib/librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.733 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.733 Installing lib/librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.733 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.733 Installing lib/librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.733 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.733 Installing lib/librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.733 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.733 Installing lib/librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.733 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.733 Installing lib/librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.733 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.733 Installing lib/librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.733 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.733 Installing lib/librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.733 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.733 Installing lib/librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.733 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.733 Installing lib/librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.733 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.305 Installing lib/librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.305 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.305 Installing lib/librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.305 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.305 Installing lib/librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.305 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.305 Installing lib/librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.305 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.305 Installing lib/librte_vhost.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.305 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.305 Installing lib/librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.305 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.305 Installing lib/librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.305 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.305 Installing lib/librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.305 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.305 Installing lib/librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.305 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.305 Installing lib/librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.305 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.305 Installing lib/librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.305 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.305 Installing lib/librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.305 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.305 Installing lib/librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.305 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.305 Installing lib/librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.305 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.305 Installing drivers/librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:03:24.305 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.305 Installing drivers/librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:03:24.305 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.305 Installing drivers/librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:03:24.305 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.305 Installing drivers/librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:03:24.305 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:24.305 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:24.305 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:24.305 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:24.305 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:24.305 Installing 
app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:24.305 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:24.305 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:24.305 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:24.305 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:24.305 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:24.305 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:24.305 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:24.305 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:24.305 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:24.305 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:24.305 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:24.305 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:24.305 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:24.305 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:24.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:24.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:24.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:24.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:24.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:24.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:24.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:24.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:24.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:24.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:24.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:24.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:24.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.308 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.309 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:24.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:24.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:24.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:24.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:24.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:24.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:03:24.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:03:24.309 Installing symlink pointing to librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24 00:03:24.309 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:03:24.309 Installing symlink pointing to librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:03:24.309 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:03:24.309 Installing symlink pointing to librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:03:24.309 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:03:24.309 Installing symlink pointing to librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:03:24.309 Installing symlink pointing to librte_eal.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:03:24.309 Installing symlink pointing to librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:03:24.309 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:03:24.309 Installing symlink pointing to librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:03:24.309 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:03:24.309 Installing symlink pointing to librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:03:24.309 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:03:24.309 Installing symlink pointing to librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:03:24.309 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:03:24.309 Installing symlink pointing to librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24 00:03:24.309 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:03:24.310 Installing symlink pointing to librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:03:24.310 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:03:24.310 Installing symlink pointing to librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:03:24.310 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:03:24.310 Installing symlink pointing to librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:03:24.310 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:03:24.310 Installing symlink pointing to librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:03:24.310 Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:03:24.310 Installing symlink pointing to librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:03:24.310 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:03:24.310 Installing symlink pointing to librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:03:24.310 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:03:24.310 Installing symlink pointing to librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:03:24.310 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:03:24.310 
Installing symlink pointing to librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:03:24.310 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:03:24.310 Installing symlink pointing to librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:03:24.310 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:03:24.310 Installing symlink pointing to librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:03:24.310 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:03:24.310 Installing symlink pointing to librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:03:24.310 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:03:24.310 Installing symlink pointing to librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:03:24.310 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:03:24.310 Installing symlink pointing to librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:03:24.310 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:03:24.310 Installing symlink pointing to librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:03:24.310 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:03:24.310 Installing symlink pointing to librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:03:24.310 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:03:24.310 Installing symlink pointing to librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:03:24.310 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:03:24.310 Installing symlink pointing to librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:03:24.310 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:03:24.310 Installing symlink pointing to librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:03:24.310 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:03:24.310 Installing symlink pointing to librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:03:24.310 Installing symlink pointing to librte_dispatcher.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:03:24.310 Installing symlink pointing to librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:03:24.310 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:03:24.310 Installing symlink pointing to librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:03:24.310 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:03:24.310 Installing symlink pointing to librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:03:24.310 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:03:24.310 Installing symlink pointing to librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:03:24.310 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:03:24.310 Installing symlink pointing to librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:03:24.310 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:03:24.310 Installing symlink pointing to librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:03:24.310 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:03:24.310 Installing symlink pointing to librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:03:24.310 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:03:24.310 Installing symlink pointing to librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24 00:03:24.310 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:03:24.310 Installing symlink pointing to librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:03:24.310 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:03:24.310 Installing symlink pointing to librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24 00:03:24.310 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:03:24.310 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:03:24.310 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:03:24.310 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:03:24.310 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:03:24.310 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:03:24.310 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:03:24.310 './librte_mempool_ring.so' -> 
'dpdk/pmds-24.0/librte_mempool_ring.so' 00:03:24.310 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:03:24.310 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:03:24.310 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:03:24.310 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:03:24.310 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:03:24.310 Installing symlink pointing to librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:03:24.310 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:03:24.310 Installing symlink pointing to librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:03:24.310 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:03:24.310 Installing symlink pointing to librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:03:24.310 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:03:24.310 Installing symlink pointing to librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:03:24.310 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:03:24.310 Installing symlink pointing to librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:03:24.310 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:03:24.310 Installing symlink pointing to librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:03:24.310 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:03:24.310 Installing symlink pointing to librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24 00:03:24.310 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:03:24.310 Installing symlink pointing to librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:03:24.310 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:03:24.310 Installing symlink pointing to librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:03:24.310 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:03:24.310 Installing symlink pointing to librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:03:24.310 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:03:24.310 Installing symlink pointing to librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:03:24.310 Installing 
symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:03:24.310 Installing symlink pointing to librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:03:24.310 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:03:24.310 Installing symlink pointing to librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24 00:03:24.310 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:03:24.310 Installing symlink pointing to librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:03:24.310 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:03:24.311 Installing symlink pointing to librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24 00:03:24.311 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:03:24.311 Installing symlink pointing to librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:03:24.311 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:03:24.311 Installing symlink pointing to librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:03:24.311 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:03:24.311 Installing symlink pointing to librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24 00:03:24.311 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:03:24.311 Installing symlink pointing to librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:03:24.311 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:03:24.311 Installing symlink pointing to librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:03:24.311 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:03:24.311 Installing symlink pointing to librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:03:24.311 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:03:24.311 Installing symlink pointing to librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:03:24.311 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:03:24.311 Running custom install script '/bin/sh 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0'
00:03:24.311 14:19:16 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat
00:03:24.311 14:19:16 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:24.311 
00:03:24.311 real 1m25.884s
00:03:24.311 user 18m2.979s
00:03:24.311 sys 2m10.054s
00:03:24.311 14:19:16 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:03:24.311 14:19:16 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x
00:03:24.311 ************************************
00:03:24.311 END TEST build_native_dpdk
00:03:24.311 ************************************
00:03:24.311 14:19:16 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:03:24.311 14:19:16 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:03:24.311 14:19:16 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:03:24.311 14:19:16 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:03:24.311 14:19:16 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:03:24.311 14:19:16 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:03:24.311 14:19:16 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:03:24.311 14:19:16 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared
00:03:24.569 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs...
00:03:24.569 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:24.569 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:24.569 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:03:24.829 Using 'verbs' RDMA provider
00:03:35.372 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:03:45.351 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:03:45.351 Creating mk/config.mk...done.
00:03:45.351 Creating mk/cc.flags.mk...done.
00:03:45.351 Type 'make' to build.
00:03:45.351 14:19:36 -- spdk/autobuild.sh@70 -- $ run_test make make -j48
00:03:45.351 14:19:36 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:03:45.351 14:19:36 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:03:45.351 14:19:36 -- common/autotest_common.sh@10 -- $ set +x
00:03:45.351 ************************************
00:03:45.351 START TEST make
00:03:45.351 ************************************
00:03:45.351 14:19:36 make -- common/autotest_common.sh@1125 -- $ make -j48
00:03:45.351 make[1]: Nothing to be done for 'all'.
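For anyone replaying this stage outside the CI job: the configure call above consumes the DPDK tree that the preceding install step staged under dpdk/build, locating it through the pkg-config files placed in build/lib/pkgconfig. A minimal sketch of the same wiring, assuming the workspace layout shown in this log (only a subset of the configure flags is repeated here, and the CI job actually drives these steps through its own scripts):

  # Stage DPDK into a private prefix, as the install step above did
  meson setup dpdk/build-tmp dpdk --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
  ninja -C dpdk/build-tmp install
  # Verify the staged libdpdk.pc is resolvable, matching "Using ... pkgconfig for additional libs" above
  PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig pkg-config --modversion libdpdk
  # Point SPDK's configure at that prefix and build with the same job width as the log
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./configure --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared
  make -j48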
00:03:46.743 The Meson build system 00:03:46.743 Version: 1.5.0 00:03:46.743 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:03:46.743 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:46.743 Build type: native build 00:03:46.743 Project name: libvfio-user 00:03:46.743 Project version: 0.0.1 00:03:46.743 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:46.743 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:46.743 Host machine cpu family: x86_64 00:03:46.743 Host machine cpu: x86_64 00:03:46.743 Run-time dependency threads found: YES 00:03:46.743 Library dl found: YES 00:03:46.743 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:46.743 Run-time dependency json-c found: YES 0.17 00:03:46.743 Run-time dependency cmocka found: YES 1.1.7 00:03:46.743 Program pytest-3 found: NO 00:03:46.743 Program flake8 found: NO 00:03:46.743 Program misspell-fixer found: NO 00:03:46.743 Program restructuredtext-lint found: NO 00:03:46.743 Program valgrind found: YES (/usr/bin/valgrind) 00:03:46.743 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:46.743 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:46.743 Compiler for C supports arguments -Wwrite-strings: YES 00:03:46.743 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:46.743 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:03:46.743 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:03:46.743 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:03:46.743 Build targets in project: 8 00:03:46.743 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:46.743 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:46.743 00:03:46.743 libvfio-user 0.0.1 00:03:46.743 00:03:46.743 User defined options 00:03:46.743 buildtype : debug 00:03:46.743 default_library: shared 00:03:46.743 libdir : /usr/local/lib 00:03:46.743 00:03:46.743 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:47.688 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:47.688 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:03:47.688 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:03:47.688 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:03:47.688 [4/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:47.688 [5/37] Compiling C object samples/null.p/null.c.o 00:03:47.949 [6/37] Compiling C object samples/lspci.p/lspci.c.o 00:03:47.949 [7/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:47.949 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:03:47.949 [9/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:47.949 [10/37] Compiling C object test/unit_tests.p/mocks.c.o 00:03:47.949 [11/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:47.949 [12/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:47.949 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:03:47.949 [14/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:47.949 [15/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:47.949 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:03:47.949 [17/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:47.949 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:03:47.949 [19/37] Compiling C object samples/server.p/server.c.o 00:03:47.949 [20/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:47.949 [21/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:47.949 [22/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:47.949 [23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:47.949 [24/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:47.949 [25/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:03:47.949 [26/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:47.949 [27/37] Compiling C object samples/client.p/client.c.o 00:03:47.949 [28/37] Linking target lib/libvfio-user.so.0.0.1 00:03:47.949 [29/37] Linking target samples/client 00:03:48.208 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:48.208 [31/37] Linking target test/unit_tests 00:03:48.208 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:03:48.208 [33/37] Linking target samples/server 00:03:48.208 [34/37] Linking target samples/null 00:03:48.208 [35/37] Linking target samples/lspci 00:03:48.208 [36/37] Linking target samples/gpio-pci-idio-16 00:03:48.208 [37/37] Linking target samples/shadow_ioeventfd_server 00:03:48.469 INFO: autodetecting backend as ninja 00:03:48.469 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
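The libvfio-user sub-build configured above (and installed just below) is a stock Meson/Ninja flow; a minimal equivalent, with $SPDK standing in for the spdk checkout path, would look roughly like this:

# Configure, build and stage libvfio-user the way the log shows,
# assuming $SPDK points at the spdk checkout used in this job.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
meson setup "$SPDK/build/libvfio-user/build-debug" "$SPDK/libvfio-user" \
  --buildtype debug --default-library shared --libdir /usr/local/lib
ninja -C "$SPDK/build/libvfio-user/build-debug"
# Stage into the SPDK build tree rather than the system /usr/local.
DESTDIR="$SPDK/build/libvfio-user" meson install --quiet \
  -C "$SPDK/build/libvfio-user/build-debug"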
00:03:48.469 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:49.415 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:49.415 ninja: no work to do. 00:04:28.114 CC lib/ut_mock/mock.o 00:04:28.114 CC lib/log/log.o 00:04:28.114 CC lib/log/log_flags.o 00:04:28.114 CC lib/log/log_deprecated.o 00:04:28.114 CC lib/ut/ut.o 00:04:28.114 LIB libspdk_log.a 00:04:28.114 LIB libspdk_ut.a 00:04:28.114 LIB libspdk_ut_mock.a 00:04:28.114 SO libspdk_ut.so.2.0 00:04:28.114 SO libspdk_log.so.7.0 00:04:28.114 SO libspdk_ut_mock.so.6.0 00:04:28.114 SYMLINK libspdk_ut.so 00:04:28.114 SYMLINK libspdk_ut_mock.so 00:04:28.114 SYMLINK libspdk_log.so 00:04:28.114 CXX lib/trace_parser/trace.o 00:04:28.114 CC lib/dma/dma.o 00:04:28.114 CC lib/ioat/ioat.o 00:04:28.114 CC lib/util/base64.o 00:04:28.114 CC lib/util/bit_array.o 00:04:28.114 CC lib/util/cpuset.o 00:04:28.114 CC lib/util/crc16.o 00:04:28.114 CC lib/util/crc32.o 00:04:28.114 CC lib/util/crc32c.o 00:04:28.114 CC lib/util/crc32_ieee.o 00:04:28.114 CC lib/util/crc64.o 00:04:28.114 CC lib/util/dif.o 00:04:28.114 CC lib/util/fd.o 00:04:28.114 CC lib/util/fd_group.o 00:04:28.114 CC lib/util/file.o 00:04:28.114 CC lib/util/hexlify.o 00:04:28.114 CC lib/util/iov.o 00:04:28.114 CC lib/util/math.o 00:04:28.114 CC lib/util/net.o 00:04:28.114 CC lib/util/pipe.o 00:04:28.114 CC lib/util/strerror_tls.o 00:04:28.114 CC lib/util/string.o 00:04:28.114 CC lib/util/uuid.o 00:04:28.114 CC lib/util/zipf.o 00:04:28.114 CC lib/util/xor.o 00:04:28.114 CC lib/util/md5.o 00:04:28.114 CC lib/vfio_user/host/vfio_user_pci.o 00:04:28.114 CC lib/vfio_user/host/vfio_user.o 00:04:28.114 LIB libspdk_dma.a 00:04:28.114 SO libspdk_dma.so.5.0 00:04:28.114 LIB libspdk_ioat.a 00:04:28.114 SO libspdk_ioat.so.7.0 00:04:28.114 SYMLINK libspdk_dma.so 00:04:28.114 SYMLINK libspdk_ioat.so 00:04:28.114 LIB libspdk_vfio_user.a 00:04:28.114 SO libspdk_vfio_user.so.5.0 00:04:28.114 SYMLINK libspdk_vfio_user.so 00:04:28.114 LIB libspdk_util.a 00:04:28.114 SO libspdk_util.so.10.0 00:04:28.114 SYMLINK libspdk_util.so 00:04:28.114 CC lib/rdma_provider/common.o 00:04:28.114 CC lib/env_dpdk/env.o 00:04:28.114 CC lib/conf/conf.o 00:04:28.114 CC lib/idxd/idxd.o 00:04:28.114 CC lib/vmd/vmd.o 00:04:28.114 CC lib/rdma_utils/rdma_utils.o 00:04:28.114 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:28.114 CC lib/env_dpdk/memory.o 00:04:28.114 CC lib/json/json_parse.o 00:04:28.114 CC lib/idxd/idxd_user.o 00:04:28.114 CC lib/json/json_util.o 00:04:28.114 CC lib/env_dpdk/pci.o 00:04:28.114 CC lib/vmd/led.o 00:04:28.114 CC lib/json/json_write.o 00:04:28.114 CC lib/idxd/idxd_kernel.o 00:04:28.114 CC lib/env_dpdk/init.o 00:04:28.114 CC lib/env_dpdk/threads.o 00:04:28.114 CC lib/env_dpdk/pci_ioat.o 00:04:28.114 CC lib/env_dpdk/pci_virtio.o 00:04:28.114 CC lib/env_dpdk/pci_vmd.o 00:04:28.114 CC lib/env_dpdk/pci_idxd.o 00:04:28.114 CC lib/env_dpdk/pci_event.o 00:04:28.114 CC lib/env_dpdk/sigbus_handler.o 00:04:28.114 CC lib/env_dpdk/pci_dpdk.o 00:04:28.114 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:28.114 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:28.114 LIB libspdk_trace_parser.a 00:04:28.114 SO libspdk_trace_parser.so.6.0 00:04:28.114 SYMLINK libspdk_trace_parser.so 00:04:28.114 LIB libspdk_conf.a 00:04:28.114 SO libspdk_conf.so.6.0 00:04:28.114 LIB libspdk_rdma_utils.a 00:04:28.114 LIB libspdk_rdma_provider.a 00:04:28.114 LIB 
libspdk_json.a 00:04:28.114 SO libspdk_rdma_utils.so.1.0 00:04:28.114 SO libspdk_rdma_provider.so.6.0 00:04:28.114 SYMLINK libspdk_conf.so 00:04:28.114 SO libspdk_json.so.6.0 00:04:28.114 SYMLINK libspdk_rdma_utils.so 00:04:28.114 SYMLINK libspdk_rdma_provider.so 00:04:28.114 SYMLINK libspdk_json.so 00:04:28.114 CC lib/jsonrpc/jsonrpc_server.o 00:04:28.114 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:28.114 CC lib/jsonrpc/jsonrpc_client.o 00:04:28.114 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:28.114 LIB libspdk_idxd.a 00:04:28.114 SO libspdk_idxd.so.12.1 00:04:28.114 SYMLINK libspdk_idxd.so 00:04:28.114 LIB libspdk_vmd.a 00:04:28.114 SO libspdk_vmd.so.6.0 00:04:28.114 LIB libspdk_jsonrpc.a 00:04:28.115 SYMLINK libspdk_vmd.so 00:04:28.115 SO libspdk_jsonrpc.so.6.0 00:04:28.115 SYMLINK libspdk_jsonrpc.so 00:04:28.115 CC lib/rpc/rpc.o 00:04:28.115 LIB libspdk_rpc.a 00:04:28.115 SO libspdk_rpc.so.6.0 00:04:28.115 SYMLINK libspdk_rpc.so 00:04:28.115 CC lib/notify/notify.o 00:04:28.115 CC lib/trace/trace.o 00:04:28.115 CC lib/notify/notify_rpc.o 00:04:28.115 CC lib/trace/trace_flags.o 00:04:28.115 CC lib/keyring/keyring.o 00:04:28.115 CC lib/trace/trace_rpc.o 00:04:28.115 CC lib/keyring/keyring_rpc.o 00:04:28.115 LIB libspdk_notify.a 00:04:28.115 SO libspdk_notify.so.6.0 00:04:28.372 LIB libspdk_keyring.a 00:04:28.373 SYMLINK libspdk_notify.so 00:04:28.373 LIB libspdk_trace.a 00:04:28.373 SO libspdk_keyring.so.2.0 00:04:28.373 SO libspdk_trace.so.11.0 00:04:28.373 SYMLINK libspdk_keyring.so 00:04:28.373 SYMLINK libspdk_trace.so 00:04:28.631 CC lib/thread/thread.o 00:04:28.631 CC lib/thread/iobuf.o 00:04:28.631 CC lib/sock/sock.o 00:04:28.631 CC lib/sock/sock_rpc.o 00:04:28.631 LIB libspdk_env_dpdk.a 00:04:28.631 SO libspdk_env_dpdk.so.15.0 00:04:28.631 SYMLINK libspdk_env_dpdk.so 00:04:28.890 LIB libspdk_sock.a 00:04:28.890 SO libspdk_sock.so.10.0 00:04:28.890 SYMLINK libspdk_sock.so 00:04:29.148 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:29.148 CC lib/nvme/nvme_ctrlr.o 00:04:29.148 CC lib/nvme/nvme_fabric.o 00:04:29.148 CC lib/nvme/nvme_ns_cmd.o 00:04:29.148 CC lib/nvme/nvme_ns.o 00:04:29.148 CC lib/nvme/nvme_pcie_common.o 00:04:29.148 CC lib/nvme/nvme_pcie.o 00:04:29.148 CC lib/nvme/nvme_qpair.o 00:04:29.148 CC lib/nvme/nvme.o 00:04:29.148 CC lib/nvme/nvme_quirks.o 00:04:29.148 CC lib/nvme/nvme_transport.o 00:04:29.148 CC lib/nvme/nvme_discovery.o 00:04:29.148 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:29.148 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:29.148 CC lib/nvme/nvme_tcp.o 00:04:29.148 CC lib/nvme/nvme_opal.o 00:04:29.148 CC lib/nvme/nvme_io_msg.o 00:04:29.148 CC lib/nvme/nvme_poll_group.o 00:04:29.148 CC lib/nvme/nvme_zns.o 00:04:29.148 CC lib/nvme/nvme_stubs.o 00:04:29.148 CC lib/nvme/nvme_auth.o 00:04:29.148 CC lib/nvme/nvme_cuse.o 00:04:29.148 CC lib/nvme/nvme_vfio_user.o 00:04:29.148 CC lib/nvme/nvme_rdma.o 00:04:30.083 LIB libspdk_thread.a 00:04:30.083 SO libspdk_thread.so.10.1 00:04:30.342 SYMLINK libspdk_thread.so 00:04:30.342 CC lib/fsdev/fsdev.o 00:04:30.342 CC lib/virtio/virtio.o 00:04:30.342 CC lib/blob/blobstore.o 00:04:30.342 CC lib/init/json_config.o 00:04:30.342 CC lib/blob/request.o 00:04:30.342 CC lib/virtio/virtio_vhost_user.o 00:04:30.342 CC lib/fsdev/fsdev_io.o 00:04:30.342 CC lib/init/subsystem.o 00:04:30.342 CC lib/fsdev/fsdev_rpc.o 00:04:30.342 CC lib/blob/zeroes.o 00:04:30.342 CC lib/virtio/virtio_vfio_user.o 00:04:30.342 CC lib/init/subsystem_rpc.o 00:04:30.342 CC lib/blob/blob_bs_dev.o 00:04:30.342 CC lib/virtio/virtio_pci.o 00:04:30.342 CC lib/init/rpc.o 00:04:30.342 CC 
lib/vfu_tgt/tgt_endpoint.o 00:04:30.342 CC lib/vfu_tgt/tgt_rpc.o 00:04:30.342 CC lib/accel/accel.o 00:04:30.342 CC lib/accel/accel_rpc.o 00:04:30.342 CC lib/accel/accel_sw.o 00:04:30.601 LIB libspdk_init.a 00:04:30.859 SO libspdk_init.so.6.0 00:04:30.859 LIB libspdk_virtio.a 00:04:30.859 SYMLINK libspdk_init.so 00:04:30.859 LIB libspdk_vfu_tgt.a 00:04:30.859 SO libspdk_vfu_tgt.so.3.0 00:04:30.859 SO libspdk_virtio.so.7.0 00:04:30.859 SYMLINK libspdk_vfu_tgt.so 00:04:30.859 SYMLINK libspdk_virtio.so 00:04:30.859 CC lib/event/app.o 00:04:30.859 CC lib/event/reactor.o 00:04:30.859 CC lib/event/log_rpc.o 00:04:30.859 CC lib/event/app_rpc.o 00:04:30.859 CC lib/event/scheduler_static.o 00:04:31.117 LIB libspdk_fsdev.a 00:04:31.117 SO libspdk_fsdev.so.1.0 00:04:31.117 SYMLINK libspdk_fsdev.so 00:04:31.376 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:31.376 LIB libspdk_event.a 00:04:31.376 SO libspdk_event.so.14.0 00:04:31.376 SYMLINK libspdk_event.so 00:04:31.634 LIB libspdk_accel.a 00:04:31.634 SO libspdk_accel.so.16.0 00:04:31.634 LIB libspdk_nvme.a 00:04:31.634 SYMLINK libspdk_accel.so 00:04:31.892 SO libspdk_nvme.so.14.0 00:04:31.892 CC lib/bdev/bdev.o 00:04:31.892 CC lib/bdev/bdev_rpc.o 00:04:31.892 CC lib/bdev/bdev_zone.o 00:04:31.892 CC lib/bdev/part.o 00:04:31.892 CC lib/bdev/scsi_nvme.o 00:04:31.892 SYMLINK libspdk_nvme.so 00:04:32.151 LIB libspdk_fuse_dispatcher.a 00:04:32.151 SO libspdk_fuse_dispatcher.so.1.0 00:04:32.151 SYMLINK libspdk_fuse_dispatcher.so 00:04:33.525 LIB libspdk_blob.a 00:04:33.783 SO libspdk_blob.so.11.0 00:04:33.783 SYMLINK libspdk_blob.so 00:04:33.783 CC lib/blobfs/blobfs.o 00:04:33.783 CC lib/blobfs/tree.o 00:04:33.783 CC lib/lvol/lvol.o 00:04:34.805 LIB libspdk_bdev.a 00:04:34.805 SO libspdk_bdev.so.16.0 00:04:34.805 SYMLINK libspdk_bdev.so 00:04:34.805 LIB libspdk_blobfs.a 00:04:34.805 SO libspdk_blobfs.so.10.0 00:04:34.805 SYMLINK libspdk_blobfs.so 00:04:34.805 CC lib/nbd/nbd.o 00:04:34.805 CC lib/nvmf/ctrlr.o 00:04:34.805 CC lib/nbd/nbd_rpc.o 00:04:34.805 CC lib/ftl/ftl_core.o 00:04:34.806 CC lib/ftl/ftl_init.o 00:04:34.806 CC lib/nvmf/ctrlr_discovery.o 00:04:34.806 CC lib/nvmf/ctrlr_bdev.o 00:04:34.806 CC lib/ftl/ftl_layout.o 00:04:34.806 CC lib/nvmf/subsystem.o 00:04:34.806 CC lib/ublk/ublk.o 00:04:34.806 CC lib/ftl/ftl_debug.o 00:04:34.806 CC lib/nvmf/nvmf.o 00:04:34.806 CC lib/ftl/ftl_io.o 00:04:34.806 CC lib/scsi/dev.o 00:04:34.806 CC lib/ublk/ublk_rpc.o 00:04:34.806 CC lib/nvmf/nvmf_rpc.o 00:04:34.806 CC lib/ftl/ftl_sb.o 00:04:34.806 CC lib/nvmf/transport.o 00:04:34.806 CC lib/scsi/lun.o 00:04:34.806 CC lib/scsi/port.o 00:04:34.806 CC lib/ftl/ftl_l2p.o 00:04:34.806 CC lib/nvmf/tcp.o 00:04:34.806 CC lib/ftl/ftl_l2p_flat.o 00:04:34.806 CC lib/ftl/ftl_nv_cache.o 00:04:34.806 CC lib/scsi/scsi.o 00:04:34.806 CC lib/nvmf/stubs.o 00:04:34.806 CC lib/nvmf/mdns_server.o 00:04:34.806 CC lib/scsi/scsi_bdev.o 00:04:34.806 CC lib/ftl/ftl_band.o 00:04:34.806 CC lib/scsi/scsi_pr.o 00:04:34.806 CC lib/nvmf/vfio_user.o 00:04:34.806 CC lib/ftl/ftl_band_ops.o 00:04:34.806 CC lib/scsi/scsi_rpc.o 00:04:34.806 CC lib/nvmf/rdma.o 00:04:34.806 CC lib/ftl/ftl_writer.o 00:04:34.806 CC lib/scsi/task.o 00:04:34.806 CC lib/nvmf/auth.o 00:04:34.806 CC lib/ftl/ftl_rq.o 00:04:34.806 CC lib/ftl/ftl_reloc.o 00:04:34.806 CC lib/ftl/ftl_l2p_cache.o 00:04:34.806 CC lib/ftl/ftl_p2l.o 00:04:34.806 CC lib/ftl/ftl_p2l_log.o 00:04:34.806 CC lib/ftl/mngt/ftl_mngt.o 00:04:34.806 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:34.806 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:34.806 CC 
lib/ftl/mngt/ftl_mngt_startup.o 00:04:34.806 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:35.065 LIB libspdk_lvol.a 00:04:35.065 SO libspdk_lvol.so.10.0 00:04:35.327 SYMLINK libspdk_lvol.so 00:04:35.327 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:35.327 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:35.327 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:35.327 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:35.327 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:35.327 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:35.327 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:35.327 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:35.327 CC lib/ftl/utils/ftl_conf.o 00:04:35.327 CC lib/ftl/utils/ftl_md.o 00:04:35.327 CC lib/ftl/utils/ftl_mempool.o 00:04:35.327 CC lib/ftl/utils/ftl_bitmap.o 00:04:35.327 CC lib/ftl/utils/ftl_property.o 00:04:35.327 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:35.327 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:35.327 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:35.327 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:35.327 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:35.327 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:35.586 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:35.586 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:35.586 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:35.586 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:35.586 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:35.586 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:35.586 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:35.586 CC lib/ftl/base/ftl_base_dev.o 00:04:35.586 CC lib/ftl/base/ftl_base_bdev.o 00:04:35.586 CC lib/ftl/ftl_trace.o 00:04:35.586 LIB libspdk_nbd.a 00:04:35.845 SO libspdk_nbd.so.7.0 00:04:35.845 SYMLINK libspdk_nbd.so 00:04:35.845 LIB libspdk_scsi.a 00:04:35.845 SO libspdk_scsi.so.9.0 00:04:35.845 SYMLINK libspdk_scsi.so 00:04:36.104 LIB libspdk_ublk.a 00:04:36.104 SO libspdk_ublk.so.3.0 00:04:36.104 SYMLINK libspdk_ublk.so 00:04:36.104 CC lib/iscsi/conn.o 00:04:36.104 CC lib/vhost/vhost.o 00:04:36.104 CC lib/vhost/vhost_rpc.o 00:04:36.104 CC lib/iscsi/init_grp.o 00:04:36.104 CC lib/vhost/vhost_scsi.o 00:04:36.104 CC lib/vhost/vhost_blk.o 00:04:36.104 CC lib/iscsi/iscsi.o 00:04:36.104 CC lib/iscsi/param.o 00:04:36.104 CC lib/vhost/rte_vhost_user.o 00:04:36.104 CC lib/iscsi/portal_grp.o 00:04:36.104 CC lib/iscsi/tgt_node.o 00:04:36.104 CC lib/iscsi/iscsi_subsystem.o 00:04:36.104 CC lib/iscsi/iscsi_rpc.o 00:04:36.104 CC lib/iscsi/task.o 00:04:36.362 LIB libspdk_ftl.a 00:04:36.620 SO libspdk_ftl.so.9.0 00:04:36.878 SYMLINK libspdk_ftl.so 00:04:37.445 LIB libspdk_vhost.a 00:04:37.445 SO libspdk_vhost.so.8.0 00:04:37.445 SYMLINK libspdk_vhost.so 00:04:37.445 LIB libspdk_nvmf.a 00:04:37.703 SO libspdk_nvmf.so.19.0 00:04:37.703 LIB libspdk_iscsi.a 00:04:37.703 SO libspdk_iscsi.so.8.0 00:04:37.703 SYMLINK libspdk_nvmf.so 00:04:37.703 SYMLINK libspdk_iscsi.so 00:04:37.962 CC module/vfu_device/vfu_virtio.o 00:04:37.962 CC module/vfu_device/vfu_virtio_blk.o 00:04:37.962 CC module/vfu_device/vfu_virtio_scsi.o 00:04:37.962 CC module/vfu_device/vfu_virtio_rpc.o 00:04:37.962 CC module/env_dpdk/env_dpdk_rpc.o 00:04:37.962 CC module/vfu_device/vfu_virtio_fs.o 00:04:38.220 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:38.220 CC module/keyring/linux/keyring.o 00:04:38.220 CC module/accel/ioat/accel_ioat.o 00:04:38.220 CC module/accel/iaa/accel_iaa.o 00:04:38.220 CC module/keyring/linux/keyring_rpc.o 00:04:38.220 CC module/accel/ioat/accel_ioat_rpc.o 00:04:38.220 CC module/accel/iaa/accel_iaa_rpc.o 00:04:38.220 CC module/scheduler/gscheduler/gscheduler.o 00:04:38.220 CC module/blob/bdev/blob_bdev.o 00:04:38.220 CC 
module/scheduler/dpdk_governor/dpdk_governor.o 00:04:38.220 CC module/keyring/file/keyring.o 00:04:38.220 CC module/sock/posix/posix.o 00:04:38.220 CC module/keyring/file/keyring_rpc.o 00:04:38.220 CC module/accel/dsa/accel_dsa.o 00:04:38.220 CC module/accel/dsa/accel_dsa_rpc.o 00:04:38.220 CC module/fsdev/aio/fsdev_aio.o 00:04:38.220 CC module/accel/error/accel_error.o 00:04:38.220 CC module/accel/error/accel_error_rpc.o 00:04:38.220 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:38.220 CC module/fsdev/aio/linux_aio_mgr.o 00:04:38.220 LIB libspdk_env_dpdk_rpc.a 00:04:38.220 SO libspdk_env_dpdk_rpc.so.6.0 00:04:38.220 LIB libspdk_keyring_linux.a 00:04:38.220 LIB libspdk_keyring_file.a 00:04:38.220 LIB libspdk_scheduler_gscheduler.a 00:04:38.479 SYMLINK libspdk_env_dpdk_rpc.so 00:04:38.479 LIB libspdk_scheduler_dpdk_governor.a 00:04:38.479 SO libspdk_keyring_linux.so.1.0 00:04:38.479 SO libspdk_scheduler_gscheduler.so.4.0 00:04:38.479 SO libspdk_keyring_file.so.2.0 00:04:38.479 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:38.479 LIB libspdk_accel_error.a 00:04:38.479 LIB libspdk_scheduler_dynamic.a 00:04:38.479 LIB libspdk_accel_iaa.a 00:04:38.479 SO libspdk_accel_error.so.2.0 00:04:38.479 LIB libspdk_accel_ioat.a 00:04:38.479 SYMLINK libspdk_keyring_linux.so 00:04:38.479 SYMLINK libspdk_scheduler_gscheduler.so 00:04:38.479 SO libspdk_scheduler_dynamic.so.4.0 00:04:38.479 SYMLINK libspdk_keyring_file.so 00:04:38.479 SO libspdk_accel_iaa.so.3.0 00:04:38.479 SO libspdk_accel_ioat.so.6.0 00:04:38.479 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:38.479 SYMLINK libspdk_accel_error.so 00:04:38.479 SYMLINK libspdk_scheduler_dynamic.so 00:04:38.479 SYMLINK libspdk_accel_iaa.so 00:04:38.479 LIB libspdk_blob_bdev.a 00:04:38.479 SYMLINK libspdk_accel_ioat.so 00:04:38.479 LIB libspdk_accel_dsa.a 00:04:38.479 SO libspdk_blob_bdev.so.11.0 00:04:38.479 SO libspdk_accel_dsa.so.5.0 00:04:38.479 SYMLINK libspdk_blob_bdev.so 00:04:38.479 SYMLINK libspdk_accel_dsa.so 00:04:38.742 LIB libspdk_vfu_device.a 00:04:38.742 SO libspdk_vfu_device.so.3.0 00:04:38.742 CC module/bdev/lvol/vbdev_lvol.o 00:04:38.742 CC module/bdev/gpt/gpt.o 00:04:38.742 CC module/bdev/passthru/vbdev_passthru.o 00:04:38.742 CC module/bdev/gpt/vbdev_gpt.o 00:04:38.742 CC module/bdev/null/bdev_null.o 00:04:38.742 CC module/bdev/nvme/bdev_nvme.o 00:04:38.742 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:38.742 CC module/bdev/error/vbdev_error.o 00:04:38.742 CC module/bdev/null/bdev_null_rpc.o 00:04:38.742 CC module/bdev/malloc/bdev_malloc.o 00:04:38.742 CC module/bdev/error/vbdev_error_rpc.o 00:04:38.742 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:38.742 CC module/bdev/delay/vbdev_delay.o 00:04:38.742 CC module/blobfs/bdev/blobfs_bdev.o 00:04:38.742 CC module/bdev/split/vbdev_split.o 00:04:38.742 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:38.742 CC module/bdev/ftl/bdev_ftl.o 00:04:38.742 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:38.742 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:38.742 CC module/bdev/split/vbdev_split_rpc.o 00:04:38.742 CC module/bdev/nvme/nvme_rpc.o 00:04:38.742 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:38.742 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:38.742 CC module/bdev/aio/bdev_aio.o 00:04:38.742 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:38.742 CC module/bdev/iscsi/bdev_iscsi.o 00:04:38.742 CC module/bdev/aio/bdev_aio_rpc.o 00:04:38.742 CC module/bdev/raid/bdev_raid.o 00:04:38.742 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:38.742 CC module/bdev/nvme/bdev_mdns_client.o 00:04:38.742 CC 
module/bdev/nvme/vbdev_opal.o 00:04:38.742 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:38.742 CC module/bdev/raid/bdev_raid_rpc.o 00:04:38.742 CC module/bdev/raid/bdev_raid_sb.o 00:04:38.742 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:38.742 CC module/bdev/raid/raid0.o 00:04:38.742 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:38.742 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:38.742 CC module/bdev/raid/raid1.o 00:04:38.742 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:38.742 CC module/bdev/raid/concat.o 00:04:38.742 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:39.002 LIB libspdk_fsdev_aio.a 00:04:39.002 SYMLINK libspdk_vfu_device.so 00:04:39.002 SO libspdk_fsdev_aio.so.1.0 00:04:39.002 SYMLINK libspdk_fsdev_aio.so 00:04:39.260 LIB libspdk_sock_posix.a 00:04:39.260 SO libspdk_sock_posix.so.6.0 00:04:39.260 LIB libspdk_blobfs_bdev.a 00:04:39.260 SO libspdk_blobfs_bdev.so.6.0 00:04:39.260 SYMLINK libspdk_sock_posix.so 00:04:39.260 LIB libspdk_bdev_split.a 00:04:39.260 SYMLINK libspdk_blobfs_bdev.so 00:04:39.260 LIB libspdk_bdev_error.a 00:04:39.260 SO libspdk_bdev_split.so.6.0 00:04:39.260 LIB libspdk_bdev_gpt.a 00:04:39.260 SO libspdk_bdev_error.so.6.0 00:04:39.260 LIB libspdk_bdev_null.a 00:04:39.260 LIB libspdk_bdev_passthru.a 00:04:39.260 LIB libspdk_bdev_ftl.a 00:04:39.260 SO libspdk_bdev_gpt.so.6.0 00:04:39.260 SO libspdk_bdev_passthru.so.6.0 00:04:39.260 SO libspdk_bdev_null.so.6.0 00:04:39.260 SO libspdk_bdev_ftl.so.6.0 00:04:39.260 SYMLINK libspdk_bdev_split.so 00:04:39.260 LIB libspdk_bdev_aio.a 00:04:39.260 SYMLINK libspdk_bdev_error.so 00:04:39.518 LIB libspdk_bdev_malloc.a 00:04:39.518 SO libspdk_bdev_aio.so.6.0 00:04:39.518 SYMLINK libspdk_bdev_gpt.so 00:04:39.518 SYMLINK libspdk_bdev_passthru.so 00:04:39.518 SYMLINK libspdk_bdev_null.so 00:04:39.518 LIB libspdk_bdev_zone_block.a 00:04:39.518 SYMLINK libspdk_bdev_ftl.so 00:04:39.518 SO libspdk_bdev_malloc.so.6.0 00:04:39.518 LIB libspdk_bdev_iscsi.a 00:04:39.518 SO libspdk_bdev_zone_block.so.6.0 00:04:39.518 LIB libspdk_bdev_delay.a 00:04:39.518 SO libspdk_bdev_iscsi.so.6.0 00:04:39.518 SYMLINK libspdk_bdev_aio.so 00:04:39.518 SO libspdk_bdev_delay.so.6.0 00:04:39.518 SYMLINK libspdk_bdev_malloc.so 00:04:39.518 SYMLINK libspdk_bdev_zone_block.so 00:04:39.518 SYMLINK libspdk_bdev_iscsi.so 00:04:39.518 SYMLINK libspdk_bdev_delay.so 00:04:39.518 LIB libspdk_bdev_lvol.a 00:04:39.518 LIB libspdk_bdev_virtio.a 00:04:39.518 SO libspdk_bdev_lvol.so.6.0 00:04:39.518 SO libspdk_bdev_virtio.so.6.0 00:04:39.776 SYMLINK libspdk_bdev_lvol.so 00:04:39.776 SYMLINK libspdk_bdev_virtio.so 00:04:40.034 LIB libspdk_bdev_raid.a 00:04:40.034 SO libspdk_bdev_raid.so.6.0 00:04:40.034 SYMLINK libspdk_bdev_raid.so 00:04:41.409 LIB libspdk_bdev_nvme.a 00:04:41.409 SO libspdk_bdev_nvme.so.7.0 00:04:41.409 SYMLINK libspdk_bdev_nvme.so 00:04:41.975 CC module/event/subsystems/vmd/vmd.o 00:04:41.975 CC module/event/subsystems/iobuf/iobuf.o 00:04:41.975 CC module/event/subsystems/sock/sock.o 00:04:41.975 CC module/event/subsystems/scheduler/scheduler.o 00:04:41.975 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:41.975 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:41.975 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:04:41.975 CC module/event/subsystems/fsdev/fsdev.o 00:04:41.975 CC module/event/subsystems/keyring/keyring.o 00:04:41.975 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:41.975 LIB libspdk_event_keyring.a 00:04:41.975 LIB libspdk_event_vhost_blk.a 00:04:41.975 LIB libspdk_event_fsdev.a 00:04:41.975 LIB 
libspdk_event_vfu_tgt.a 00:04:41.975 LIB libspdk_event_vmd.a 00:04:41.975 LIB libspdk_event_scheduler.a 00:04:41.975 LIB libspdk_event_sock.a 00:04:41.975 SO libspdk_event_keyring.so.1.0 00:04:41.975 SO libspdk_event_vhost_blk.so.3.0 00:04:41.975 SO libspdk_event_fsdev.so.1.0 00:04:41.975 SO libspdk_event_vfu_tgt.so.3.0 00:04:41.975 LIB libspdk_event_iobuf.a 00:04:41.975 SO libspdk_event_scheduler.so.4.0 00:04:41.975 SO libspdk_event_vmd.so.6.0 00:04:41.975 SO libspdk_event_sock.so.5.0 00:04:41.975 SO libspdk_event_iobuf.so.3.0 00:04:41.975 SYMLINK libspdk_event_keyring.so 00:04:41.975 SYMLINK libspdk_event_vhost_blk.so 00:04:41.975 SYMLINK libspdk_event_fsdev.so 00:04:41.975 SYMLINK libspdk_event_vfu_tgt.so 00:04:41.975 SYMLINK libspdk_event_scheduler.so 00:04:41.975 SYMLINK libspdk_event_sock.so 00:04:41.975 SYMLINK libspdk_event_vmd.so 00:04:42.234 SYMLINK libspdk_event_iobuf.so 00:04:42.234 CC module/event/subsystems/accel/accel.o 00:04:42.492 LIB libspdk_event_accel.a 00:04:42.493 SO libspdk_event_accel.so.6.0 00:04:42.493 SYMLINK libspdk_event_accel.so 00:04:42.751 CC module/event/subsystems/bdev/bdev.o 00:04:42.751 LIB libspdk_event_bdev.a 00:04:43.010 SO libspdk_event_bdev.so.6.0 00:04:43.010 SYMLINK libspdk_event_bdev.so 00:04:43.010 CC module/event/subsystems/nbd/nbd.o 00:04:43.010 CC module/event/subsystems/ublk/ublk.o 00:04:43.010 CC module/event/subsystems/scsi/scsi.o 00:04:43.010 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:43.010 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:43.268 LIB libspdk_event_nbd.a 00:04:43.268 LIB libspdk_event_ublk.a 00:04:43.268 LIB libspdk_event_scsi.a 00:04:43.268 SO libspdk_event_nbd.so.6.0 00:04:43.268 SO libspdk_event_ublk.so.3.0 00:04:43.268 SO libspdk_event_scsi.so.6.0 00:04:43.268 SYMLINK libspdk_event_ublk.so 00:04:43.268 SYMLINK libspdk_event_nbd.so 00:04:43.268 SYMLINK libspdk_event_scsi.so 00:04:43.268 LIB libspdk_event_nvmf.a 00:04:43.268 SO libspdk_event_nvmf.so.6.0 00:04:43.526 SYMLINK libspdk_event_nvmf.so 00:04:43.526 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:43.526 CC module/event/subsystems/iscsi/iscsi.o 00:04:43.526 LIB libspdk_event_vhost_scsi.a 00:04:43.785 SO libspdk_event_vhost_scsi.so.3.0 00:04:43.785 LIB libspdk_event_iscsi.a 00:04:43.785 SO libspdk_event_iscsi.so.6.0 00:04:43.785 SYMLINK libspdk_event_vhost_scsi.so 00:04:43.785 SYMLINK libspdk_event_iscsi.so 00:04:43.785 SO libspdk.so.6.0 00:04:43.785 SYMLINK libspdk.so 00:04:44.051 CC app/spdk_top/spdk_top.o 00:04:44.051 CC app/spdk_lspci/spdk_lspci.o 00:04:44.051 CC app/trace_record/trace_record.o 00:04:44.051 CXX app/trace/trace.o 00:04:44.051 CC app/spdk_nvme_perf/perf.o 00:04:44.051 CC app/spdk_nvme_discover/discovery_aer.o 00:04:44.051 TEST_HEADER include/spdk/accel.h 00:04:44.051 TEST_HEADER include/spdk/accel_module.h 00:04:44.051 TEST_HEADER include/spdk/assert.h 00:04:44.051 TEST_HEADER include/spdk/barrier.h 00:04:44.051 CC app/spdk_nvme_identify/identify.o 00:04:44.051 TEST_HEADER include/spdk/base64.h 00:04:44.051 CC test/rpc_client/rpc_client_test.o 00:04:44.051 TEST_HEADER include/spdk/bdev.h 00:04:44.051 TEST_HEADER include/spdk/bdev_module.h 00:04:44.051 TEST_HEADER include/spdk/bdev_zone.h 00:04:44.051 TEST_HEADER include/spdk/bit_array.h 00:04:44.051 TEST_HEADER include/spdk/bit_pool.h 00:04:44.051 TEST_HEADER include/spdk/blob_bdev.h 00:04:44.051 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:44.051 TEST_HEADER include/spdk/blobfs.h 00:04:44.051 TEST_HEADER include/spdk/blob.h 00:04:44.051 TEST_HEADER include/spdk/conf.h 
00:04:44.051 TEST_HEADER include/spdk/config.h 00:04:44.051 TEST_HEADER include/spdk/cpuset.h 00:04:44.051 TEST_HEADER include/spdk/crc16.h 00:04:44.051 TEST_HEADER include/spdk/crc64.h 00:04:44.051 TEST_HEADER include/spdk/crc32.h 00:04:44.051 TEST_HEADER include/spdk/dif.h 00:04:44.051 TEST_HEADER include/spdk/dma.h 00:04:44.051 TEST_HEADER include/spdk/endian.h 00:04:44.051 TEST_HEADER include/spdk/env_dpdk.h 00:04:44.051 TEST_HEADER include/spdk/env.h 00:04:44.051 TEST_HEADER include/spdk/event.h 00:04:44.051 TEST_HEADER include/spdk/fd.h 00:04:44.051 TEST_HEADER include/spdk/fd_group.h 00:04:44.051 TEST_HEADER include/spdk/file.h 00:04:44.051 TEST_HEADER include/spdk/fsdev.h 00:04:44.051 TEST_HEADER include/spdk/fsdev_module.h 00:04:44.051 TEST_HEADER include/spdk/ftl.h 00:04:44.051 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:44.051 TEST_HEADER include/spdk/gpt_spec.h 00:04:44.051 TEST_HEADER include/spdk/hexlify.h 00:04:44.051 TEST_HEADER include/spdk/histogram_data.h 00:04:44.051 TEST_HEADER include/spdk/idxd.h 00:04:44.051 TEST_HEADER include/spdk/idxd_spec.h 00:04:44.051 TEST_HEADER include/spdk/init.h 00:04:44.051 TEST_HEADER include/spdk/ioat_spec.h 00:04:44.051 TEST_HEADER include/spdk/ioat.h 00:04:44.051 TEST_HEADER include/spdk/iscsi_spec.h 00:04:44.051 TEST_HEADER include/spdk/json.h 00:04:44.051 TEST_HEADER include/spdk/jsonrpc.h 00:04:44.051 TEST_HEADER include/spdk/keyring.h 00:04:44.051 TEST_HEADER include/spdk/keyring_module.h 00:04:44.051 TEST_HEADER include/spdk/likely.h 00:04:44.051 TEST_HEADER include/spdk/log.h 00:04:44.051 TEST_HEADER include/spdk/lvol.h 00:04:44.051 TEST_HEADER include/spdk/md5.h 00:04:44.051 TEST_HEADER include/spdk/memory.h 00:04:44.051 TEST_HEADER include/spdk/mmio.h 00:04:44.051 TEST_HEADER include/spdk/nbd.h 00:04:44.051 TEST_HEADER include/spdk/net.h 00:04:44.051 TEST_HEADER include/spdk/notify.h 00:04:44.051 TEST_HEADER include/spdk/nvme.h 00:04:44.051 TEST_HEADER include/spdk/nvme_intel.h 00:04:44.051 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:44.051 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:44.051 TEST_HEADER include/spdk/nvme_spec.h 00:04:44.051 TEST_HEADER include/spdk/nvme_zns.h 00:04:44.051 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:44.051 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:44.051 TEST_HEADER include/spdk/nvmf.h 00:04:44.051 TEST_HEADER include/spdk/nvmf_spec.h 00:04:44.051 TEST_HEADER include/spdk/nvmf_transport.h 00:04:44.051 TEST_HEADER include/spdk/opal.h 00:04:44.051 TEST_HEADER include/spdk/opal_spec.h 00:04:44.051 TEST_HEADER include/spdk/pci_ids.h 00:04:44.051 TEST_HEADER include/spdk/pipe.h 00:04:44.051 TEST_HEADER include/spdk/reduce.h 00:04:44.051 TEST_HEADER include/spdk/queue.h 00:04:44.051 TEST_HEADER include/spdk/rpc.h 00:04:44.051 TEST_HEADER include/spdk/scsi.h 00:04:44.051 TEST_HEADER include/spdk/scheduler.h 00:04:44.051 TEST_HEADER include/spdk/scsi_spec.h 00:04:44.051 TEST_HEADER include/spdk/sock.h 00:04:44.051 TEST_HEADER include/spdk/stdinc.h 00:04:44.051 TEST_HEADER include/spdk/string.h 00:04:44.051 TEST_HEADER include/spdk/thread.h 00:04:44.051 TEST_HEADER include/spdk/trace.h 00:04:44.051 TEST_HEADER include/spdk/tree.h 00:04:44.051 TEST_HEADER include/spdk/trace_parser.h 00:04:44.051 TEST_HEADER include/spdk/ublk.h 00:04:44.051 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:44.051 TEST_HEADER include/spdk/util.h 00:04:44.051 TEST_HEADER include/spdk/version.h 00:04:44.051 TEST_HEADER include/spdk/uuid.h 00:04:44.051 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:44.051 TEST_HEADER 
include/spdk/vfio_user_spec.h 00:04:44.052 TEST_HEADER include/spdk/vhost.h 00:04:44.052 TEST_HEADER include/spdk/vmd.h 00:04:44.052 TEST_HEADER include/spdk/xor.h 00:04:44.052 TEST_HEADER include/spdk/zipf.h 00:04:44.052 CXX test/cpp_headers/accel.o 00:04:44.052 CXX test/cpp_headers/accel_module.o 00:04:44.052 CXX test/cpp_headers/assert.o 00:04:44.052 CXX test/cpp_headers/barrier.o 00:04:44.052 CXX test/cpp_headers/base64.o 00:04:44.052 CXX test/cpp_headers/bdev.o 00:04:44.052 CXX test/cpp_headers/bdev_module.o 00:04:44.052 CXX test/cpp_headers/bdev_zone.o 00:04:44.052 CXX test/cpp_headers/bit_array.o 00:04:44.052 CXX test/cpp_headers/bit_pool.o 00:04:44.052 CXX test/cpp_headers/blob_bdev.o 00:04:44.052 CXX test/cpp_headers/blobfs_bdev.o 00:04:44.052 CC app/spdk_dd/spdk_dd.o 00:04:44.052 CXX test/cpp_headers/blobfs.o 00:04:44.052 CC app/iscsi_tgt/iscsi_tgt.o 00:04:44.052 CXX test/cpp_headers/blob.o 00:04:44.052 CXX test/cpp_headers/conf.o 00:04:44.052 CXX test/cpp_headers/config.o 00:04:44.052 CXX test/cpp_headers/cpuset.o 00:04:44.052 CXX test/cpp_headers/crc16.o 00:04:44.052 CC app/nvmf_tgt/nvmf_main.o 00:04:44.312 CC app/spdk_tgt/spdk_tgt.o 00:04:44.312 CXX test/cpp_headers/crc32.o 00:04:44.312 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:44.312 CC test/app/histogram_perf/histogram_perf.o 00:04:44.312 CC test/thread/poller_perf/poller_perf.o 00:04:44.312 CC test/app/jsoncat/jsoncat.o 00:04:44.312 CC examples/ioat/verify/verify.o 00:04:44.312 CC examples/util/zipf/zipf.o 00:04:44.312 CC app/fio/nvme/fio_plugin.o 00:04:44.312 CC test/env/memory/memory_ut.o 00:04:44.312 CC examples/ioat/perf/perf.o 00:04:44.312 CC test/env/vtophys/vtophys.o 00:04:44.312 CC test/app/stub/stub.o 00:04:44.312 CC test/env/pci/pci_ut.o 00:04:44.312 CC test/app/bdev_svc/bdev_svc.o 00:04:44.312 CC app/fio/bdev/fio_plugin.o 00:04:44.312 CC test/dma/test_dma/test_dma.o 00:04:44.312 LINK spdk_lspci 00:04:44.312 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:44.312 CC test/env/mem_callbacks/mem_callbacks.o 00:04:44.575 LINK rpc_client_test 00:04:44.575 LINK spdk_nvme_discover 00:04:44.575 LINK poller_perf 00:04:44.575 CXX test/cpp_headers/crc64.o 00:04:44.575 LINK jsoncat 00:04:44.575 LINK spdk_trace_record 00:04:44.575 LINK histogram_perf 00:04:44.575 LINK env_dpdk_post_init 00:04:44.575 CXX test/cpp_headers/dif.o 00:04:44.575 LINK zipf 00:04:44.575 LINK interrupt_tgt 00:04:44.575 LINK vtophys 00:04:44.575 CXX test/cpp_headers/dma.o 00:04:44.575 CXX test/cpp_headers/endian.o 00:04:44.575 LINK nvmf_tgt 00:04:44.575 CXX test/cpp_headers/env.o 00:04:44.575 CXX test/cpp_headers/env_dpdk.o 00:04:44.575 CXX test/cpp_headers/event.o 00:04:44.575 CXX test/cpp_headers/fd_group.o 00:04:44.575 CXX test/cpp_headers/fd.o 00:04:44.575 CXX test/cpp_headers/file.o 00:04:44.575 CXX test/cpp_headers/fsdev.o 00:04:44.575 CXX test/cpp_headers/fsdev_module.o 00:04:44.575 CXX test/cpp_headers/ftl.o 00:04:44.575 LINK iscsi_tgt 00:04:44.575 LINK stub 00:04:44.575 CXX test/cpp_headers/fuse_dispatcher.o 00:04:44.575 LINK spdk_tgt 00:04:44.575 CXX test/cpp_headers/gpt_spec.o 00:04:44.575 CXX test/cpp_headers/hexlify.o 00:04:44.836 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:44.836 LINK ioat_perf 00:04:44.836 LINK bdev_svc 00:04:44.836 CXX test/cpp_headers/histogram_data.o 00:04:44.836 LINK verify 00:04:44.836 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:44.836 CXX test/cpp_headers/idxd.o 00:04:44.836 CXX test/cpp_headers/idxd_spec.o 00:04:44.836 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:44.836 CXX 
test/cpp_headers/init.o 00:04:44.836 CXX test/cpp_headers/ioat.o 00:04:44.836 CXX test/cpp_headers/ioat_spec.o 00:04:44.836 CXX test/cpp_headers/iscsi_spec.o 00:04:44.836 LINK spdk_dd 00:04:44.836 CXX test/cpp_headers/json.o 00:04:45.106 CXX test/cpp_headers/jsonrpc.o 00:04:45.106 CXX test/cpp_headers/keyring.o 00:04:45.106 CXX test/cpp_headers/keyring_module.o 00:04:45.106 LINK spdk_trace 00:04:45.106 CXX test/cpp_headers/likely.o 00:04:45.106 CXX test/cpp_headers/log.o 00:04:45.106 CXX test/cpp_headers/lvol.o 00:04:45.106 CXX test/cpp_headers/md5.o 00:04:45.106 CXX test/cpp_headers/memory.o 00:04:45.106 CXX test/cpp_headers/mmio.o 00:04:45.106 LINK pci_ut 00:04:45.106 CXX test/cpp_headers/nbd.o 00:04:45.106 CXX test/cpp_headers/net.o 00:04:45.106 CXX test/cpp_headers/notify.o 00:04:45.106 CXX test/cpp_headers/nvme.o 00:04:45.106 CXX test/cpp_headers/nvme_intel.o 00:04:45.106 CXX test/cpp_headers/nvme_ocssd.o 00:04:45.106 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:45.106 CXX test/cpp_headers/nvme_spec.o 00:04:45.106 CXX test/cpp_headers/nvme_zns.o 00:04:45.106 CXX test/cpp_headers/nvmf_cmd.o 00:04:45.106 CC test/event/event_perf/event_perf.o 00:04:45.106 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:45.106 CC test/event/reactor/reactor.o 00:04:45.106 CC test/event/reactor_perf/reactor_perf.o 00:04:45.368 CC test/event/app_repeat/app_repeat.o 00:04:45.368 CXX test/cpp_headers/nvmf.o 00:04:45.368 CXX test/cpp_headers/nvmf_spec.o 00:04:45.368 CXX test/cpp_headers/nvmf_transport.o 00:04:45.368 LINK nvme_fuzz 00:04:45.368 CXX test/cpp_headers/opal.o 00:04:45.368 CXX test/cpp_headers/opal_spec.o 00:04:45.368 CC examples/sock/hello_world/hello_sock.o 00:04:45.368 LINK spdk_nvme 00:04:45.368 CC test/event/scheduler/scheduler.o 00:04:45.368 LINK spdk_bdev 00:04:45.368 CXX test/cpp_headers/pci_ids.o 00:04:45.368 CC examples/vmd/led/led.o 00:04:45.368 CC examples/thread/thread/thread_ex.o 00:04:45.368 CC examples/vmd/lsvmd/lsvmd.o 00:04:45.368 LINK test_dma 00:04:45.368 CC examples/idxd/perf/perf.o 00:04:45.368 CXX test/cpp_headers/pipe.o 00:04:45.368 CXX test/cpp_headers/queue.o 00:04:45.368 CXX test/cpp_headers/reduce.o 00:04:45.368 CXX test/cpp_headers/rpc.o 00:04:45.368 CXX test/cpp_headers/scheduler.o 00:04:45.368 CXX test/cpp_headers/scsi.o 00:04:45.368 CXX test/cpp_headers/scsi_spec.o 00:04:45.368 CXX test/cpp_headers/sock.o 00:04:45.368 CXX test/cpp_headers/stdinc.o 00:04:45.628 CXX test/cpp_headers/string.o 00:04:45.628 CXX test/cpp_headers/thread.o 00:04:45.628 LINK reactor_perf 00:04:45.628 LINK event_perf 00:04:45.628 CXX test/cpp_headers/trace.o 00:04:45.628 CXX test/cpp_headers/trace_parser.o 00:04:45.628 CXX test/cpp_headers/tree.o 00:04:45.628 CXX test/cpp_headers/ublk.o 00:04:45.628 LINK reactor 00:04:45.628 CXX test/cpp_headers/util.o 00:04:45.628 CXX test/cpp_headers/uuid.o 00:04:45.628 CXX test/cpp_headers/version.o 00:04:45.628 CXX test/cpp_headers/vfio_user_pci.o 00:04:45.628 CXX test/cpp_headers/vhost.o 00:04:45.628 CXX test/cpp_headers/vfio_user_spec.o 00:04:45.628 CXX test/cpp_headers/vmd.o 00:04:45.628 LINK app_repeat 00:04:45.628 CC app/vhost/vhost.o 00:04:45.628 CXX test/cpp_headers/xor.o 00:04:45.628 CXX test/cpp_headers/zipf.o 00:04:45.628 LINK spdk_nvme_perf 00:04:45.628 LINK lsvmd 00:04:45.628 LINK led 00:04:45.628 LINK vhost_fuzz 00:04:45.628 LINK mem_callbacks 00:04:45.887 LINK spdk_nvme_identify 00:04:45.887 LINK scheduler 00:04:45.887 LINK hello_sock 00:04:45.887 LINK spdk_top 00:04:45.887 LINK thread 00:04:45.887 CC test/nvme/fused_ordering/fused_ordering.o 
00:04:45.887 CC test/nvme/reset/reset.o 00:04:45.887 CC test/nvme/reserve/reserve.o 00:04:45.887 CC test/nvme/aer/aer.o 00:04:45.887 CC test/nvme/err_injection/err_injection.o 00:04:45.887 CC test/nvme/e2edp/nvme_dp.o 00:04:45.887 CC test/nvme/connect_stress/connect_stress.o 00:04:45.887 CC test/nvme/sgl/sgl.o 00:04:45.887 CC test/nvme/overhead/overhead.o 00:04:45.887 CC test/nvme/boot_partition/boot_partition.o 00:04:45.887 CC test/nvme/compliance/nvme_compliance.o 00:04:45.887 LINK idxd_perf 00:04:45.887 CC test/nvme/startup/startup.o 00:04:45.887 CC test/nvme/simple_copy/simple_copy.o 00:04:45.887 CC test/nvme/cuse/cuse.o 00:04:45.887 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:45.887 CC test/nvme/fdp/fdp.o 00:04:46.146 LINK vhost 00:04:46.146 CC test/accel/dif/dif.o 00:04:46.146 CC test/blobfs/mkfs/mkfs.o 00:04:46.146 CC test/lvol/esnap/esnap.o 00:04:46.146 LINK boot_partition 00:04:46.146 LINK connect_stress 00:04:46.146 LINK fused_ordering 00:04:46.146 CC examples/nvme/hello_world/hello_world.o 00:04:46.146 CC examples/nvme/hotplug/hotplug.o 00:04:46.146 CC examples/nvme/abort/abort.o 00:04:46.146 LINK doorbell_aers 00:04:46.146 CC examples/nvme/reconnect/reconnect.o 00:04:46.146 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:46.146 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:46.146 CC examples/nvme/arbitration/arbitration.o 00:04:46.146 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:46.405 LINK mkfs 00:04:46.405 CC examples/accel/perf/accel_perf.o 00:04:46.405 LINK simple_copy 00:04:46.405 LINK startup 00:04:46.405 LINK reset 00:04:46.405 LINK overhead 00:04:46.405 LINK err_injection 00:04:46.405 LINK sgl 00:04:46.405 LINK reserve 00:04:46.405 CC examples/blob/cli/blobcli.o 00:04:46.405 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:46.405 LINK memory_ut 00:04:46.405 CC examples/blob/hello_world/hello_blob.o 00:04:46.405 LINK nvme_dp 00:04:46.405 LINK aer 00:04:46.405 LINK nvme_compliance 00:04:46.405 LINK cmb_copy 00:04:46.664 LINK pmr_persistence 00:04:46.664 LINK fdp 00:04:46.664 LINK hotplug 00:04:46.664 LINK hello_world 00:04:46.664 LINK abort 00:04:46.664 LINK arbitration 00:04:46.664 LINK hello_fsdev 00:04:46.664 LINK hello_blob 00:04:46.922 LINK reconnect 00:04:46.922 LINK accel_perf 00:04:46.922 LINK blobcli 00:04:46.922 LINK nvme_manage 00:04:46.922 LINK dif 00:04:47.181 CC examples/bdev/hello_world/hello_bdev.o 00:04:47.181 CC examples/bdev/bdevperf/bdevperf.o 00:04:47.439 LINK iscsi_fuzz 00:04:47.439 CC test/bdev/bdevio/bdevio.o 00:04:47.439 LINK hello_bdev 00:04:47.700 LINK cuse 00:04:47.700 LINK bdevio 00:04:47.960 LINK bdevperf 00:04:48.526 CC examples/nvmf/nvmf/nvmf.o 00:04:48.784 LINK nvmf 00:04:51.313 LINK esnap 00:04:51.570 00:04:51.570 real 1m6.909s 00:04:51.570 user 9m3.379s 00:04:51.570 sys 1m58.427s 00:04:51.570 14:20:43 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:04:51.570 14:20:43 make -- common/autotest_common.sh@10 -- $ set +x 00:04:51.570 ************************************ 00:04:51.570 END TEST make 00:04:51.570 ************************************ 00:04:51.570 14:20:43 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:51.570 14:20:43 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:51.570 14:20:43 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:51.570 14:20:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:51.570 14:20:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:04:51.570 14:20:43 -- 
pm/common@44 -- $ pid=1139213 00:04:51.570 14:20:43 -- pm/common@50 -- $ kill -TERM 1139213 00:04:51.570 14:20:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:51.570 14:20:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:04:51.570 14:20:43 -- pm/common@44 -- $ pid=1139215 00:04:51.570 14:20:43 -- pm/common@50 -- $ kill -TERM 1139215 00:04:51.570 14:20:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:51.570 14:20:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:04:51.570 14:20:43 -- pm/common@44 -- $ pid=1139217 00:04:51.570 14:20:43 -- pm/common@50 -- $ kill -TERM 1139217 00:04:51.570 14:20:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:51.570 14:20:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:04:51.570 14:20:43 -- pm/common@44 -- $ pid=1139246 00:04:51.570 14:20:43 -- pm/common@50 -- $ sudo -E kill -TERM 1139246 00:04:51.570 14:20:43 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:51.570 14:20:43 -- common/autotest_common.sh@1681 -- # lcov --version 00:04:51.570 14:20:43 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:51.829 14:20:43 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:51.829 14:20:43 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:51.829 14:20:43 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:51.829 14:20:43 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:51.829 14:20:43 -- scripts/common.sh@336 -- # IFS=.-: 00:04:51.829 14:20:43 -- scripts/common.sh@336 -- # read -ra ver1 00:04:51.829 14:20:43 -- scripts/common.sh@337 -- # IFS=.-: 00:04:51.829 14:20:43 -- scripts/common.sh@337 -- # read -ra ver2 00:04:51.829 14:20:43 -- scripts/common.sh@338 -- # local 'op=<' 00:04:51.829 14:20:43 -- scripts/common.sh@340 -- # ver1_l=2 00:04:51.829 14:20:43 -- scripts/common.sh@341 -- # ver2_l=1 00:04:51.829 14:20:43 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:51.830 14:20:43 -- scripts/common.sh@344 -- # case "$op" in 00:04:51.830 14:20:43 -- scripts/common.sh@345 -- # : 1 00:04:51.830 14:20:43 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:51.830 14:20:43 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:51.830 14:20:43 -- scripts/common.sh@365 -- # decimal 1 00:04:51.830 14:20:43 -- scripts/common.sh@353 -- # local d=1 00:04:51.830 14:20:43 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:51.830 14:20:43 -- scripts/common.sh@355 -- # echo 1 00:04:51.830 14:20:43 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:51.830 14:20:43 -- scripts/common.sh@366 -- # decimal 2 00:04:51.830 14:20:43 -- scripts/common.sh@353 -- # local d=2 00:04:51.830 14:20:43 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:51.830 14:20:43 -- scripts/common.sh@355 -- # echo 2 00:04:51.830 14:20:43 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:51.830 14:20:43 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:51.830 14:20:43 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:51.830 14:20:43 -- scripts/common.sh@368 -- # return 0 00:04:51.830 14:20:43 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:51.830 14:20:43 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:51.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.830 --rc genhtml_branch_coverage=1 00:04:51.830 --rc genhtml_function_coverage=1 00:04:51.830 --rc genhtml_legend=1 00:04:51.830 --rc geninfo_all_blocks=1 00:04:51.830 --rc geninfo_unexecuted_blocks=1 00:04:51.830 00:04:51.830 ' 00:04:51.830 14:20:43 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:51.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.830 --rc genhtml_branch_coverage=1 00:04:51.830 --rc genhtml_function_coverage=1 00:04:51.830 --rc genhtml_legend=1 00:04:51.830 --rc geninfo_all_blocks=1 00:04:51.830 --rc geninfo_unexecuted_blocks=1 00:04:51.830 00:04:51.830 ' 00:04:51.830 14:20:43 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:51.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.830 --rc genhtml_branch_coverage=1 00:04:51.830 --rc genhtml_function_coverage=1 00:04:51.830 --rc genhtml_legend=1 00:04:51.830 --rc geninfo_all_blocks=1 00:04:51.830 --rc geninfo_unexecuted_blocks=1 00:04:51.830 00:04:51.830 ' 00:04:51.830 14:20:43 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:51.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.830 --rc genhtml_branch_coverage=1 00:04:51.830 --rc genhtml_function_coverage=1 00:04:51.830 --rc genhtml_legend=1 00:04:51.830 --rc geninfo_all_blocks=1 00:04:51.830 --rc geninfo_unexecuted_blocks=1 00:04:51.830 00:04:51.830 ' 00:04:51.830 14:20:43 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:51.830 14:20:43 -- nvmf/common.sh@7 -- # uname -s 00:04:51.830 14:20:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:51.830 14:20:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:51.830 14:20:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:51.830 14:20:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:51.830 14:20:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:51.830 14:20:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:51.830 14:20:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:51.830 14:20:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:51.830 14:20:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:51.830 14:20:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:51.830 14:20:43 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:51.830 14:20:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:51.830 14:20:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:51.830 14:20:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:51.830 14:20:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:51.830 14:20:43 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:51.830 14:20:43 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:51.830 14:20:43 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:51.830 14:20:43 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:51.830 14:20:43 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:51.830 14:20:43 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:51.830 14:20:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.830 14:20:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.830 14:20:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.830 14:20:43 -- paths/export.sh@5 -- # export PATH 00:04:51.830 14:20:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.830 14:20:43 -- nvmf/common.sh@51 -- # : 0 00:04:51.830 14:20:43 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:51.830 14:20:43 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:51.830 14:20:43 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:51.830 14:20:43 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:51.830 14:20:43 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:51.830 14:20:43 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:51.830 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:51.830 14:20:43 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:51.830 14:20:43 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:51.830 14:20:43 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:51.830 14:20:43 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:51.830 14:20:43 -- spdk/autotest.sh@32 -- # uname -s 00:04:51.830 14:20:43 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:51.830 14:20:43 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:51.830 14:20:43 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
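The old_core_pattern capture above and the core-collector echo that follows swap the kernel's core handler over to SPDK's collector for the duration of the run. A hedged sketch of the idea follows; writing to /proc/sys/kernel/core_pattern is an assumption, since the trace does not show the redirect targets, and $rootdir / $output_dir are shorthands for the spdk checkout and its ../output directory seen in the paths above.

# Assumption: the traced echoes land in /proc/sys/kernel/core_pattern.
old_core_pattern=$(< /proc/sys/kernel/core_pattern)   # saved so it can be restored
mkdir -p "$output_dir/coredumps"                      # where collected cores are kept
echo "|$rootdir/scripts/core-collector.sh %P %s %t" > /proc/sys/kernel/core_pattern
# ... tests run here ...
echo "$old_core_pattern" > /proc/sys/kernel/core_pattern   # restore on exit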
00:04:51.830 14:20:43 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:04:51.830 14:20:43 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:51.830 14:20:43 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:51.830 14:20:43 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:51.830 14:20:43 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:51.830 14:20:43 -- spdk/autotest.sh@48 -- # udevadm_pid=1220194 00:04:51.830 14:20:43 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:51.830 14:20:43 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:51.830 14:20:43 -- pm/common@17 -- # local monitor 00:04:51.830 14:20:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:51.830 14:20:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:51.830 14:20:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:51.830 14:20:43 -- pm/common@21 -- # date +%s 00:04:51.830 14:20:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:51.830 14:20:43 -- pm/common@21 -- # date +%s 00:04:51.830 14:20:43 -- pm/common@25 -- # sleep 1 00:04:51.830 14:20:43 -- pm/common@21 -- # date +%s 00:04:51.830 14:20:43 -- pm/common@21 -- # date +%s 00:04:51.830 14:20:43 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730553643 00:04:51.830 14:20:43 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730553643 00:04:51.830 14:20:43 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730553643 00:04:51.830 14:20:43 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730553643 00:04:51.830 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730553643_collect-vmstat.pm.log 00:04:51.830 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730553643_collect-cpu-load.pm.log 00:04:51.830 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730553643_collect-cpu-temp.pm.log 00:04:51.830 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730553643_collect-bmc-pm.bmc.pm.log 00:04:52.766 14:20:44 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:52.766 14:20:44 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:52.766 14:20:44 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:52.766 14:20:44 -- common/autotest_common.sh@10 -- # set +x 00:04:52.766 14:20:44 -- spdk/autotest.sh@59 -- # create_test_list 00:04:52.766 14:20:44 -- common/autotest_common.sh@748 -- # xtrace_disable 00:04:52.766 14:20:44 -- common/autotest_common.sh@10 -- # set +x 00:04:52.766 14:20:44 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:04:52.766 14:20:44 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:52.766 14:20:44 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:52.766 14:20:44 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:04:52.766 14:20:44 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:52.766 14:20:44 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:52.766 14:20:44 -- common/autotest_common.sh@1455 -- # uname 00:04:52.766 14:20:44 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:52.766 14:20:44 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:52.766 14:20:44 -- common/autotest_common.sh@1475 -- # uname 00:04:52.766 14:20:44 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:52.766 14:20:44 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:52.766 14:20:44 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:53.025 lcov: LCOV version 1.15 00:04:53.025 14:20:44 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:05:25.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:25.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:05:30.448 14:21:21 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:30.448 14:21:21 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:30.448 14:21:21 -- common/autotest_common.sh@10 -- # set +x 00:05:30.448 14:21:21 -- spdk/autotest.sh@78 -- # rm -f 00:05:30.448 14:21:21 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:31.382 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:05:31.382 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:05:31.382 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:05:31.382 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:05:31.382 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:05:31.382 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:05:31.382 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:05:31.382 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:05:31.382 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:05:31.382 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:05:31.382 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:05:31.382 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:05:31.382 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:05:31.382 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:05:31.382 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:05:31.382 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:05:31.382 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:05:31.641 14:21:23 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:05:31.641 14:21:23 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:31.641 14:21:23 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:31.641 14:21:23 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:31.641 14:21:23 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:31.641 14:21:23 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:31.641 14:21:23 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:31.641 14:21:23 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:31.641 14:21:23 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:31.641 14:21:23 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:31.641 14:21:23 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:31.641 14:21:23 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:31.641 14:21:23 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:31.641 14:21:23 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:31.641 14:21:23 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:31.641 No valid GPT data, bailing 00:05:31.641 14:21:23 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:31.641 14:21:23 -- scripts/common.sh@394 -- # pt= 00:05:31.641 14:21:23 -- scripts/common.sh@395 -- # return 1 00:05:31.641 14:21:23 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:31.641 1+0 records in 00:05:31.641 1+0 records out 00:05:31.641 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00212326 s, 494 MB/s 00:05:31.641 14:21:23 -- spdk/autotest.sh@105 -- # sync 00:05:31.641 14:21:23 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:31.641 14:21:23 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:31.641 14:21:23 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:34.169 14:21:25 -- spdk/autotest.sh@111 -- # uname -s 00:05:34.169 14:21:25 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:34.169 14:21:25 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:34.169 14:21:25 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:35.104 Hugepages 00:05:35.104 node hugesize free / total 00:05:35.104 node0 1048576kB 0 / 0 00:05:35.104 node0 2048kB 0 / 0 00:05:35.104 node1 1048576kB 0 / 0 00:05:35.104 node1 2048kB 0 / 0 00:05:35.104 00:05:35.104 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:35.104 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:05:35.104 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:05:35.104 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:05:35.104 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:05:35.104 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:05:35.104 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:05:35.104 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:05:35.104 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:05:35.104 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:05:35.104 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:05:35.104 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:05:35.104 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:05:35.104 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:05:35.104 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:05:35.104 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:05:35.104 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:05:35.104 NVMe 0000:88:00.0 8086 0a54 1 
nvme nvme0 nvme0n1 00:05:35.104 14:21:26 -- spdk/autotest.sh@117 -- # uname -s 00:05:35.104 14:21:26 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:35.104 14:21:26 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:35.104 14:21:26 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:36.480 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:36.480 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:36.480 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:36.480 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:36.480 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:36.480 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:36.480 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:36.480 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:36.480 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:36.480 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:36.480 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:36.480 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:36.480 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:36.480 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:36.480 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:36.480 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:37.418 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:37.418 14:21:29 -- common/autotest_common.sh@1515 -- # sleep 1 00:05:38.354 14:21:30 -- common/autotest_common.sh@1516 -- # bdfs=() 00:05:38.354 14:21:30 -- common/autotest_common.sh@1516 -- # local bdfs 00:05:38.354 14:21:30 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:05:38.354 14:21:30 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:05:38.354 14:21:30 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:38.354 14:21:30 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:38.354 14:21:30 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:38.354 14:21:30 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:38.354 14:21:30 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:38.354 14:21:30 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:05:38.354 14:21:30 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:88:00.0 00:05:38.354 14:21:30 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:39.730 Waiting for block devices as requested 00:05:39.730 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:05:39.730 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:39.990 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:39.990 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:39.990 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:39.990 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:40.249 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:40.249 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:40.249 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:40.249 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:40.507 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:40.507 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:40.507 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:40.507 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:40.765 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:40.765 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:40.765 0000:80:04.0 (8086 0e20): 
vfio-pci -> ioatdma 00:05:41.023 14:21:32 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:41.023 14:21:32 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:05:41.023 14:21:32 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:05:41.023 14:21:32 -- common/autotest_common.sh@1485 -- # grep 0000:88:00.0/nvme/nvme 00:05:41.023 14:21:32 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:41.023 14:21:32 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:05:41.023 14:21:32 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:41.023 14:21:32 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:05:41.023 14:21:32 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:05:41.023 14:21:32 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:05:41.023 14:21:32 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:05:41.023 14:21:32 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:41.023 14:21:32 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:41.023 14:21:32 -- common/autotest_common.sh@1529 -- # oacs=' 0xf' 00:05:41.023 14:21:32 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:41.023 14:21:32 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:41.023 14:21:32 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:05:41.023 14:21:32 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:41.023 14:21:32 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:41.023 14:21:32 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:41.023 14:21:32 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:41.023 14:21:32 -- common/autotest_common.sh@1541 -- # continue 00:05:41.024 14:21:32 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:41.024 14:21:32 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:41.024 14:21:32 -- common/autotest_common.sh@10 -- # set +x 00:05:41.024 14:21:32 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:41.024 14:21:32 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:41.024 14:21:32 -- common/autotest_common.sh@10 -- # set +x 00:05:41.024 14:21:32 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:42.400 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:42.400 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:42.400 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:42.400 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:42.400 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:42.400 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:42.400 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:42.400 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:42.400 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:42.400 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:42.400 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:42.400 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:42.400 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:42.400 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:42.400 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:42.400 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:43.337 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:43.337 14:21:35 -- spdk/autotest.sh@127 -- # timing_exit 
afterboot 00:05:43.337 14:21:35 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:43.337 14:21:35 -- common/autotest_common.sh@10 -- # set +x 00:05:43.337 14:21:35 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:43.337 14:21:35 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:43.337 14:21:35 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:43.337 14:21:35 -- common/autotest_common.sh@1561 -- # bdfs=() 00:05:43.337 14:21:35 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:05:43.337 14:21:35 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:05:43.337 14:21:35 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:05:43.337 14:21:35 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:05:43.337 14:21:35 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:43.337 14:21:35 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:43.337 14:21:35 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:43.337 14:21:35 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:43.337 14:21:35 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:43.596 14:21:35 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:05:43.596 14:21:35 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:88:00.0 00:05:43.596 14:21:35 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:43.596 14:21:35 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:05:43.596 14:21:35 -- common/autotest_common.sh@1564 -- # device=0x0a54 00:05:43.596 14:21:35 -- common/autotest_common.sh@1565 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:43.596 14:21:35 -- common/autotest_common.sh@1566 -- # bdfs+=($bdf) 00:05:43.596 14:21:35 -- common/autotest_common.sh@1570 -- # (( 1 > 0 )) 00:05:43.596 14:21:35 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:88:00.0 00:05:43.596 14:21:35 -- common/autotest_common.sh@1577 -- # [[ -z 0000:88:00.0 ]] 00:05:43.596 14:21:35 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=1231601 00:05:43.596 14:21:35 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:43.596 14:21:35 -- common/autotest_common.sh@1583 -- # waitforlisten 1231601 00:05:43.596 14:21:35 -- common/autotest_common.sh@831 -- # '[' -z 1231601 ']' 00:05:43.596 14:21:35 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.596 14:21:35 -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:43.596 14:21:35 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.596 14:21:35 -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:43.596 14:21:35 -- common/autotest_common.sh@10 -- # set +x 00:05:43.596 [2024-11-02 14:21:35.460126] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
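The helper trace above (get_nvme_bdfs / get_nvme_bdfs_by_id) resolves the NVMe controller addresses by parsing the JSON emitted by scripts/gen_nvme.sh with jq, then keeps only controllers whose PCI device ID read from sysfs is 0x0a54. A minimal standalone sketch of that same pattern, assuming only an SPDK checkout at $SPDK_DIR (placeholder path, not the Jenkins workspace), and not part of the recorded run:

  #!/usr/bin/env bash
  # Enumerate NVMe BDFs the way the autotest helpers do: gen_nvme.sh prints a
  # bdev_nvme_attach_controller config and jq extracts each traddr.
  SPDK_DIR=${SPDK_DIR:-/path/to/spdk}
  mapfile -t bdfs < <("$SPDK_DIR/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')

  # Keep only controllers with PCI device ID 0x0a54 (the ID seen in this log).
  wanted=0x0a54
  for bdf in "${bdfs[@]}"; do
      [[ $(cat "/sys/bus/pci/devices/$bdf/device") == "$wanted" ]] && printf '%s\n' "$bdf"
  done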
00:05:43.596 [2024-11-02 14:21:35.460218] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1231601 ] 00:05:43.596 [2024-11-02 14:21:35.523454] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.596 [2024-11-02 14:21:35.614378] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.854 14:21:35 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:43.854 14:21:35 -- common/autotest_common.sh@864 -- # return 0 00:05:43.854 14:21:35 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:05:43.854 14:21:35 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:05:43.854 14:21:35 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:05:47.137 nvme0n1 00:05:47.137 14:21:38 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:47.395 [2024-11-02 14:21:39.246400] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:05:47.395 [2024-11-02 14:21:39.246446] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:47.395 request: 00:05:47.395 { 00:05:47.395 "nvme_ctrlr_name": "nvme0", 00:05:47.395 "password": "test", 00:05:47.395 "method": "bdev_nvme_opal_revert", 00:05:47.395 "req_id": 1 00:05:47.395 } 00:05:47.395 Got JSON-RPC error response 00:05:47.395 response: 00:05:47.395 { 00:05:47.395 "code": -32603, 00:05:47.395 "message": "Internal error" 00:05:47.395 } 00:05:47.395 14:21:39 -- common/autotest_common.sh@1589 -- # true 00:05:47.395 14:21:39 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:05:47.395 14:21:39 -- common/autotest_common.sh@1593 -- # killprocess 1231601 00:05:47.395 14:21:39 -- common/autotest_common.sh@950 -- # '[' -z 1231601 ']' 00:05:47.395 14:21:39 -- common/autotest_common.sh@954 -- # kill -0 1231601 00:05:47.395 14:21:39 -- common/autotest_common.sh@955 -- # uname 00:05:47.395 14:21:39 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:47.395 14:21:39 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1231601 00:05:47.395 14:21:39 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:47.395 14:21:39 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:47.395 14:21:39 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1231601' 00:05:47.395 killing process with pid 1231601 00:05:47.395 14:21:39 -- common/autotest_common.sh@969 -- # kill 1231601 00:05:47.395 14:21:39 -- common/autotest_common.sh@974 -- # wait 1231601 00:05:49.293 14:21:41 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:49.293 14:21:41 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:49.293 14:21:41 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:49.293 14:21:41 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:49.293 14:21:41 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:49.293 14:21:41 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:49.293 14:21:41 -- common/autotest_common.sh@10 -- # set +x 00:05:49.293 14:21:41 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:49.293 14:21:41 -- spdk/autotest.sh@155 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:49.293 14:21:41 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:49.293 14:21:41 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:49.293 14:21:41 -- common/autotest_common.sh@10 -- # set +x 00:05:49.293 ************************************ 00:05:49.293 START TEST env 00:05:49.293 ************************************ 00:05:49.293 14:21:41 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:49.293 * Looking for test storage... 00:05:49.293 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:49.293 14:21:41 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:49.293 14:21:41 env -- common/autotest_common.sh@1681 -- # lcov --version 00:05:49.293 14:21:41 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:49.293 14:21:41 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:49.293 14:21:41 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:49.293 14:21:41 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:49.293 14:21:41 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:49.293 14:21:41 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:49.293 14:21:41 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:49.293 14:21:41 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:49.293 14:21:41 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:49.293 14:21:41 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:49.293 14:21:41 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:49.293 14:21:41 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:49.293 14:21:41 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:49.293 14:21:41 env -- scripts/common.sh@344 -- # case "$op" in 00:05:49.293 14:21:41 env -- scripts/common.sh@345 -- # : 1 00:05:49.293 14:21:41 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:49.293 14:21:41 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:49.293 14:21:41 env -- scripts/common.sh@365 -- # decimal 1 00:05:49.293 14:21:41 env -- scripts/common.sh@353 -- # local d=1 00:05:49.293 14:21:41 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:49.293 14:21:41 env -- scripts/common.sh@355 -- # echo 1 00:05:49.293 14:21:41 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:49.293 14:21:41 env -- scripts/common.sh@366 -- # decimal 2 00:05:49.293 14:21:41 env -- scripts/common.sh@353 -- # local d=2 00:05:49.293 14:21:41 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:49.293 14:21:41 env -- scripts/common.sh@355 -- # echo 2 00:05:49.293 14:21:41 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:49.293 14:21:41 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:49.293 14:21:41 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:49.293 14:21:41 env -- scripts/common.sh@368 -- # return 0 00:05:49.293 14:21:41 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:49.293 14:21:41 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:49.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.293 --rc genhtml_branch_coverage=1 00:05:49.293 --rc genhtml_function_coverage=1 00:05:49.293 --rc genhtml_legend=1 00:05:49.293 --rc geninfo_all_blocks=1 00:05:49.294 --rc geninfo_unexecuted_blocks=1 00:05:49.294 00:05:49.294 ' 00:05:49.294 14:21:41 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:49.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.294 --rc genhtml_branch_coverage=1 00:05:49.294 --rc genhtml_function_coverage=1 00:05:49.294 --rc genhtml_legend=1 00:05:49.294 --rc geninfo_all_blocks=1 00:05:49.294 --rc geninfo_unexecuted_blocks=1 00:05:49.294 00:05:49.294 ' 00:05:49.294 14:21:41 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:49.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.294 --rc genhtml_branch_coverage=1 00:05:49.294 --rc genhtml_function_coverage=1 00:05:49.294 --rc genhtml_legend=1 00:05:49.294 --rc geninfo_all_blocks=1 00:05:49.294 --rc geninfo_unexecuted_blocks=1 00:05:49.294 00:05:49.294 ' 00:05:49.294 14:21:41 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:49.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.294 --rc genhtml_branch_coverage=1 00:05:49.294 --rc genhtml_function_coverage=1 00:05:49.294 --rc genhtml_legend=1 00:05:49.294 --rc geninfo_all_blocks=1 00:05:49.294 --rc geninfo_unexecuted_blocks=1 00:05:49.294 00:05:49.294 ' 00:05:49.294 14:21:41 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:49.294 14:21:41 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:49.294 14:21:41 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:49.294 14:21:41 env -- common/autotest_common.sh@10 -- # set +x 00:05:49.294 ************************************ 00:05:49.294 START TEST env_memory 00:05:49.294 ************************************ 00:05:49.294 14:21:41 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:49.294 00:05:49.294 00:05:49.294 CUnit - A unit testing framework for C - Version 2.1-3 00:05:49.294 http://cunit.sourceforge.net/ 00:05:49.294 00:05:49.294 00:05:49.294 Suite: memory 00:05:49.294 Test: alloc and free memory map ...[2024-11-02 14:21:41.301394] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:49.294 passed 00:05:49.294 Test: mem map translation ...[2024-11-02 14:21:41.321570] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:49.294 [2024-11-02 14:21:41.321590] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:49.294 [2024-11-02 14:21:41.321645] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:49.294 [2024-11-02 14:21:41.321657] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:49.553 passed 00:05:49.553 Test: mem map registration ...[2024-11-02 14:21:41.363995] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:49.553 [2024-11-02 14:21:41.364015] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:49.553 passed 00:05:49.553 Test: mem map adjacent registrations ...passed 00:05:49.553 00:05:49.553 Run Summary: Type Total Ran Passed Failed Inactive 00:05:49.553 suites 1 1 n/a 0 0 00:05:49.553 tests 4 4 4 0 0 00:05:49.553 asserts 152 152 152 0 n/a 00:05:49.553 00:05:49.553 Elapsed time = 0.147 seconds 00:05:49.553 00:05:49.553 real 0m0.155s 00:05:49.553 user 0m0.144s 00:05:49.553 sys 0m0.010s 00:05:49.553 14:21:41 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:49.553 14:21:41 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:49.553 ************************************ 00:05:49.553 END TEST env_memory 00:05:49.553 ************************************ 00:05:49.553 14:21:41 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:49.553 14:21:41 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:49.553 14:21:41 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:49.553 14:21:41 env -- common/autotest_common.sh@10 -- # set +x 00:05:49.553 ************************************ 00:05:49.553 START TEST env_vtophys 00:05:49.553 ************************************ 00:05:49.553 14:21:41 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:49.553 EAL: lib.eal log level changed from notice to debug 00:05:49.553 EAL: Detected lcore 0 as core 0 on socket 0 00:05:49.553 EAL: Detected lcore 1 as core 1 on socket 0 00:05:49.553 EAL: Detected lcore 2 as core 2 on socket 0 00:05:49.553 EAL: Detected lcore 3 as core 3 on socket 0 00:05:49.553 EAL: Detected lcore 4 as core 4 on socket 0 00:05:49.553 EAL: Detected lcore 5 as core 5 on socket 0 00:05:49.553 EAL: Detected lcore 6 as core 8 on socket 0 00:05:49.553 EAL: Detected lcore 7 as core 9 on socket 0 00:05:49.553 EAL: Detected lcore 8 as core 10 on socket 0 00:05:49.553 EAL: Detected lcore 9 as core 11 on socket 0 00:05:49.553 EAL: Detected lcore 10 
as core 12 on socket 0 00:05:49.553 EAL: Detected lcore 11 as core 13 on socket 0 00:05:49.553 EAL: Detected lcore 12 as core 0 on socket 1 00:05:49.553 EAL: Detected lcore 13 as core 1 on socket 1 00:05:49.553 EAL: Detected lcore 14 as core 2 on socket 1 00:05:49.553 EAL: Detected lcore 15 as core 3 on socket 1 00:05:49.553 EAL: Detected lcore 16 as core 4 on socket 1 00:05:49.553 EAL: Detected lcore 17 as core 5 on socket 1 00:05:49.553 EAL: Detected lcore 18 as core 8 on socket 1 00:05:49.553 EAL: Detected lcore 19 as core 9 on socket 1 00:05:49.553 EAL: Detected lcore 20 as core 10 on socket 1 00:05:49.553 EAL: Detected lcore 21 as core 11 on socket 1 00:05:49.553 EAL: Detected lcore 22 as core 12 on socket 1 00:05:49.553 EAL: Detected lcore 23 as core 13 on socket 1 00:05:49.553 EAL: Detected lcore 24 as core 0 on socket 0 00:05:49.553 EAL: Detected lcore 25 as core 1 on socket 0 00:05:49.553 EAL: Detected lcore 26 as core 2 on socket 0 00:05:49.553 EAL: Detected lcore 27 as core 3 on socket 0 00:05:49.553 EAL: Detected lcore 28 as core 4 on socket 0 00:05:49.553 EAL: Detected lcore 29 as core 5 on socket 0 00:05:49.553 EAL: Detected lcore 30 as core 8 on socket 0 00:05:49.553 EAL: Detected lcore 31 as core 9 on socket 0 00:05:49.553 EAL: Detected lcore 32 as core 10 on socket 0 00:05:49.553 EAL: Detected lcore 33 as core 11 on socket 0 00:05:49.553 EAL: Detected lcore 34 as core 12 on socket 0 00:05:49.553 EAL: Detected lcore 35 as core 13 on socket 0 00:05:49.553 EAL: Detected lcore 36 as core 0 on socket 1 00:05:49.553 EAL: Detected lcore 37 as core 1 on socket 1 00:05:49.553 EAL: Detected lcore 38 as core 2 on socket 1 00:05:49.553 EAL: Detected lcore 39 as core 3 on socket 1 00:05:49.553 EAL: Detected lcore 40 as core 4 on socket 1 00:05:49.553 EAL: Detected lcore 41 as core 5 on socket 1 00:05:49.553 EAL: Detected lcore 42 as core 8 on socket 1 00:05:49.553 EAL: Detected lcore 43 as core 9 on socket 1 00:05:49.553 EAL: Detected lcore 44 as core 10 on socket 1 00:05:49.553 EAL: Detected lcore 45 as core 11 on socket 1 00:05:49.553 EAL: Detected lcore 46 as core 12 on socket 1 00:05:49.553 EAL: Detected lcore 47 as core 13 on socket 1 00:05:49.553 EAL: Maximum logical cores by configuration: 128 00:05:49.553 EAL: Detected CPU lcores: 48 00:05:49.553 EAL: Detected NUMA nodes: 2 00:05:49.553 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:49.553 EAL: Detected shared linkage of DPDK 00:05:49.553 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:49.553 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:49.553 EAL: Registered [vdev] bus. 
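The "Detected lcore N as core M on socket S" lines above come from the CPU topology the kernel exports under sysfs. A quick way to reproduce the same mapping outside of EAL, as a sketch built only on standard sysfs files (not taken from the test scripts):

  # Print "lcore <n> as core <core_id> on socket <package_id>" from sysfs,
  # the same topology EAL reports during initialization.
  for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
      n=${cpu##*cpu}
      core=$(cat "$cpu/topology/core_id")
      sock=$(cat "$cpu/topology/physical_package_id")
      echo "lcore $n as core $core on socket $sock"
  done | sort -k2,2n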
00:05:49.553 EAL: bus.vdev log level changed from disabled to notice 00:05:49.553 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:49.553 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:49.553 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:49.553 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:49.553 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:49.553 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:49.553 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:49.553 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:49.553 EAL: No shared files mode enabled, IPC will be disabled 00:05:49.553 EAL: No shared files mode enabled, IPC is disabled 00:05:49.553 EAL: Bus pci wants IOVA as 'DC' 00:05:49.553 EAL: Bus vdev wants IOVA as 'DC' 00:05:49.553 EAL: Buses did not request a specific IOVA mode. 00:05:49.553 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:49.553 EAL: Selected IOVA mode 'VA' 00:05:49.553 EAL: Probing VFIO support... 00:05:49.553 EAL: IOMMU type 1 (Type 1) is supported 00:05:49.553 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:49.553 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:49.553 EAL: VFIO support initialized 00:05:49.553 EAL: Ask a virtual area of 0x2e000 bytes 00:05:49.553 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:49.553 EAL: Setting up physically contiguous memory... 
00:05:49.553 EAL: Setting maximum number of open files to 524288 00:05:49.553 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:49.553 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:49.553 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:49.553 EAL: Ask a virtual area of 0x61000 bytes 00:05:49.553 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:49.553 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:49.553 EAL: Ask a virtual area of 0x400000000 bytes 00:05:49.553 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:49.553 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:49.553 EAL: Ask a virtual area of 0x61000 bytes 00:05:49.553 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:49.553 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:49.553 EAL: Ask a virtual area of 0x400000000 bytes 00:05:49.553 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:49.553 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:49.554 EAL: Ask a virtual area of 0x61000 bytes 00:05:49.554 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:49.554 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:49.554 EAL: Ask a virtual area of 0x400000000 bytes 00:05:49.554 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:49.554 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:49.554 EAL: Ask a virtual area of 0x61000 bytes 00:05:49.554 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:49.554 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:49.554 EAL: Ask a virtual area of 0x400000000 bytes 00:05:49.554 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:49.554 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:49.554 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:49.554 EAL: Ask a virtual area of 0x61000 bytes 00:05:49.554 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:49.554 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:49.554 EAL: Ask a virtual area of 0x400000000 bytes 00:05:49.554 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:49.554 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:49.554 EAL: Ask a virtual area of 0x61000 bytes 00:05:49.554 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:49.554 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:49.554 EAL: Ask a virtual area of 0x400000000 bytes 00:05:49.554 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:49.554 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:49.554 EAL: Ask a virtual area of 0x61000 bytes 00:05:49.554 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:49.554 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:49.554 EAL: Ask a virtual area of 0x400000000 bytes 00:05:49.554 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:49.554 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:49.554 EAL: Ask a virtual area of 0x61000 bytes 00:05:49.554 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:49.554 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:49.554 EAL: Ask a virtual area of 0x400000000 bytes 00:05:49.554 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:49.554 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:49.554 EAL: Hugepages will be freed exactly as allocated. 00:05:49.554 EAL: No shared files mode enabled, IPC is disabled 00:05:49.554 EAL: No shared files mode enabled, IPC is disabled 00:05:49.554 EAL: TSC frequency is ~2700000 KHz 00:05:49.554 EAL: Main lcore 0 is ready (tid=7fbb472b4a00;cpuset=[0]) 00:05:49.554 EAL: Trying to obtain current memory policy. 00:05:49.554 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:49.554 EAL: Restoring previous memory policy: 0 00:05:49.554 EAL: request: mp_malloc_sync 00:05:49.554 EAL: No shared files mode enabled, IPC is disabled 00:05:49.554 EAL: Heap on socket 0 was expanded by 2MB 00:05:49.554 EAL: No shared files mode enabled, IPC is disabled 00:05:49.554 EAL: No shared files mode enabled, IPC is disabled 00:05:49.554 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:49.554 EAL: Mem event callback 'spdk:(nil)' registered 00:05:49.554 00:05:49.554 00:05:49.554 CUnit - A unit testing framework for C - Version 2.1-3 00:05:49.554 http://cunit.sourceforge.net/ 00:05:49.554 00:05:49.554 00:05:49.554 Suite: components_suite 00:05:49.554 Test: vtophys_malloc_test ...passed 00:05:49.554 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:49.554 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:49.554 EAL: Restoring previous memory policy: 4 00:05:49.554 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.554 EAL: request: mp_malloc_sync 00:05:49.554 EAL: No shared files mode enabled, IPC is disabled 00:05:49.554 EAL: Heap on socket 0 was expanded by 4MB 00:05:49.554 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.554 EAL: request: mp_malloc_sync 00:05:49.554 EAL: No shared files mode enabled, IPC is disabled 00:05:49.554 EAL: Heap on socket 0 was shrunk by 4MB 00:05:49.554 EAL: Trying to obtain current memory policy. 00:05:49.554 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:49.554 EAL: Restoring previous memory policy: 4 00:05:49.554 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.554 EAL: request: mp_malloc_sync 00:05:49.554 EAL: No shared files mode enabled, IPC is disabled 00:05:49.554 EAL: Heap on socket 0 was expanded by 6MB 00:05:49.554 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.554 EAL: request: mp_malloc_sync 00:05:49.554 EAL: No shared files mode enabled, IPC is disabled 00:05:49.554 EAL: Heap on socket 0 was shrunk by 6MB 00:05:49.554 EAL: Trying to obtain current memory policy. 00:05:49.554 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:49.554 EAL: Restoring previous memory policy: 4 00:05:49.554 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.554 EAL: request: mp_malloc_sync 00:05:49.554 EAL: No shared files mode enabled, IPC is disabled 00:05:49.554 EAL: Heap on socket 0 was expanded by 10MB 00:05:49.554 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.554 EAL: request: mp_malloc_sync 00:05:49.554 EAL: No shared files mode enabled, IPC is disabled 00:05:49.554 EAL: Heap on socket 0 was shrunk by 10MB 00:05:49.554 EAL: Trying to obtain current memory policy. 
00:05:49.554 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:49.554 EAL: Restoring previous memory policy: 4 00:05:49.554 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.554 EAL: request: mp_malloc_sync 00:05:49.554 EAL: No shared files mode enabled, IPC is disabled 00:05:49.554 EAL: Heap on socket 0 was expanded by 18MB 00:05:49.554 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.554 EAL: request: mp_malloc_sync 00:05:49.554 EAL: No shared files mode enabled, IPC is disabled 00:05:49.554 EAL: Heap on socket 0 was shrunk by 18MB 00:05:49.554 EAL: Trying to obtain current memory policy. 00:05:49.554 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:49.554 EAL: Restoring previous memory policy: 4 00:05:49.554 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.554 EAL: request: mp_malloc_sync 00:05:49.554 EAL: No shared files mode enabled, IPC is disabled 00:05:49.554 EAL: Heap on socket 0 was expanded by 34MB 00:05:49.554 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.554 EAL: request: mp_malloc_sync 00:05:49.554 EAL: No shared files mode enabled, IPC is disabled 00:05:49.554 EAL: Heap on socket 0 was shrunk by 34MB 00:05:49.554 EAL: Trying to obtain current memory policy. 00:05:49.554 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:49.554 EAL: Restoring previous memory policy: 4 00:05:49.554 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.554 EAL: request: mp_malloc_sync 00:05:49.554 EAL: No shared files mode enabled, IPC is disabled 00:05:49.554 EAL: Heap on socket 0 was expanded by 66MB 00:05:49.554 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.812 EAL: request: mp_malloc_sync 00:05:49.812 EAL: No shared files mode enabled, IPC is disabled 00:05:49.812 EAL: Heap on socket 0 was shrunk by 66MB 00:05:49.812 EAL: Trying to obtain current memory policy. 00:05:49.812 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:49.812 EAL: Restoring previous memory policy: 4 00:05:49.812 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.812 EAL: request: mp_malloc_sync 00:05:49.812 EAL: No shared files mode enabled, IPC is disabled 00:05:49.812 EAL: Heap on socket 0 was expanded by 130MB 00:05:49.812 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.812 EAL: request: mp_malloc_sync 00:05:49.812 EAL: No shared files mode enabled, IPC is disabled 00:05:49.812 EAL: Heap on socket 0 was shrunk by 130MB 00:05:49.812 EAL: Trying to obtain current memory policy. 00:05:49.812 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:49.812 EAL: Restoring previous memory policy: 4 00:05:49.812 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.812 EAL: request: mp_malloc_sync 00:05:49.812 EAL: No shared files mode enabled, IPC is disabled 00:05:49.812 EAL: Heap on socket 0 was expanded by 258MB 00:05:49.812 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.070 EAL: request: mp_malloc_sync 00:05:50.070 EAL: No shared files mode enabled, IPC is disabled 00:05:50.070 EAL: Heap on socket 0 was shrunk by 258MB 00:05:50.070 EAL: Trying to obtain current memory policy. 
00:05:50.070 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.070 EAL: Restoring previous memory policy: 4 00:05:50.070 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.070 EAL: request: mp_malloc_sync 00:05:50.070 EAL: No shared files mode enabled, IPC is disabled 00:05:50.070 EAL: Heap on socket 0 was expanded by 514MB 00:05:50.070 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.329 EAL: request: mp_malloc_sync 00:05:50.329 EAL: No shared files mode enabled, IPC is disabled 00:05:50.329 EAL: Heap on socket 0 was shrunk by 514MB 00:05:50.329 EAL: Trying to obtain current memory policy. 00:05:50.329 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.587 EAL: Restoring previous memory policy: 4 00:05:50.587 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.587 EAL: request: mp_malloc_sync 00:05:50.587 EAL: No shared files mode enabled, IPC is disabled 00:05:50.587 EAL: Heap on socket 0 was expanded by 1026MB 00:05:50.846 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.104 EAL: request: mp_malloc_sync 00:05:51.104 EAL: No shared files mode enabled, IPC is disabled 00:05:51.104 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:51.104 passed 00:05:51.104 00:05:51.104 Run Summary: Type Total Ran Passed Failed Inactive 00:05:51.104 suites 1 1 n/a 0 0 00:05:51.104 tests 2 2 2 0 0 00:05:51.104 asserts 497 497 497 0 n/a 00:05:51.104 00:05:51.104 Elapsed time = 1.362 seconds 00:05:51.104 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.104 EAL: request: mp_malloc_sync 00:05:51.104 EAL: No shared files mode enabled, IPC is disabled 00:05:51.104 EAL: Heap on socket 0 was shrunk by 2MB 00:05:51.104 EAL: No shared files mode enabled, IPC is disabled 00:05:51.104 EAL: No shared files mode enabled, IPC is disabled 00:05:51.104 EAL: No shared files mode enabled, IPC is disabled 00:05:51.104 00:05:51.104 real 0m1.484s 00:05:51.104 user 0m0.839s 00:05:51.104 sys 0m0.609s 00:05:51.104 14:21:42 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:51.104 14:21:42 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:51.104 ************************************ 00:05:51.104 END TEST env_vtophys 00:05:51.104 ************************************ 00:05:51.104 14:21:42 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:51.104 14:21:42 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:51.104 14:21:42 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:51.104 14:21:42 env -- common/autotest_common.sh@10 -- # set +x 00:05:51.104 ************************************ 00:05:51.104 START TEST env_pci 00:05:51.104 ************************************ 00:05:51.104 14:21:43 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:51.104 00:05:51.104 00:05:51.104 CUnit - A unit testing framework for C - Version 2.1-3 00:05:51.104 http://cunit.sourceforge.net/ 00:05:51.104 00:05:51.104 00:05:51.104 Suite: pci 00:05:51.104 Test: pci_hook ...[2024-11-02 14:21:43.019639] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1232504 has claimed it 00:05:51.104 EAL: Cannot find device (10000:00:01.0) 00:05:51.104 EAL: Failed to attach device on primary process 00:05:51.104 passed 00:05:51.104 00:05:51.104 Run Summary: Type Total Ran Passed Failed Inactive 
00:05:51.104 suites 1 1 n/a 0 0 00:05:51.104 tests 1 1 1 0 0 00:05:51.104 asserts 25 25 25 0 n/a 00:05:51.104 00:05:51.104 Elapsed time = 0.021 seconds 00:05:51.104 00:05:51.104 real 0m0.034s 00:05:51.104 user 0m0.008s 00:05:51.104 sys 0m0.025s 00:05:51.104 14:21:43 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:51.104 14:21:43 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:51.104 ************************************ 00:05:51.104 END TEST env_pci 00:05:51.104 ************************************ 00:05:51.104 14:21:43 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:51.104 14:21:43 env -- env/env.sh@15 -- # uname 00:05:51.104 14:21:43 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:51.104 14:21:43 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:51.104 14:21:43 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:51.104 14:21:43 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:51.104 14:21:43 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:51.104 14:21:43 env -- common/autotest_common.sh@10 -- # set +x 00:05:51.104 ************************************ 00:05:51.104 START TEST env_dpdk_post_init 00:05:51.104 ************************************ 00:05:51.104 14:21:43 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:51.104 EAL: Detected CPU lcores: 48 00:05:51.104 EAL: Detected NUMA nodes: 2 00:05:51.104 EAL: Detected shared linkage of DPDK 00:05:51.104 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:51.104 EAL: Selected IOVA mode 'VA' 00:05:51.104 EAL: VFIO support initialized 00:05:51.104 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:51.362 EAL: Using IOMMU type 1 (Type 1) 00:05:51.362 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:05:51.362 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:05:51.362 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:05:51.362 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:05:51.362 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:05:51.362 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:05:51.362 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:05:51.362 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:05:51.362 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:05:51.362 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:05:51.362 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:05:51.363 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:05:51.363 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:05:51.363 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:05:51.363 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:05:51.363 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:05:52.313 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 
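The spdk_ioat and spdk_nvme probe lines above only appear because the devices were left bound to vfio-pci by the setup.sh runs earlier in this log. The current binding of any device named here can be confirmed directly from sysfs; the BDFs below are illustrative values taken from the log:

  # Show which kernel driver each device is currently bound to.
  for bdf in 0000:88:00.0 0000:00:04.0 0000:80:04.0; do
      link=/sys/bus/pci/devices/$bdf/driver
      if [ -e "$link" ]; then
          echo "$bdf -> $(basename "$(readlink -f "$link")")"
      else
          echo "$bdf -> no driver bound"
      fi
  done

For a single device, lspci -ks 88:00.0 reports the same information.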
00:05:55.592 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:05:55.592 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:05:55.592 Starting DPDK initialization... 00:05:55.592 Starting SPDK post initialization... 00:05:55.592 SPDK NVMe probe 00:05:55.592 Attaching to 0000:88:00.0 00:05:55.592 Attached to 0000:88:00.0 00:05:55.592 Cleaning up... 00:05:55.592 00:05:55.592 real 0m4.403s 00:05:55.592 user 0m3.265s 00:05:55.592 sys 0m0.198s 00:05:55.592 14:21:47 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:55.592 14:21:47 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:55.592 ************************************ 00:05:55.592 END TEST env_dpdk_post_init 00:05:55.592 ************************************ 00:05:55.592 14:21:47 env -- env/env.sh@26 -- # uname 00:05:55.592 14:21:47 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:55.592 14:21:47 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:55.592 14:21:47 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:55.592 14:21:47 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:55.592 14:21:47 env -- common/autotest_common.sh@10 -- # set +x 00:05:55.592 ************************************ 00:05:55.592 START TEST env_mem_callbacks 00:05:55.592 ************************************ 00:05:55.592 14:21:47 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:55.592 EAL: Detected CPU lcores: 48 00:05:55.592 EAL: Detected NUMA nodes: 2 00:05:55.592 EAL: Detected shared linkage of DPDK 00:05:55.592 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:55.592 EAL: Selected IOVA mode 'VA' 00:05:55.592 EAL: VFIO support initialized 00:05:55.592 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:55.592 00:05:55.592 00:05:55.592 CUnit - A unit testing framework for C - Version 2.1-3 00:05:55.592 http://cunit.sourceforge.net/ 00:05:55.592 00:05:55.592 00:05:55.592 Suite: memory 00:05:55.592 Test: test ... 
00:05:55.592 register 0x200000200000 2097152 00:05:55.592 malloc 3145728 00:05:55.592 register 0x200000400000 4194304 00:05:55.592 buf 0x200000500000 len 3145728 PASSED 00:05:55.592 malloc 64 00:05:55.592 buf 0x2000004fff40 len 64 PASSED 00:05:55.592 malloc 4194304 00:05:55.592 register 0x200000800000 6291456 00:05:55.592 buf 0x200000a00000 len 4194304 PASSED 00:05:55.592 free 0x200000500000 3145728 00:05:55.592 free 0x2000004fff40 64 00:05:55.592 unregister 0x200000400000 4194304 PASSED 00:05:55.592 free 0x200000a00000 4194304 00:05:55.592 unregister 0x200000800000 6291456 PASSED 00:05:55.592 malloc 8388608 00:05:55.592 register 0x200000400000 10485760 00:05:55.592 buf 0x200000600000 len 8388608 PASSED 00:05:55.592 free 0x200000600000 8388608 00:05:55.592 unregister 0x200000400000 10485760 PASSED 00:05:55.592 passed 00:05:55.592 00:05:55.592 Run Summary: Type Total Ran Passed Failed Inactive 00:05:55.592 suites 1 1 n/a 0 0 00:05:55.592 tests 1 1 1 0 0 00:05:55.592 asserts 15 15 15 0 n/a 00:05:55.592 00:05:55.592 Elapsed time = 0.005 seconds 00:05:55.592 00:05:55.592 real 0m0.048s 00:05:55.592 user 0m0.011s 00:05:55.592 sys 0m0.037s 00:05:55.592 14:21:47 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:55.592 14:21:47 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:55.592 ************************************ 00:05:55.592 END TEST env_mem_callbacks 00:05:55.592 ************************************ 00:05:55.592 00:05:55.592 real 0m6.520s 00:05:55.592 user 0m4.472s 00:05:55.592 sys 0m1.091s 00:05:55.592 14:21:47 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:55.592 14:21:47 env -- common/autotest_common.sh@10 -- # set +x 00:05:55.592 ************************************ 00:05:55.592 END TEST env 00:05:55.592 ************************************ 00:05:55.592 14:21:47 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:55.592 14:21:47 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:55.592 14:21:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:55.592 14:21:47 -- common/autotest_common.sh@10 -- # set +x 00:05:55.851 ************************************ 00:05:55.851 START TEST rpc 00:05:55.851 ************************************ 00:05:55.851 14:21:47 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:55.851 * Looking for test storage... 
00:05:55.851 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:55.851 14:21:47 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:55.851 14:21:47 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:55.851 14:21:47 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:55.851 14:21:47 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:55.851 14:21:47 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:55.851 14:21:47 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:55.851 14:21:47 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:55.851 14:21:47 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:55.851 14:21:47 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:55.851 14:21:47 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:55.851 14:21:47 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:55.851 14:21:47 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:55.851 14:21:47 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:55.851 14:21:47 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:55.851 14:21:47 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:55.851 14:21:47 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:55.851 14:21:47 rpc -- scripts/common.sh@345 -- # : 1 00:05:55.851 14:21:47 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:55.851 14:21:47 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:55.851 14:21:47 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:55.851 14:21:47 rpc -- scripts/common.sh@353 -- # local d=1 00:05:55.851 14:21:47 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:55.851 14:21:47 rpc -- scripts/common.sh@355 -- # echo 1 00:05:55.851 14:21:47 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:55.851 14:21:47 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:55.851 14:21:47 rpc -- scripts/common.sh@353 -- # local d=2 00:05:55.851 14:21:47 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:55.851 14:21:47 rpc -- scripts/common.sh@355 -- # echo 2 00:05:55.851 14:21:47 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:55.851 14:21:47 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:55.851 14:21:47 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:55.851 14:21:47 rpc -- scripts/common.sh@368 -- # return 0 00:05:55.851 14:21:47 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:55.851 14:21:47 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:55.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.851 --rc genhtml_branch_coverage=1 00:05:55.851 --rc genhtml_function_coverage=1 00:05:55.851 --rc genhtml_legend=1 00:05:55.851 --rc geninfo_all_blocks=1 00:05:55.851 --rc geninfo_unexecuted_blocks=1 00:05:55.851 00:05:55.851 ' 00:05:55.851 14:21:47 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:55.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.851 --rc genhtml_branch_coverage=1 00:05:55.851 --rc genhtml_function_coverage=1 00:05:55.851 --rc genhtml_legend=1 00:05:55.851 --rc geninfo_all_blocks=1 00:05:55.851 --rc geninfo_unexecuted_blocks=1 00:05:55.851 00:05:55.851 ' 00:05:55.851 14:21:47 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:55.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.851 --rc genhtml_branch_coverage=1 00:05:55.851 --rc genhtml_function_coverage=1 
00:05:55.851 --rc genhtml_legend=1 00:05:55.851 --rc geninfo_all_blocks=1 00:05:55.851 --rc geninfo_unexecuted_blocks=1 00:05:55.851 00:05:55.851 ' 00:05:55.851 14:21:47 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:55.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.851 --rc genhtml_branch_coverage=1 00:05:55.851 --rc genhtml_function_coverage=1 00:05:55.851 --rc genhtml_legend=1 00:05:55.851 --rc geninfo_all_blocks=1 00:05:55.851 --rc geninfo_unexecuted_blocks=1 00:05:55.851 00:05:55.851 ' 00:05:55.851 14:21:47 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1233284 00:05:55.851 14:21:47 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:55.851 14:21:47 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:55.851 14:21:47 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1233284 00:05:55.851 14:21:47 rpc -- common/autotest_common.sh@831 -- # '[' -z 1233284 ']' 00:05:55.851 14:21:47 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.851 14:21:47 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:55.851 14:21:47 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.851 14:21:47 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:55.851 14:21:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.851 [2024-11-02 14:21:47.861500] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:55.851 [2024-11-02 14:21:47.861592] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1233284 ] 00:05:56.109 [2024-11-02 14:21:47.923702] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.109 [2024-11-02 14:21:48.014892] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:56.110 [2024-11-02 14:21:48.014958] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1233284' to capture a snapshot of events at runtime. 00:05:56.110 [2024-11-02 14:21:48.014991] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:56.110 [2024-11-02 14:21:48.015002] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:56.110 [2024-11-02 14:21:48.015012] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1233284 for offline analysis/debug. 
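The waitforlisten step above polls the spdk_tgt RPC socket until the target answers. A minimal standalone version of that start-and-wait sequence is sketched below; the checkout path, the polling loop, and the use of rpc_get_methods as a liveness probe are assumptions for illustration, not copied from the helper itself:

  # Start spdk_tgt with the bdev tracepoint group enabled and wait for its RPC socket.
  SPDK_DIR=${SPDK_DIR:-/path/to/spdk}   # placeholder checkout location
  sock=/var/tmp/spdk.sock

  "$SPDK_DIR/build/bin/spdk_tgt" -e bdev &
  tgt_pid=$!

  for _ in $(seq 1 100); do
      if "$SPDK_DIR/scripts/rpc.py" -s "$sock" -t 1 rpc_get_methods >/dev/null 2>&1; then
          echo "spdk_tgt ($tgt_pid) is listening on $sock"
          break
      fi
      sleep 0.1
  done

  # Runtime events can then be captured with the command the app itself suggests:
  #   build/bin/spdk_trace -s spdk_tgt -p $tgt_pid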
00:05:56.110 [2024-11-02 14:21:48.015040] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.368 14:21:48 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:56.368 14:21:48 rpc -- common/autotest_common.sh@864 -- # return 0 00:05:56.368 14:21:48 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:56.368 14:21:48 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:56.368 14:21:48 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:56.368 14:21:48 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:56.368 14:21:48 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:56.368 14:21:48 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:56.368 14:21:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.368 ************************************ 00:05:56.368 START TEST rpc_integrity 00:05:56.368 ************************************ 00:05:56.368 14:21:48 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:56.368 14:21:48 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:56.368 14:21:48 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.368 14:21:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.368 14:21:48 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.368 14:21:48 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:56.368 14:21:48 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:56.368 14:21:48 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:56.368 14:21:48 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:56.368 14:21:48 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.368 14:21:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.368 14:21:48 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.368 14:21:48 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:56.368 14:21:48 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:56.368 14:21:48 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.368 14:21:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.368 14:21:48 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.368 14:21:48 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:56.368 { 00:05:56.368 "name": "Malloc0", 00:05:56.368 "aliases": [ 00:05:56.368 "8f5326ae-1846-427c-94c4-c45a1eccd51f" 00:05:56.368 ], 00:05:56.368 "product_name": "Malloc disk", 00:05:56.368 "block_size": 512, 00:05:56.368 "num_blocks": 16384, 00:05:56.368 "uuid": "8f5326ae-1846-427c-94c4-c45a1eccd51f", 00:05:56.368 "assigned_rate_limits": { 00:05:56.368 "rw_ios_per_sec": 0, 00:05:56.368 "rw_mbytes_per_sec": 0, 00:05:56.368 "r_mbytes_per_sec": 0, 00:05:56.368 "w_mbytes_per_sec": 0 00:05:56.368 }, 
00:05:56.368 "claimed": false, 00:05:56.368 "zoned": false, 00:05:56.368 "supported_io_types": { 00:05:56.368 "read": true, 00:05:56.368 "write": true, 00:05:56.368 "unmap": true, 00:05:56.368 "flush": true, 00:05:56.368 "reset": true, 00:05:56.368 "nvme_admin": false, 00:05:56.368 "nvme_io": false, 00:05:56.368 "nvme_io_md": false, 00:05:56.368 "write_zeroes": true, 00:05:56.368 "zcopy": true, 00:05:56.368 "get_zone_info": false, 00:05:56.368 "zone_management": false, 00:05:56.368 "zone_append": false, 00:05:56.368 "compare": false, 00:05:56.368 "compare_and_write": false, 00:05:56.368 "abort": true, 00:05:56.368 "seek_hole": false, 00:05:56.368 "seek_data": false, 00:05:56.368 "copy": true, 00:05:56.368 "nvme_iov_md": false 00:05:56.368 }, 00:05:56.368 "memory_domains": [ 00:05:56.368 { 00:05:56.368 "dma_device_id": "system", 00:05:56.368 "dma_device_type": 1 00:05:56.368 }, 00:05:56.368 { 00:05:56.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:56.369 "dma_device_type": 2 00:05:56.369 } 00:05:56.369 ], 00:05:56.369 "driver_specific": {} 00:05:56.369 } 00:05:56.369 ]' 00:05:56.369 14:21:48 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:56.369 14:21:48 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:56.369 14:21:48 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:56.369 14:21:48 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.369 14:21:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.369 [2024-11-02 14:21:48.421024] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:56.369 [2024-11-02 14:21:48.421070] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:56.369 [2024-11-02 14:21:48.421095] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x200f2c0 00:05:56.369 [2024-11-02 14:21:48.421111] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:56.369 [2024-11-02 14:21:48.422764] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:56.369 [2024-11-02 14:21:48.422802] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:56.627 Passthru0 00:05:56.627 14:21:48 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.627 14:21:48 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:56.627 14:21:48 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.627 14:21:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.627 14:21:48 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.627 14:21:48 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:56.627 { 00:05:56.627 "name": "Malloc0", 00:05:56.627 "aliases": [ 00:05:56.627 "8f5326ae-1846-427c-94c4-c45a1eccd51f" 00:05:56.627 ], 00:05:56.627 "product_name": "Malloc disk", 00:05:56.627 "block_size": 512, 00:05:56.627 "num_blocks": 16384, 00:05:56.627 "uuid": "8f5326ae-1846-427c-94c4-c45a1eccd51f", 00:05:56.627 "assigned_rate_limits": { 00:05:56.627 "rw_ios_per_sec": 0, 00:05:56.627 "rw_mbytes_per_sec": 0, 00:05:56.627 "r_mbytes_per_sec": 0, 00:05:56.627 "w_mbytes_per_sec": 0 00:05:56.627 }, 00:05:56.627 "claimed": true, 00:05:56.627 "claim_type": "exclusive_write", 00:05:56.627 "zoned": false, 00:05:56.627 "supported_io_types": { 00:05:56.627 "read": true, 00:05:56.627 "write": true, 00:05:56.627 "unmap": true, 00:05:56.627 "flush": 
true, 00:05:56.627 "reset": true, 00:05:56.627 "nvme_admin": false, 00:05:56.627 "nvme_io": false, 00:05:56.627 "nvme_io_md": false, 00:05:56.627 "write_zeroes": true, 00:05:56.627 "zcopy": true, 00:05:56.627 "get_zone_info": false, 00:05:56.627 "zone_management": false, 00:05:56.627 "zone_append": false, 00:05:56.627 "compare": false, 00:05:56.627 "compare_and_write": false, 00:05:56.627 "abort": true, 00:05:56.627 "seek_hole": false, 00:05:56.627 "seek_data": false, 00:05:56.627 "copy": true, 00:05:56.627 "nvme_iov_md": false 00:05:56.627 }, 00:05:56.627 "memory_domains": [ 00:05:56.627 { 00:05:56.627 "dma_device_id": "system", 00:05:56.627 "dma_device_type": 1 00:05:56.627 }, 00:05:56.627 { 00:05:56.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:56.627 "dma_device_type": 2 00:05:56.627 } 00:05:56.627 ], 00:05:56.627 "driver_specific": {} 00:05:56.627 }, 00:05:56.627 { 00:05:56.627 "name": "Passthru0", 00:05:56.627 "aliases": [ 00:05:56.627 "fcd27535-df69-5ef5-a9b3-a81d179ea8d9" 00:05:56.627 ], 00:05:56.627 "product_name": "passthru", 00:05:56.627 "block_size": 512, 00:05:56.627 "num_blocks": 16384, 00:05:56.627 "uuid": "fcd27535-df69-5ef5-a9b3-a81d179ea8d9", 00:05:56.627 "assigned_rate_limits": { 00:05:56.627 "rw_ios_per_sec": 0, 00:05:56.627 "rw_mbytes_per_sec": 0, 00:05:56.627 "r_mbytes_per_sec": 0, 00:05:56.627 "w_mbytes_per_sec": 0 00:05:56.627 }, 00:05:56.627 "claimed": false, 00:05:56.627 "zoned": false, 00:05:56.627 "supported_io_types": { 00:05:56.627 "read": true, 00:05:56.627 "write": true, 00:05:56.627 "unmap": true, 00:05:56.627 "flush": true, 00:05:56.627 "reset": true, 00:05:56.627 "nvme_admin": false, 00:05:56.627 "nvme_io": false, 00:05:56.627 "nvme_io_md": false, 00:05:56.627 "write_zeroes": true, 00:05:56.627 "zcopy": true, 00:05:56.627 "get_zone_info": false, 00:05:56.627 "zone_management": false, 00:05:56.627 "zone_append": false, 00:05:56.627 "compare": false, 00:05:56.627 "compare_and_write": false, 00:05:56.627 "abort": true, 00:05:56.627 "seek_hole": false, 00:05:56.627 "seek_data": false, 00:05:56.627 "copy": true, 00:05:56.627 "nvme_iov_md": false 00:05:56.627 }, 00:05:56.627 "memory_domains": [ 00:05:56.627 { 00:05:56.627 "dma_device_id": "system", 00:05:56.627 "dma_device_type": 1 00:05:56.627 }, 00:05:56.627 { 00:05:56.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:56.627 "dma_device_type": 2 00:05:56.627 } 00:05:56.627 ], 00:05:56.627 "driver_specific": { 00:05:56.627 "passthru": { 00:05:56.627 "name": "Passthru0", 00:05:56.628 "base_bdev_name": "Malloc0" 00:05:56.628 } 00:05:56.628 } 00:05:56.628 } 00:05:56.628 ]' 00:05:56.628 14:21:48 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:56.628 14:21:48 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:56.628 14:21:48 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:56.628 14:21:48 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.628 14:21:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.628 14:21:48 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.628 14:21:48 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:56.628 14:21:48 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.628 14:21:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.628 14:21:48 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.628 14:21:48 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:05:56.628 14:21:48 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.628 14:21:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.628 14:21:48 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.628 14:21:48 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:56.628 14:21:48 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:56.628 14:21:48 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:56.628 00:05:56.628 real 0m0.226s 00:05:56.628 user 0m0.146s 00:05:56.628 sys 0m0.023s 00:05:56.628 14:21:48 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:56.628 14:21:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.628 ************************************ 00:05:56.628 END TEST rpc_integrity 00:05:56.628 ************************************ 00:05:56.628 14:21:48 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:56.628 14:21:48 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:56.628 14:21:48 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:56.628 14:21:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.628 ************************************ 00:05:56.628 START TEST rpc_plugins 00:05:56.628 ************************************ 00:05:56.628 14:21:48 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:05:56.628 14:21:48 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:56.628 14:21:48 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.628 14:21:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:56.628 14:21:48 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.628 14:21:48 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:56.628 14:21:48 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:56.628 14:21:48 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.628 14:21:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:56.628 14:21:48 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.628 14:21:48 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:56.628 { 00:05:56.628 "name": "Malloc1", 00:05:56.628 "aliases": [ 00:05:56.628 "043a24a9-c034-4d3f-a3d8-75a5b9977844" 00:05:56.628 ], 00:05:56.628 "product_name": "Malloc disk", 00:05:56.628 "block_size": 4096, 00:05:56.628 "num_blocks": 256, 00:05:56.628 "uuid": "043a24a9-c034-4d3f-a3d8-75a5b9977844", 00:05:56.628 "assigned_rate_limits": { 00:05:56.628 "rw_ios_per_sec": 0, 00:05:56.628 "rw_mbytes_per_sec": 0, 00:05:56.628 "r_mbytes_per_sec": 0, 00:05:56.628 "w_mbytes_per_sec": 0 00:05:56.628 }, 00:05:56.628 "claimed": false, 00:05:56.628 "zoned": false, 00:05:56.628 "supported_io_types": { 00:05:56.628 "read": true, 00:05:56.628 "write": true, 00:05:56.628 "unmap": true, 00:05:56.628 "flush": true, 00:05:56.628 "reset": true, 00:05:56.628 "nvme_admin": false, 00:05:56.628 "nvme_io": false, 00:05:56.628 "nvme_io_md": false, 00:05:56.628 "write_zeroes": true, 00:05:56.628 "zcopy": true, 00:05:56.628 "get_zone_info": false, 00:05:56.628 "zone_management": false, 00:05:56.628 "zone_append": false, 00:05:56.628 "compare": false, 00:05:56.628 "compare_and_write": false, 00:05:56.628 "abort": true, 00:05:56.628 "seek_hole": false, 00:05:56.628 "seek_data": false, 00:05:56.628 "copy": true, 00:05:56.628 "nvme_iov_md": false 
00:05:56.628 }, 00:05:56.628 "memory_domains": [ 00:05:56.628 { 00:05:56.628 "dma_device_id": "system", 00:05:56.628 "dma_device_type": 1 00:05:56.628 }, 00:05:56.628 { 00:05:56.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:56.628 "dma_device_type": 2 00:05:56.628 } 00:05:56.628 ], 00:05:56.628 "driver_specific": {} 00:05:56.628 } 00:05:56.628 ]' 00:05:56.628 14:21:48 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:56.628 14:21:48 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:56.628 14:21:48 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:56.628 14:21:48 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.628 14:21:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:56.628 14:21:48 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.628 14:21:48 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:56.628 14:21:48 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.628 14:21:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:56.628 14:21:48 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.628 14:21:48 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:56.628 14:21:48 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:56.886 14:21:48 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:56.886 00:05:56.886 real 0m0.111s 00:05:56.886 user 0m0.068s 00:05:56.886 sys 0m0.011s 00:05:56.886 14:21:48 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:56.886 14:21:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:56.886 ************************************ 00:05:56.886 END TEST rpc_plugins 00:05:56.886 ************************************ 00:05:56.886 14:21:48 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:56.886 14:21:48 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:56.886 14:21:48 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:56.886 14:21:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.886 ************************************ 00:05:56.886 START TEST rpc_trace_cmd_test 00:05:56.886 ************************************ 00:05:56.886 14:21:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:05:56.886 14:21:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:56.886 14:21:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:56.886 14:21:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.886 14:21:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:56.886 14:21:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.886 14:21:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:56.886 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1233284", 00:05:56.886 "tpoint_group_mask": "0x8", 00:05:56.886 "iscsi_conn": { 00:05:56.886 "mask": "0x2", 00:05:56.886 "tpoint_mask": "0x0" 00:05:56.886 }, 00:05:56.886 "scsi": { 00:05:56.886 "mask": "0x4", 00:05:56.886 "tpoint_mask": "0x0" 00:05:56.886 }, 00:05:56.886 "bdev": { 00:05:56.886 "mask": "0x8", 00:05:56.886 "tpoint_mask": "0xffffffffffffffff" 00:05:56.886 }, 00:05:56.886 "nvmf_rdma": { 00:05:56.886 "mask": "0x10", 00:05:56.886 "tpoint_mask": "0x0" 00:05:56.886 }, 00:05:56.886 "nvmf_tcp": { 00:05:56.886 "mask": "0x20", 00:05:56.886 
"tpoint_mask": "0x0" 00:05:56.886 }, 00:05:56.886 "ftl": { 00:05:56.886 "mask": "0x40", 00:05:56.886 "tpoint_mask": "0x0" 00:05:56.886 }, 00:05:56.886 "blobfs": { 00:05:56.886 "mask": "0x80", 00:05:56.886 "tpoint_mask": "0x0" 00:05:56.886 }, 00:05:56.886 "dsa": { 00:05:56.886 "mask": "0x200", 00:05:56.886 "tpoint_mask": "0x0" 00:05:56.886 }, 00:05:56.886 "thread": { 00:05:56.886 "mask": "0x400", 00:05:56.886 "tpoint_mask": "0x0" 00:05:56.886 }, 00:05:56.886 "nvme_pcie": { 00:05:56.886 "mask": "0x800", 00:05:56.886 "tpoint_mask": "0x0" 00:05:56.886 }, 00:05:56.886 "iaa": { 00:05:56.886 "mask": "0x1000", 00:05:56.886 "tpoint_mask": "0x0" 00:05:56.886 }, 00:05:56.886 "nvme_tcp": { 00:05:56.886 "mask": "0x2000", 00:05:56.886 "tpoint_mask": "0x0" 00:05:56.886 }, 00:05:56.886 "bdev_nvme": { 00:05:56.886 "mask": "0x4000", 00:05:56.886 "tpoint_mask": "0x0" 00:05:56.886 }, 00:05:56.886 "sock": { 00:05:56.886 "mask": "0x8000", 00:05:56.886 "tpoint_mask": "0x0" 00:05:56.886 }, 00:05:56.886 "blob": { 00:05:56.886 "mask": "0x10000", 00:05:56.886 "tpoint_mask": "0x0" 00:05:56.886 }, 00:05:56.886 "bdev_raid": { 00:05:56.886 "mask": "0x20000", 00:05:56.886 "tpoint_mask": "0x0" 00:05:56.886 } 00:05:56.886 }' 00:05:56.886 14:21:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:56.886 14:21:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:05:56.886 14:21:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:56.886 14:21:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:56.886 14:21:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:56.886 14:21:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:56.886 14:21:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:56.886 14:21:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:56.886 14:21:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:57.145 14:21:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:57.145 00:05:57.145 real 0m0.204s 00:05:57.145 user 0m0.179s 00:05:57.145 sys 0m0.016s 00:05:57.145 14:21:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:57.145 14:21:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:57.145 ************************************ 00:05:57.145 END TEST rpc_trace_cmd_test 00:05:57.145 ************************************ 00:05:57.145 14:21:48 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:57.145 14:21:48 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:57.145 14:21:48 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:57.145 14:21:48 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:57.145 14:21:48 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:57.145 14:21:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.145 ************************************ 00:05:57.145 START TEST rpc_daemon_integrity 00:05:57.145 ************************************ 00:05:57.145 14:21:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:57.145 14:21:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:57.145 14:21:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.145 14:21:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.145 14:21:49 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.145 14:21:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:57.145 14:21:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:57.145 14:21:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:57.145 14:21:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:57.145 14:21:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.145 14:21:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.145 14:21:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.145 14:21:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:57.145 14:21:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:57.145 14:21:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.145 14:21:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.145 14:21:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.145 14:21:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:57.145 { 00:05:57.145 "name": "Malloc2", 00:05:57.145 "aliases": [ 00:05:57.145 "d3f139a4-3e1c-42a1-b6ef-596b6c9db82d" 00:05:57.145 ], 00:05:57.145 "product_name": "Malloc disk", 00:05:57.145 "block_size": 512, 00:05:57.145 "num_blocks": 16384, 00:05:57.145 "uuid": "d3f139a4-3e1c-42a1-b6ef-596b6c9db82d", 00:05:57.145 "assigned_rate_limits": { 00:05:57.145 "rw_ios_per_sec": 0, 00:05:57.145 "rw_mbytes_per_sec": 0, 00:05:57.145 "r_mbytes_per_sec": 0, 00:05:57.145 "w_mbytes_per_sec": 0 00:05:57.145 }, 00:05:57.145 "claimed": false, 00:05:57.145 "zoned": false, 00:05:57.145 "supported_io_types": { 00:05:57.145 "read": true, 00:05:57.145 "write": true, 00:05:57.145 "unmap": true, 00:05:57.145 "flush": true, 00:05:57.145 "reset": true, 00:05:57.145 "nvme_admin": false, 00:05:57.145 "nvme_io": false, 00:05:57.145 "nvme_io_md": false, 00:05:57.145 "write_zeroes": true, 00:05:57.145 "zcopy": true, 00:05:57.145 "get_zone_info": false, 00:05:57.145 "zone_management": false, 00:05:57.145 "zone_append": false, 00:05:57.145 "compare": false, 00:05:57.145 "compare_and_write": false, 00:05:57.146 "abort": true, 00:05:57.146 "seek_hole": false, 00:05:57.146 "seek_data": false, 00:05:57.146 "copy": true, 00:05:57.146 "nvme_iov_md": false 00:05:57.146 }, 00:05:57.146 "memory_domains": [ 00:05:57.146 { 00:05:57.146 "dma_device_id": "system", 00:05:57.146 "dma_device_type": 1 00:05:57.146 }, 00:05:57.146 { 00:05:57.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:57.146 "dma_device_type": 2 00:05:57.146 } 00:05:57.146 ], 00:05:57.146 "driver_specific": {} 00:05:57.146 } 00:05:57.146 ]' 00:05:57.146 14:21:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:57.146 14:21:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:57.146 14:21:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:57.146 14:21:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.146 14:21:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.146 [2024-11-02 14:21:49.103408] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:57.146 [2024-11-02 14:21:49.103449] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:57.146 
[2024-11-02 14:21:49.103476] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x200ef90 00:05:57.146 [2024-11-02 14:21:49.103491] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:57.146 [2024-11-02 14:21:49.104856] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:57.146 [2024-11-02 14:21:49.104885] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:57.146 Passthru0 00:05:57.146 14:21:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.146 14:21:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:57.146 14:21:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.146 14:21:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.146 14:21:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.146 14:21:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:57.146 { 00:05:57.146 "name": "Malloc2", 00:05:57.146 "aliases": [ 00:05:57.146 "d3f139a4-3e1c-42a1-b6ef-596b6c9db82d" 00:05:57.146 ], 00:05:57.146 "product_name": "Malloc disk", 00:05:57.146 "block_size": 512, 00:05:57.146 "num_blocks": 16384, 00:05:57.146 "uuid": "d3f139a4-3e1c-42a1-b6ef-596b6c9db82d", 00:05:57.146 "assigned_rate_limits": { 00:05:57.146 "rw_ios_per_sec": 0, 00:05:57.146 "rw_mbytes_per_sec": 0, 00:05:57.146 "r_mbytes_per_sec": 0, 00:05:57.146 "w_mbytes_per_sec": 0 00:05:57.146 }, 00:05:57.146 "claimed": true, 00:05:57.146 "claim_type": "exclusive_write", 00:05:57.146 "zoned": false, 00:05:57.146 "supported_io_types": { 00:05:57.146 "read": true, 00:05:57.146 "write": true, 00:05:57.146 "unmap": true, 00:05:57.146 "flush": true, 00:05:57.146 "reset": true, 00:05:57.146 "nvme_admin": false, 00:05:57.146 "nvme_io": false, 00:05:57.146 "nvme_io_md": false, 00:05:57.146 "write_zeroes": true, 00:05:57.146 "zcopy": true, 00:05:57.146 "get_zone_info": false, 00:05:57.146 "zone_management": false, 00:05:57.146 "zone_append": false, 00:05:57.146 "compare": false, 00:05:57.146 "compare_and_write": false, 00:05:57.146 "abort": true, 00:05:57.146 "seek_hole": false, 00:05:57.146 "seek_data": false, 00:05:57.146 "copy": true, 00:05:57.146 "nvme_iov_md": false 00:05:57.146 }, 00:05:57.146 "memory_domains": [ 00:05:57.146 { 00:05:57.146 "dma_device_id": "system", 00:05:57.146 "dma_device_type": 1 00:05:57.146 }, 00:05:57.146 { 00:05:57.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:57.146 "dma_device_type": 2 00:05:57.146 } 00:05:57.146 ], 00:05:57.146 "driver_specific": {} 00:05:57.146 }, 00:05:57.146 { 00:05:57.146 "name": "Passthru0", 00:05:57.146 "aliases": [ 00:05:57.146 "c00276dd-767b-52f8-9b87-e4ca9f973a24" 00:05:57.146 ], 00:05:57.146 "product_name": "passthru", 00:05:57.146 "block_size": 512, 00:05:57.146 "num_blocks": 16384, 00:05:57.146 "uuid": "c00276dd-767b-52f8-9b87-e4ca9f973a24", 00:05:57.146 "assigned_rate_limits": { 00:05:57.146 "rw_ios_per_sec": 0, 00:05:57.146 "rw_mbytes_per_sec": 0, 00:05:57.146 "r_mbytes_per_sec": 0, 00:05:57.146 "w_mbytes_per_sec": 0 00:05:57.146 }, 00:05:57.146 "claimed": false, 00:05:57.146 "zoned": false, 00:05:57.146 "supported_io_types": { 00:05:57.146 "read": true, 00:05:57.146 "write": true, 00:05:57.146 "unmap": true, 00:05:57.146 "flush": true, 00:05:57.146 "reset": true, 00:05:57.146 "nvme_admin": false, 00:05:57.146 "nvme_io": false, 00:05:57.146 "nvme_io_md": false, 00:05:57.146 
"write_zeroes": true, 00:05:57.146 "zcopy": true, 00:05:57.146 "get_zone_info": false, 00:05:57.146 "zone_management": false, 00:05:57.146 "zone_append": false, 00:05:57.146 "compare": false, 00:05:57.146 "compare_and_write": false, 00:05:57.146 "abort": true, 00:05:57.146 "seek_hole": false, 00:05:57.146 "seek_data": false, 00:05:57.146 "copy": true, 00:05:57.146 "nvme_iov_md": false 00:05:57.146 }, 00:05:57.146 "memory_domains": [ 00:05:57.146 { 00:05:57.146 "dma_device_id": "system", 00:05:57.146 "dma_device_type": 1 00:05:57.146 }, 00:05:57.146 { 00:05:57.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:57.146 "dma_device_type": 2 00:05:57.146 } 00:05:57.146 ], 00:05:57.146 "driver_specific": { 00:05:57.146 "passthru": { 00:05:57.146 "name": "Passthru0", 00:05:57.146 "base_bdev_name": "Malloc2" 00:05:57.146 } 00:05:57.146 } 00:05:57.146 } 00:05:57.146 ]' 00:05:57.146 14:21:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:57.146 14:21:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:57.146 14:21:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:57.146 14:21:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.146 14:21:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.146 14:21:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.146 14:21:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:57.146 14:21:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.146 14:21:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.146 14:21:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.146 14:21:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:57.146 14:21:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.146 14:21:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.146 14:21:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.146 14:21:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:57.146 14:21:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:57.404 14:21:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:57.405 00:05:57.405 real 0m0.229s 00:05:57.405 user 0m0.148s 00:05:57.405 sys 0m0.027s 00:05:57.405 14:21:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:57.405 14:21:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.405 ************************************ 00:05:57.405 END TEST rpc_daemon_integrity 00:05:57.405 ************************************ 00:05:57.405 14:21:49 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:57.405 14:21:49 rpc -- rpc/rpc.sh@84 -- # killprocess 1233284 00:05:57.405 14:21:49 rpc -- common/autotest_common.sh@950 -- # '[' -z 1233284 ']' 00:05:57.405 14:21:49 rpc -- common/autotest_common.sh@954 -- # kill -0 1233284 00:05:57.405 14:21:49 rpc -- common/autotest_common.sh@955 -- # uname 00:05:57.405 14:21:49 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:57.405 14:21:49 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1233284 00:05:57.405 14:21:49 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:57.405 14:21:49 rpc -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:57.405 14:21:49 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1233284' 00:05:57.405 killing process with pid 1233284 00:05:57.405 14:21:49 rpc -- common/autotest_common.sh@969 -- # kill 1233284 00:05:57.405 14:21:49 rpc -- common/autotest_common.sh@974 -- # wait 1233284 00:05:57.662 00:05:57.662 real 0m2.051s 00:05:57.662 user 0m2.512s 00:05:57.662 sys 0m0.664s 00:05:57.662 14:21:49 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:57.662 14:21:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.662 ************************************ 00:05:57.662 END TEST rpc 00:05:57.662 ************************************ 00:05:57.940 14:21:49 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:57.940 14:21:49 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:57.940 14:21:49 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:57.940 14:21:49 -- common/autotest_common.sh@10 -- # set +x 00:05:57.940 ************************************ 00:05:57.940 START TEST skip_rpc 00:05:57.940 ************************************ 00:05:57.940 14:21:49 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:57.940 * Looking for test storage... 00:05:57.940 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:57.940 14:21:49 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:57.940 14:21:49 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:57.940 14:21:49 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:57.940 14:21:49 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:57.940 14:21:49 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:57.940 14:21:49 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:57.940 14:21:49 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:57.940 14:21:49 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:57.940 14:21:49 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:57.940 14:21:49 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:57.940 14:21:49 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:57.940 14:21:49 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:57.940 14:21:49 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:57.940 14:21:49 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:57.940 14:21:49 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:57.940 14:21:49 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:57.940 14:21:49 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:57.940 14:21:49 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:57.940 14:21:49 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:57.940 14:21:49 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:57.940 14:21:49 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:57.940 14:21:49 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:57.940 14:21:49 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:57.940 14:21:49 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:57.940 14:21:49 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:57.940 14:21:49 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:57.940 14:21:49 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:57.940 14:21:49 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:57.940 14:21:49 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:57.940 14:21:49 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:57.940 14:21:49 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:57.940 14:21:49 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:57.940 14:21:49 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:57.940 14:21:49 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:57.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.940 --rc genhtml_branch_coverage=1 00:05:57.940 --rc genhtml_function_coverage=1 00:05:57.940 --rc genhtml_legend=1 00:05:57.940 --rc geninfo_all_blocks=1 00:05:57.940 --rc geninfo_unexecuted_blocks=1 00:05:57.940 00:05:57.940 ' 00:05:57.940 14:21:49 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:57.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.940 --rc genhtml_branch_coverage=1 00:05:57.940 --rc genhtml_function_coverage=1 00:05:57.940 --rc genhtml_legend=1 00:05:57.940 --rc geninfo_all_blocks=1 00:05:57.940 --rc geninfo_unexecuted_blocks=1 00:05:57.940 00:05:57.940 ' 00:05:57.940 14:21:49 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:57.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.940 --rc genhtml_branch_coverage=1 00:05:57.940 --rc genhtml_function_coverage=1 00:05:57.940 --rc genhtml_legend=1 00:05:57.940 --rc geninfo_all_blocks=1 00:05:57.940 --rc geninfo_unexecuted_blocks=1 00:05:57.940 00:05:57.940 ' 00:05:57.940 14:21:49 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:57.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.940 --rc genhtml_branch_coverage=1 00:05:57.940 --rc genhtml_function_coverage=1 00:05:57.940 --rc genhtml_legend=1 00:05:57.940 --rc geninfo_all_blocks=1 00:05:57.940 --rc geninfo_unexecuted_blocks=1 00:05:57.940 00:05:57.940 ' 00:05:57.940 14:21:49 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:57.940 14:21:49 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:57.940 14:21:49 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:57.940 14:21:49 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:57.940 14:21:49 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:57.940 14:21:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.940 ************************************ 00:05:57.940 START TEST skip_rpc 00:05:57.940 ************************************ 00:05:57.940 14:21:49 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:05:57.940 
14:21:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1233660 00:05:57.940 14:21:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:57.940 14:21:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:57.940 14:21:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:58.265 [2024-11-02 14:21:49.990501] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:58.265 [2024-11-02 14:21:49.990605] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1233660 ] 00:05:58.265 [2024-11-02 14:21:50.055115] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.265 [2024-11-02 14:21:50.146802] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.525 14:21:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:03.525 14:21:54 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:03.525 14:21:54 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:03.525 14:21:54 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:03.525 14:21:54 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:03.525 14:21:54 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:03.525 14:21:54 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:03.525 14:21:54 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:06:03.525 14:21:54 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.525 14:21:54 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.525 14:21:54 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:03.525 14:21:54 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:03.525 14:21:54 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:03.525 14:21:54 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:03.525 14:21:54 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:03.525 14:21:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:03.525 14:21:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1233660 00:06:03.525 14:21:54 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 1233660 ']' 00:06:03.525 14:21:54 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 1233660 00:06:03.525 14:21:54 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:06:03.525 14:21:54 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:03.525 14:21:54 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1233660 00:06:03.525 14:21:54 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:03.525 14:21:54 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:03.525 14:21:54 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1233660' 00:06:03.525 killing process with pid 1233660 00:06:03.525 
14:21:54 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 1233660 00:06:03.525 14:21:54 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 1233660 00:06:03.525 00:06:03.525 real 0m5.485s 00:06:03.525 user 0m5.153s 00:06:03.525 sys 0m0.344s 00:06:03.525 14:21:55 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:03.525 14:21:55 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.525 ************************************ 00:06:03.525 END TEST skip_rpc 00:06:03.525 ************************************ 00:06:03.525 14:21:55 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:03.525 14:21:55 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:03.525 14:21:55 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:03.525 14:21:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.525 ************************************ 00:06:03.525 START TEST skip_rpc_with_json 00:06:03.525 ************************************ 00:06:03.525 14:21:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:06:03.525 14:21:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:03.525 14:21:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1234306 00:06:03.525 14:21:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:03.525 14:21:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:03.525 14:21:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1234306 00:06:03.525 14:21:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 1234306 ']' 00:06:03.525 14:21:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.525 14:21:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:03.525 14:21:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.525 14:21:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:03.525 14:21:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:03.525 [2024-11-02 14:21:55.525167] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:03.525 [2024-11-02 14:21:55.525269] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1234306 ] 00:06:03.783 [2024-11-02 14:21:55.587099] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.783 [2024-11-02 14:21:55.681203] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.041 14:21:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:04.041 14:21:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:06:04.041 14:21:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:04.041 14:21:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.041 14:21:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:04.041 [2024-11-02 14:21:55.956167] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:04.041 request: 00:06:04.041 { 00:06:04.041 "trtype": "tcp", 00:06:04.041 "method": "nvmf_get_transports", 00:06:04.041 "req_id": 1 00:06:04.041 } 00:06:04.041 Got JSON-RPC error response 00:06:04.041 response: 00:06:04.041 { 00:06:04.041 "code": -19, 00:06:04.041 "message": "No such device" 00:06:04.041 } 00:06:04.041 14:21:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:04.041 14:21:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:04.041 14:21:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.041 14:21:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:04.041 [2024-11-02 14:21:55.964317] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:04.041 14:21:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.041 14:21:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:04.041 14:21:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.041 14:21:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:04.300 14:21:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.300 14:21:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:04.300 { 00:06:04.300 "subsystems": [ 00:06:04.300 { 00:06:04.300 "subsystem": "fsdev", 00:06:04.300 "config": [ 00:06:04.300 { 00:06:04.300 "method": "fsdev_set_opts", 00:06:04.300 "params": { 00:06:04.300 "fsdev_io_pool_size": 65535, 00:06:04.300 "fsdev_io_cache_size": 256 00:06:04.300 } 00:06:04.300 } 00:06:04.300 ] 00:06:04.300 }, 00:06:04.300 { 00:06:04.300 "subsystem": "vfio_user_target", 00:06:04.300 "config": null 00:06:04.300 }, 00:06:04.300 { 00:06:04.300 "subsystem": "keyring", 00:06:04.300 "config": [] 00:06:04.300 }, 00:06:04.300 { 00:06:04.300 "subsystem": "iobuf", 00:06:04.300 "config": [ 00:06:04.300 { 00:06:04.300 "method": "iobuf_set_options", 00:06:04.300 "params": { 00:06:04.300 "small_pool_count": 8192, 00:06:04.300 "large_pool_count": 1024, 00:06:04.300 "small_bufsize": 8192, 00:06:04.300 "large_bufsize": 135168 00:06:04.300 } 00:06:04.300 } 00:06:04.300 ] 00:06:04.300 }, 00:06:04.300 { 
00:06:04.300 "subsystem": "sock", 00:06:04.300 "config": [ 00:06:04.300 { 00:06:04.300 "method": "sock_set_default_impl", 00:06:04.300 "params": { 00:06:04.300 "impl_name": "posix" 00:06:04.300 } 00:06:04.300 }, 00:06:04.300 { 00:06:04.300 "method": "sock_impl_set_options", 00:06:04.300 "params": { 00:06:04.300 "impl_name": "ssl", 00:06:04.300 "recv_buf_size": 4096, 00:06:04.300 "send_buf_size": 4096, 00:06:04.300 "enable_recv_pipe": true, 00:06:04.300 "enable_quickack": false, 00:06:04.300 "enable_placement_id": 0, 00:06:04.300 "enable_zerocopy_send_server": true, 00:06:04.300 "enable_zerocopy_send_client": false, 00:06:04.300 "zerocopy_threshold": 0, 00:06:04.300 "tls_version": 0, 00:06:04.300 "enable_ktls": false 00:06:04.300 } 00:06:04.300 }, 00:06:04.300 { 00:06:04.300 "method": "sock_impl_set_options", 00:06:04.300 "params": { 00:06:04.300 "impl_name": "posix", 00:06:04.301 "recv_buf_size": 2097152, 00:06:04.301 "send_buf_size": 2097152, 00:06:04.301 "enable_recv_pipe": true, 00:06:04.301 "enable_quickack": false, 00:06:04.301 "enable_placement_id": 0, 00:06:04.301 "enable_zerocopy_send_server": true, 00:06:04.301 "enable_zerocopy_send_client": false, 00:06:04.301 "zerocopy_threshold": 0, 00:06:04.301 "tls_version": 0, 00:06:04.301 "enable_ktls": false 00:06:04.301 } 00:06:04.301 } 00:06:04.301 ] 00:06:04.301 }, 00:06:04.301 { 00:06:04.301 "subsystem": "vmd", 00:06:04.301 "config": [] 00:06:04.301 }, 00:06:04.301 { 00:06:04.301 "subsystem": "accel", 00:06:04.301 "config": [ 00:06:04.301 { 00:06:04.301 "method": "accel_set_options", 00:06:04.301 "params": { 00:06:04.301 "small_cache_size": 128, 00:06:04.301 "large_cache_size": 16, 00:06:04.301 "task_count": 2048, 00:06:04.301 "sequence_count": 2048, 00:06:04.301 "buf_count": 2048 00:06:04.301 } 00:06:04.301 } 00:06:04.301 ] 00:06:04.301 }, 00:06:04.301 { 00:06:04.301 "subsystem": "bdev", 00:06:04.301 "config": [ 00:06:04.301 { 00:06:04.301 "method": "bdev_set_options", 00:06:04.301 "params": { 00:06:04.301 "bdev_io_pool_size": 65535, 00:06:04.301 "bdev_io_cache_size": 256, 00:06:04.301 "bdev_auto_examine": true, 00:06:04.301 "iobuf_small_cache_size": 128, 00:06:04.301 "iobuf_large_cache_size": 16 00:06:04.301 } 00:06:04.301 }, 00:06:04.301 { 00:06:04.301 "method": "bdev_raid_set_options", 00:06:04.301 "params": { 00:06:04.301 "process_window_size_kb": 1024, 00:06:04.301 "process_max_bandwidth_mb_sec": 0 00:06:04.301 } 00:06:04.301 }, 00:06:04.301 { 00:06:04.301 "method": "bdev_iscsi_set_options", 00:06:04.301 "params": { 00:06:04.301 "timeout_sec": 30 00:06:04.301 } 00:06:04.301 }, 00:06:04.301 { 00:06:04.301 "method": "bdev_nvme_set_options", 00:06:04.301 "params": { 00:06:04.301 "action_on_timeout": "none", 00:06:04.301 "timeout_us": 0, 00:06:04.301 "timeout_admin_us": 0, 00:06:04.301 "keep_alive_timeout_ms": 10000, 00:06:04.301 "arbitration_burst": 0, 00:06:04.301 "low_priority_weight": 0, 00:06:04.301 "medium_priority_weight": 0, 00:06:04.301 "high_priority_weight": 0, 00:06:04.301 "nvme_adminq_poll_period_us": 10000, 00:06:04.301 "nvme_ioq_poll_period_us": 0, 00:06:04.301 "io_queue_requests": 0, 00:06:04.301 "delay_cmd_submit": true, 00:06:04.301 "transport_retry_count": 4, 00:06:04.301 "bdev_retry_count": 3, 00:06:04.301 "transport_ack_timeout": 0, 00:06:04.301 "ctrlr_loss_timeout_sec": 0, 00:06:04.301 "reconnect_delay_sec": 0, 00:06:04.301 "fast_io_fail_timeout_sec": 0, 00:06:04.301 "disable_auto_failback": false, 00:06:04.301 "generate_uuids": false, 00:06:04.301 "transport_tos": 0, 00:06:04.301 "nvme_error_stat": false, 
00:06:04.301 "rdma_srq_size": 0, 00:06:04.301 "io_path_stat": false, 00:06:04.301 "allow_accel_sequence": false, 00:06:04.301 "rdma_max_cq_size": 0, 00:06:04.301 "rdma_cm_event_timeout_ms": 0, 00:06:04.301 "dhchap_digests": [ 00:06:04.301 "sha256", 00:06:04.301 "sha384", 00:06:04.301 "sha512" 00:06:04.301 ], 00:06:04.301 "dhchap_dhgroups": [ 00:06:04.301 "null", 00:06:04.301 "ffdhe2048", 00:06:04.301 "ffdhe3072", 00:06:04.301 "ffdhe4096", 00:06:04.301 "ffdhe6144", 00:06:04.301 "ffdhe8192" 00:06:04.301 ] 00:06:04.301 } 00:06:04.301 }, 00:06:04.301 { 00:06:04.301 "method": "bdev_nvme_set_hotplug", 00:06:04.301 "params": { 00:06:04.301 "period_us": 100000, 00:06:04.301 "enable": false 00:06:04.301 } 00:06:04.301 }, 00:06:04.301 { 00:06:04.301 "method": "bdev_wait_for_examine" 00:06:04.301 } 00:06:04.301 ] 00:06:04.301 }, 00:06:04.301 { 00:06:04.301 "subsystem": "scsi", 00:06:04.301 "config": null 00:06:04.301 }, 00:06:04.301 { 00:06:04.301 "subsystem": "scheduler", 00:06:04.301 "config": [ 00:06:04.301 { 00:06:04.301 "method": "framework_set_scheduler", 00:06:04.301 "params": { 00:06:04.301 "name": "static" 00:06:04.301 } 00:06:04.301 } 00:06:04.301 ] 00:06:04.301 }, 00:06:04.301 { 00:06:04.301 "subsystem": "vhost_scsi", 00:06:04.301 "config": [] 00:06:04.301 }, 00:06:04.301 { 00:06:04.301 "subsystem": "vhost_blk", 00:06:04.301 "config": [] 00:06:04.301 }, 00:06:04.301 { 00:06:04.301 "subsystem": "ublk", 00:06:04.301 "config": [] 00:06:04.301 }, 00:06:04.301 { 00:06:04.301 "subsystem": "nbd", 00:06:04.301 "config": [] 00:06:04.301 }, 00:06:04.301 { 00:06:04.301 "subsystem": "nvmf", 00:06:04.301 "config": [ 00:06:04.301 { 00:06:04.301 "method": "nvmf_set_config", 00:06:04.301 "params": { 00:06:04.301 "discovery_filter": "match_any", 00:06:04.301 "admin_cmd_passthru": { 00:06:04.301 "identify_ctrlr": false 00:06:04.301 }, 00:06:04.301 "dhchap_digests": [ 00:06:04.301 "sha256", 00:06:04.301 "sha384", 00:06:04.301 "sha512" 00:06:04.301 ], 00:06:04.301 "dhchap_dhgroups": [ 00:06:04.301 "null", 00:06:04.301 "ffdhe2048", 00:06:04.301 "ffdhe3072", 00:06:04.301 "ffdhe4096", 00:06:04.301 "ffdhe6144", 00:06:04.301 "ffdhe8192" 00:06:04.301 ] 00:06:04.301 } 00:06:04.301 }, 00:06:04.301 { 00:06:04.301 "method": "nvmf_set_max_subsystems", 00:06:04.301 "params": { 00:06:04.301 "max_subsystems": 1024 00:06:04.301 } 00:06:04.301 }, 00:06:04.301 { 00:06:04.301 "method": "nvmf_set_crdt", 00:06:04.301 "params": { 00:06:04.301 "crdt1": 0, 00:06:04.301 "crdt2": 0, 00:06:04.301 "crdt3": 0 00:06:04.301 } 00:06:04.301 }, 00:06:04.301 { 00:06:04.301 "method": "nvmf_create_transport", 00:06:04.301 "params": { 00:06:04.301 "trtype": "TCP", 00:06:04.301 "max_queue_depth": 128, 00:06:04.301 "max_io_qpairs_per_ctrlr": 127, 00:06:04.301 "in_capsule_data_size": 4096, 00:06:04.301 "max_io_size": 131072, 00:06:04.301 "io_unit_size": 131072, 00:06:04.301 "max_aq_depth": 128, 00:06:04.301 "num_shared_buffers": 511, 00:06:04.301 "buf_cache_size": 4294967295, 00:06:04.301 "dif_insert_or_strip": false, 00:06:04.301 "zcopy": false, 00:06:04.301 "c2h_success": true, 00:06:04.301 "sock_priority": 0, 00:06:04.301 "abort_timeout_sec": 1, 00:06:04.301 "ack_timeout": 0, 00:06:04.301 "data_wr_pool_size": 0 00:06:04.301 } 00:06:04.301 } 00:06:04.301 ] 00:06:04.301 }, 00:06:04.301 { 00:06:04.301 "subsystem": "iscsi", 00:06:04.301 "config": [ 00:06:04.301 { 00:06:04.301 "method": "iscsi_set_options", 00:06:04.301 "params": { 00:06:04.301 "node_base": "iqn.2016-06.io.spdk", 00:06:04.301 "max_sessions": 128, 00:06:04.301 
"max_connections_per_session": 2, 00:06:04.301 "max_queue_depth": 64, 00:06:04.301 "default_time2wait": 2, 00:06:04.301 "default_time2retain": 20, 00:06:04.301 "first_burst_length": 8192, 00:06:04.301 "immediate_data": true, 00:06:04.301 "allow_duplicated_isid": false, 00:06:04.301 "error_recovery_level": 0, 00:06:04.301 "nop_timeout": 60, 00:06:04.301 "nop_in_interval": 30, 00:06:04.301 "disable_chap": false, 00:06:04.301 "require_chap": false, 00:06:04.301 "mutual_chap": false, 00:06:04.301 "chap_group": 0, 00:06:04.301 "max_large_datain_per_connection": 64, 00:06:04.301 "max_r2t_per_connection": 4, 00:06:04.301 "pdu_pool_size": 36864, 00:06:04.301 "immediate_data_pool_size": 16384, 00:06:04.301 "data_out_pool_size": 2048 00:06:04.301 } 00:06:04.301 } 00:06:04.301 ] 00:06:04.301 } 00:06:04.301 ] 00:06:04.301 } 00:06:04.301 14:21:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:04.301 14:21:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1234306 00:06:04.301 14:21:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 1234306 ']' 00:06:04.301 14:21:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 1234306 00:06:04.301 14:21:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:04.301 14:21:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:04.301 14:21:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1234306 00:06:04.301 14:21:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:04.301 14:21:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:04.301 14:21:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1234306' 00:06:04.301 killing process with pid 1234306 00:06:04.301 14:21:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 1234306 00:06:04.301 14:21:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 1234306 00:06:04.560 14:21:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1234450 00:06:04.560 14:21:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:04.560 14:21:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:09.821 14:22:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1234450 00:06:09.821 14:22:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 1234450 ']' 00:06:09.821 14:22:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 1234450 00:06:09.821 14:22:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:09.821 14:22:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:09.821 14:22:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1234450 00:06:09.821 14:22:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:09.821 14:22:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:09.821 14:22:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 1234450' 00:06:09.821 killing process with pid 1234450 00:06:09.821 14:22:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 1234450 00:06:09.821 14:22:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 1234450 00:06:10.079 14:22:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:10.079 14:22:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:10.079 00:06:10.079 real 0m6.616s 00:06:10.079 user 0m6.202s 00:06:10.079 sys 0m0.741s 00:06:10.079 14:22:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:10.079 14:22:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:10.079 ************************************ 00:06:10.079 END TEST skip_rpc_with_json 00:06:10.079 ************************************ 00:06:10.079 14:22:02 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:10.079 14:22:02 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:10.079 14:22:02 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:10.079 14:22:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.079 ************************************ 00:06:10.079 START TEST skip_rpc_with_delay 00:06:10.079 ************************************ 00:06:10.079 14:22:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:06:10.079 14:22:02 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:10.079 14:22:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:06:10.079 14:22:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:10.079 14:22:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:10.079 14:22:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:10.079 14:22:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:10.079 14:22:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:10.079 14:22:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:10.079 14:22:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:10.079 14:22:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:10.079 14:22:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:10.337 14:22:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:10.337 [2024-11-02 
14:22:02.189122] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:06:10.337 [2024-11-02 14:22:02.189251] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:10.337 14:22:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:06:10.337 14:22:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:10.337 14:22:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:10.337 14:22:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:10.337 00:06:10.337 real 0m0.073s 00:06:10.337 user 0m0.048s 00:06:10.337 sys 0m0.025s 00:06:10.337 14:22:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:10.337 14:22:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:10.337 ************************************ 00:06:10.337 END TEST skip_rpc_with_delay 00:06:10.337 ************************************ 00:06:10.337 14:22:02 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:10.337 14:22:02 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:10.337 14:22:02 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:10.337 14:22:02 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:10.337 14:22:02 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:10.337 14:22:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.337 ************************************ 00:06:10.337 START TEST exit_on_failed_rpc_init 00:06:10.337 ************************************ 00:06:10.337 14:22:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:06:10.337 14:22:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1235169 00:06:10.337 14:22:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:10.337 14:22:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1235169 00:06:10.337 14:22:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 1235169 ']' 00:06:10.337 14:22:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.337 14:22:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:10.337 14:22:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.337 14:22:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:10.337 14:22:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:10.337 [2024-11-02 14:22:02.313198] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:10.337 [2024-11-02 14:22:02.313320] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1235169 ] 00:06:10.337 [2024-11-02 14:22:02.371672] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.596 [2024-11-02 14:22:02.459970] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.854 14:22:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:10.854 14:22:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:06:10.854 14:22:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:10.854 14:22:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:10.855 14:22:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:06:10.855 14:22:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:10.855 14:22:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:10.855 14:22:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:10.855 14:22:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:10.855 14:22:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:10.855 14:22:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:10.855 14:22:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:10.855 14:22:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:10.855 14:22:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:10.855 14:22:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:10.855 [2024-11-02 14:22:02.792497] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:10.855 [2024-11-02 14:22:02.792610] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1235291 ] 00:06:10.855 [2024-11-02 14:22:02.855493] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.112 [2024-11-02 14:22:02.950683] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.112 [2024-11-02 14:22:02.950822] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
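The "RPC Unix domain socket path /var/tmp/spdk.sock in use" error above, together with the spdk_rpc_initialize and spdk_app_stop messages that follow, is exactly the failure this test exists to provoke: both spdk_tgt instances were started without -r, so they compete for the default RPC socket and the second one exits non-zero. A minimal reproduction sketch (paths as used by this job; the sleep is a crude stand-in for the harness's waitforlisten helper):

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./build/bin/spdk_tgt -m 0x1 &          # first target claims the default /var/tmp/spdk.sock
  first=$!
  sleep 1
  ./build/bin/spdk_tgt -m 0x2            # same default socket: rpc.c reports "in use" and the app stops
  echo "second target exited with $?"
  kill "$first"

Passing a distinct -r /var/tmp/<name>.sock to the second instance, as the json_config tests further down do, is what avoids this collision in normal use.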
00:06:11.112 [2024-11-02 14:22:02.950845] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:11.112 [2024-11-02 14:22:02.950859] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:11.112 14:22:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:06:11.112 14:22:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:11.112 14:22:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:06:11.112 14:22:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:06:11.112 14:22:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:06:11.112 14:22:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:11.112 14:22:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:11.112 14:22:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1235169 00:06:11.112 14:22:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 1235169 ']' 00:06:11.112 14:22:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 1235169 00:06:11.112 14:22:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:06:11.112 14:22:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:11.112 14:22:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1235169 00:06:11.112 14:22:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:11.112 14:22:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:11.112 14:22:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1235169' 00:06:11.112 killing process with pid 1235169 00:06:11.112 14:22:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 1235169 00:06:11.112 14:22:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 1235169 00:06:11.679 00:06:11.679 real 0m1.266s 00:06:11.679 user 0m1.388s 00:06:11.679 sys 0m0.488s 00:06:11.679 14:22:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:11.679 14:22:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:11.679 ************************************ 00:06:11.679 END TEST exit_on_failed_rpc_init 00:06:11.679 ************************************ 00:06:11.679 14:22:03 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:11.679 00:06:11.679 real 0m13.780s 00:06:11.679 user 0m12.958s 00:06:11.679 sys 0m1.792s 00:06:11.679 14:22:03 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:11.679 14:22:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.679 ************************************ 00:06:11.679 END TEST skip_rpc 00:06:11.679 ************************************ 00:06:11.679 14:22:03 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:11.679 14:22:03 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:11.679 14:22:03 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:11.679 14:22:03 -- 
common/autotest_common.sh@10 -- # set +x 00:06:11.679 ************************************ 00:06:11.679 START TEST rpc_client 00:06:11.679 ************************************ 00:06:11.679 14:22:03 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:11.679 * Looking for test storage... 00:06:11.679 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:06:11.679 14:22:03 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:11.679 14:22:03 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:06:11.679 14:22:03 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:11.679 14:22:03 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:11.679 14:22:03 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:11.679 14:22:03 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:11.679 14:22:03 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:11.679 14:22:03 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:11.679 14:22:03 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:11.679 14:22:03 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:11.679 14:22:03 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:11.679 14:22:03 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:11.679 14:22:03 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:11.679 14:22:03 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:11.679 14:22:03 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:11.679 14:22:03 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:11.679 14:22:03 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:11.679 14:22:03 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:11.679 14:22:03 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:11.679 14:22:03 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:11.679 14:22:03 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:11.679 14:22:03 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:11.679 14:22:03 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:11.679 14:22:03 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:11.679 14:22:03 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:11.679 14:22:03 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:11.679 14:22:03 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:11.679 14:22:03 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:11.679 14:22:03 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:11.679 14:22:03 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:11.679 14:22:03 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:11.679 14:22:03 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:11.679 14:22:03 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:11.679 14:22:03 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:11.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.679 --rc genhtml_branch_coverage=1 00:06:11.679 --rc genhtml_function_coverage=1 00:06:11.679 --rc genhtml_legend=1 00:06:11.679 --rc geninfo_all_blocks=1 00:06:11.679 --rc geninfo_unexecuted_blocks=1 00:06:11.679 00:06:11.679 ' 00:06:11.679 14:22:03 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:11.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.679 --rc genhtml_branch_coverage=1 00:06:11.679 --rc genhtml_function_coverage=1 00:06:11.679 --rc genhtml_legend=1 00:06:11.679 --rc geninfo_all_blocks=1 00:06:11.679 --rc geninfo_unexecuted_blocks=1 00:06:11.679 00:06:11.679 ' 00:06:11.679 14:22:03 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:11.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.679 --rc genhtml_branch_coverage=1 00:06:11.679 --rc genhtml_function_coverage=1 00:06:11.679 --rc genhtml_legend=1 00:06:11.679 --rc geninfo_all_blocks=1 00:06:11.679 --rc geninfo_unexecuted_blocks=1 00:06:11.679 00:06:11.679 ' 00:06:11.679 14:22:03 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:11.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.679 --rc genhtml_branch_coverage=1 00:06:11.679 --rc genhtml_function_coverage=1 00:06:11.679 --rc genhtml_legend=1 00:06:11.679 --rc geninfo_all_blocks=1 00:06:11.679 --rc geninfo_unexecuted_blocks=1 00:06:11.679 00:06:11.679 ' 00:06:11.679 14:22:03 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:11.938 OK 00:06:11.938 14:22:03 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:11.938 00:06:11.938 real 0m0.149s 00:06:11.938 user 0m0.098s 00:06:11.938 sys 0m0.058s 00:06:11.938 14:22:03 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:11.938 14:22:03 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:11.938 ************************************ 00:06:11.938 END TEST rpc_client 00:06:11.938 ************************************ 00:06:11.938 14:22:03 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
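The scripts/common.sh trace above (lt 1.15 2, cmp_versions, decimal ...) is the harness probing the installed lcov: the version string is split on dots and compared field by field against 2, and only older releases get the explicit --rc lcov_branch_coverage / lcov_function_coverage options. A stripped-down sketch of that comparison; version_lt is a hypothetical stand-in for the real cmp_versions, which also understands '-' and ':' separators:

  version_lt() {   # true (0) when $1 sorts numerically below $2, field by field
    local -a a b; local i x y
    IFS=. read -ra a <<< "$1"
    IFS=. read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
      x=${a[i]:-0}; y=${b[i]:-0}
      (( x < y )) && return 0
      (( x > y )) && return 1
    done
    return 1       # equal is not less-than
  }

  if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
    LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
  fi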
00:06:11.938 14:22:03 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:11.938 14:22:03 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:11.938 14:22:03 -- common/autotest_common.sh@10 -- # set +x 00:06:11.938 ************************************ 00:06:11.938 START TEST json_config 00:06:11.938 ************************************ 00:06:11.938 14:22:03 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:11.938 14:22:03 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:11.938 14:22:03 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:06:11.938 14:22:03 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:11.938 14:22:03 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:11.938 14:22:03 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:11.938 14:22:03 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:11.938 14:22:03 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:11.938 14:22:03 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:11.938 14:22:03 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:11.939 14:22:03 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:11.939 14:22:03 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:11.939 14:22:03 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:11.939 14:22:03 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:11.939 14:22:03 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:11.939 14:22:03 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:11.939 14:22:03 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:11.939 14:22:03 json_config -- scripts/common.sh@345 -- # : 1 00:06:11.939 14:22:03 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:11.939 14:22:03 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:11.939 14:22:03 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:11.939 14:22:03 json_config -- scripts/common.sh@353 -- # local d=1 00:06:11.939 14:22:03 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:11.939 14:22:03 json_config -- scripts/common.sh@355 -- # echo 1 00:06:11.939 14:22:03 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:11.939 14:22:03 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:11.939 14:22:03 json_config -- scripts/common.sh@353 -- # local d=2 00:06:11.939 14:22:03 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:11.939 14:22:03 json_config -- scripts/common.sh@355 -- # echo 2 00:06:11.939 14:22:03 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:11.939 14:22:03 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:11.939 14:22:03 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:11.939 14:22:03 json_config -- scripts/common.sh@368 -- # return 0 00:06:11.939 14:22:03 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:11.939 14:22:03 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:11.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.939 --rc genhtml_branch_coverage=1 00:06:11.939 --rc genhtml_function_coverage=1 00:06:11.939 --rc genhtml_legend=1 00:06:11.939 --rc geninfo_all_blocks=1 00:06:11.939 --rc geninfo_unexecuted_blocks=1 00:06:11.939 00:06:11.939 ' 00:06:11.939 14:22:03 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:11.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.939 --rc genhtml_branch_coverage=1 00:06:11.939 --rc genhtml_function_coverage=1 00:06:11.939 --rc genhtml_legend=1 00:06:11.939 --rc geninfo_all_blocks=1 00:06:11.939 --rc geninfo_unexecuted_blocks=1 00:06:11.939 00:06:11.939 ' 00:06:11.939 14:22:03 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:11.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.939 --rc genhtml_branch_coverage=1 00:06:11.939 --rc genhtml_function_coverage=1 00:06:11.939 --rc genhtml_legend=1 00:06:11.939 --rc geninfo_all_blocks=1 00:06:11.939 --rc geninfo_unexecuted_blocks=1 00:06:11.939 00:06:11.939 ' 00:06:11.939 14:22:03 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:11.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.939 --rc genhtml_branch_coverage=1 00:06:11.939 --rc genhtml_function_coverage=1 00:06:11.939 --rc genhtml_legend=1 00:06:11.939 --rc geninfo_all_blocks=1 00:06:11.939 --rc geninfo_unexecuted_blocks=1 00:06:11.939 00:06:11.939 ' 00:06:11.939 14:22:03 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:11.939 14:22:03 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:11.939 14:22:03 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:11.939 14:22:03 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:11.939 14:22:03 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:11.939 14:22:03 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:11.939 14:22:03 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:11.939 14:22:03 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:11.939 14:22:03 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:06:11.939 14:22:03 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:11.939 14:22:03 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:11.939 14:22:03 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:11.939 14:22:03 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:11.939 14:22:03 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:11.939 14:22:03 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:11.939 14:22:03 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:11.939 14:22:03 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:11.939 14:22:03 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:11.939 14:22:03 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:11.939 14:22:03 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:11.939 14:22:03 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:11.939 14:22:03 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:11.939 14:22:03 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:11.939 14:22:03 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.939 14:22:03 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.939 14:22:03 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.939 14:22:03 json_config -- paths/export.sh@5 -- # export PATH 00:06:11.939 14:22:03 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.939 14:22:03 json_config -- nvmf/common.sh@51 -- # : 0 00:06:11.939 14:22:03 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:11.939 14:22:03 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
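A few lines below, sourcing nvmf/common.sh prints "line 33: [: : integer expression expected". That is not a test failure: line 33 runs '[' '' -eq 1 ']', a numeric test against a variable that is empty in this environment, and test(1) refuses to compare an empty string as an integer, so the guard simply falls through. The behaviour, and the usual way to keep it quiet (SPDK_TEST_FOO is a made-up name):

  unset SPDK_TEST_FOO
  [ "$SPDK_TEST_FOO" -eq 1 ]        # prints "[: : integer expression expected", exit status 2
  [ "${SPDK_TEST_FOO:-0}" -eq 1 ]   # defaulting empty to 0 keeps the test numeric, exit status 1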
00:06:11.939 14:22:03 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:11.939 14:22:03 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:11.939 14:22:03 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:11.939 14:22:03 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:11.939 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:11.939 14:22:03 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:11.939 14:22:03 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:11.939 14:22:03 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:11.939 14:22:03 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:11.939 14:22:03 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:11.939 14:22:03 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:11.939 14:22:03 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:11.939 14:22:03 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:11.939 14:22:03 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:11.939 14:22:03 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:11.939 14:22:03 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:11.939 14:22:03 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:11.939 14:22:03 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:11.939 14:22:03 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:11.939 14:22:03 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:06:11.939 14:22:03 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:11.939 14:22:03 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:11.939 14:22:03 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:11.939 14:22:03 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:06:11.939 INFO: JSON configuration test init 00:06:11.939 14:22:03 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:06:11.939 14:22:03 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:06:11.939 14:22:03 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:11.939 14:22:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:11.939 14:22:03 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:06:11.939 14:22:03 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:11.939 14:22:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:11.939 14:22:03 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:06:11.939 14:22:03 json_config -- 
json_config/common.sh@9 -- # local app=target 00:06:11.939 14:22:03 json_config -- json_config/common.sh@10 -- # shift 00:06:11.939 14:22:03 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:11.939 14:22:03 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:11.939 14:22:03 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:11.939 14:22:03 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:11.939 14:22:03 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:11.939 14:22:03 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1235559 00:06:11.939 14:22:03 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:11.940 14:22:03 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:11.940 Waiting for target to run... 00:06:11.940 14:22:03 json_config -- json_config/common.sh@25 -- # waitforlisten 1235559 /var/tmp/spdk_tgt.sock 00:06:11.940 14:22:03 json_config -- common/autotest_common.sh@831 -- # '[' -z 1235559 ']' 00:06:11.940 14:22:03 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:11.940 14:22:03 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:11.940 14:22:03 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:11.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:11.940 14:22:03 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:11.940 14:22:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:12.197 [2024-11-02 14:22:03.999394] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:12.197 [2024-11-02 14:22:03.999502] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1235559 ] 00:06:12.763 [2024-11-02 14:22:04.513169] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.763 [2024-11-02 14:22:04.595459] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.021 14:22:04 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:13.021 14:22:04 json_config -- common/autotest_common.sh@864 -- # return 0 00:06:13.021 14:22:04 json_config -- json_config/common.sh@26 -- # echo '' 00:06:13.021 00:06:13.021 14:22:04 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:06:13.021 14:22:04 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:06:13.021 14:22:04 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:13.021 14:22:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:13.021 14:22:04 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:06:13.021 14:22:04 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:06:13.021 14:22:04 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:13.021 14:22:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:13.021 14:22:04 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:13.021 14:22:04 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:06:13.021 14:22:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:16.305 14:22:08 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:06:16.305 14:22:08 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:16.305 14:22:08 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:16.305 14:22:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:16.305 14:22:08 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:16.305 14:22:08 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:16.305 14:22:08 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:16.305 14:22:08 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:06:16.305 14:22:08 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:06:16.305 14:22:08 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:16.305 14:22:08 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:16.305 14:22:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:16.563 14:22:08 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:06:16.563 14:22:08 json_config -- json_config/json_config.sh@51 -- # local get_types 00:06:16.563 14:22:08 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:06:16.563 14:22:08 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:06:16.563 14:22:08 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:06:16.563 14:22:08 json_config -- json_config/json_config.sh@54 -- # sort 00:06:16.563 14:22:08 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:06:16.563 14:22:08 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:06:16.563 14:22:08 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:06:16.563 14:22:08 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:06:16.563 14:22:08 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:16.563 14:22:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:16.563 14:22:08 json_config -- json_config/json_config.sh@62 -- # return 0 00:06:16.563 14:22:08 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:06:16.563 14:22:08 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:06:16.563 14:22:08 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:06:16.563 14:22:08 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:06:16.563 14:22:08 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:06:16.563 14:22:08 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:06:16.563 14:22:08 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:16.563 14:22:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:16.563 14:22:08 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:16.563 14:22:08 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:06:16.563 14:22:08 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:06:16.563 14:22:08 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:16.563 14:22:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:16.821 MallocForNvmf0 00:06:16.821 14:22:08 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:16.821 14:22:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:17.078 MallocForNvmf1 00:06:17.079 14:22:09 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:17.079 14:22:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:17.336 [2024-11-02 14:22:09.260988] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:17.336 14:22:09 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:17.336 14:22:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:17.594 14:22:09 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:17.594 14:22:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:17.851 14:22:09 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:17.851 14:22:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:18.108 14:22:10 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:18.108 14:22:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:18.366 [2024-11-02 14:22:10.344551] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:18.366 14:22:10 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:06:18.366 14:22:10 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:18.366 14:22:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:18.366 14:22:10 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:06:18.366 14:22:10 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:18.366 14:22:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:18.366 14:22:10 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:06:18.366 14:22:10 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:18.366 14:22:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:18.624 MallocBdevForConfigChangeCheck 00:06:18.624 14:22:10 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:06:18.624 14:22:10 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:18.624 14:22:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:18.881 14:22:10 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:06:18.881 14:22:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:19.139 14:22:11 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:06:19.139 INFO: shutting down applications... 
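The teardown that follows ("Calling clear_*_subsystem ...") is json_config_clear: clear_config.py walks the running target over its RPC socket and deletes everything that was configured, after which the harness keeps re-checking, for up to 100 passes, that a fresh save_config filters down to an empty configuration. Condensed to a single pass it is roughly:

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  sock=/var/tmp/spdk_tgt.sock

  $spdk/test/json_config/clear_config.py -s $sock clear_config

  # strip global parameters, then fail unless nothing of substance is left
  $spdk/scripts/rpc.py -s $sock save_config \
      | $spdk/test/json_config/config_filter.py -method delete_global_parameters \
      | $spdk/test/json_config/config_filter.py -method check_empty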
00:06:19.139 14:22:11 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:06:19.139 14:22:11 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:06:19.139 14:22:11 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:06:19.139 14:22:11 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:21.037 Calling clear_iscsi_subsystem 00:06:21.037 Calling clear_nvmf_subsystem 00:06:21.037 Calling clear_nbd_subsystem 00:06:21.037 Calling clear_ublk_subsystem 00:06:21.037 Calling clear_vhost_blk_subsystem 00:06:21.037 Calling clear_vhost_scsi_subsystem 00:06:21.037 Calling clear_bdev_subsystem 00:06:21.037 14:22:12 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:21.037 14:22:12 json_config -- json_config/json_config.sh@350 -- # count=100 00:06:21.037 14:22:12 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:06:21.037 14:22:12 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:21.037 14:22:12 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:21.037 14:22:12 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:21.295 14:22:13 json_config -- json_config/json_config.sh@352 -- # break 00:06:21.295 14:22:13 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:06:21.295 14:22:13 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:06:21.295 14:22:13 json_config -- json_config/common.sh@31 -- # local app=target 00:06:21.295 14:22:13 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:21.295 14:22:13 json_config -- json_config/common.sh@35 -- # [[ -n 1235559 ]] 00:06:21.295 14:22:13 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1235559 00:06:21.295 14:22:13 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:21.295 14:22:13 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:21.295 14:22:13 json_config -- json_config/common.sh@41 -- # kill -0 1235559 00:06:21.295 14:22:13 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:21.861 14:22:13 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:21.861 14:22:13 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:21.861 14:22:13 json_config -- json_config/common.sh@41 -- # kill -0 1235559 00:06:21.861 14:22:13 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:21.861 14:22:13 json_config -- json_config/common.sh@43 -- # break 00:06:21.861 14:22:13 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:21.861 14:22:13 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:21.861 SPDK target shutdown done 00:06:21.861 14:22:13 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:06:21.861 INFO: relaunching applications... 
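The relaunch below closes the loop the test is really about: the configuration captured earlier with save_config is handed straight back to a brand-new target via --json, which must come up with the same malloc bdevs and the NVMe-oF TCP listener without any further RPC calls. In outline, with the paths this job uses:

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  sock=/var/tmp/spdk_tgt.sock
  cfg=$spdk/spdk_tgt_config.json

  $spdk/scripts/rpc.py -s $sock save_config > "$cfg"    # capture the live configuration
  # ... stop the old target, then boot a fresh one directly from the JSON file
  $spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r $sock --json "$cfg"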
00:06:21.861 14:22:13 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:21.861 14:22:13 json_config -- json_config/common.sh@9 -- # local app=target 00:06:21.861 14:22:13 json_config -- json_config/common.sh@10 -- # shift 00:06:21.861 14:22:13 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:21.861 14:22:13 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:21.861 14:22:13 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:21.861 14:22:13 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:21.861 14:22:13 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:21.861 14:22:13 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1236835 00:06:21.861 14:22:13 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:21.861 Waiting for target to run... 00:06:21.861 14:22:13 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:21.861 14:22:13 json_config -- json_config/common.sh@25 -- # waitforlisten 1236835 /var/tmp/spdk_tgt.sock 00:06:21.861 14:22:13 json_config -- common/autotest_common.sh@831 -- # '[' -z 1236835 ']' 00:06:21.861 14:22:13 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:21.861 14:22:13 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:21.861 14:22:13 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:21.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:21.861 14:22:13 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:21.861 14:22:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:21.861 [2024-11-02 14:22:13.752944] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:21.861 [2024-11-02 14:22:13.753036] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1236835 ] 00:06:22.119 [2024-11-02 14:22:14.111542] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.378 [2024-11-02 14:22:14.175800] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.658 [2024-11-02 14:22:17.221756] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:25.658 [2024-11-02 14:22:17.254241] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:25.658 14:22:17 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:25.658 14:22:17 json_config -- common/autotest_common.sh@864 -- # return 0 00:06:25.658 14:22:17 json_config -- json_config/common.sh@26 -- # echo '' 00:06:25.658 00:06:25.659 14:22:17 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:06:25.659 14:22:17 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:25.659 INFO: Checking if target configuration is the same... 
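The comparison that follows is deliberately order-insensitive: json_diff.sh pipes both inputs, the live save_config output (arriving on /dev/fd/62) and the on-disk spdk_tgt_config.json, through config_filter.py -method sort into temp files and then runs diff -u, so only genuine content differences make it exit non-zero. Roughly:

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  live=$(mktemp); saved=$(mktemp)

  $spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
      | $spdk/test/json_config/config_filter.py -method sort > "$live"
  $spdk/test/json_config/config_filter.py -method sort \
      < "$spdk/spdk_tgt_config.json" > "$saved"

  diff -u "$live" "$saved" && echo 'INFO: JSON config files are the same'
  rm "$live" "$saved"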
00:06:25.659 14:22:17 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:25.659 14:22:17 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:06:25.659 14:22:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:25.659 + '[' 2 -ne 2 ']' 00:06:25.659 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:25.659 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:25.659 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:25.659 +++ basename /dev/fd/62 00:06:25.659 ++ mktemp /tmp/62.XXX 00:06:25.659 + tmp_file_1=/tmp/62.79C 00:06:25.659 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:25.659 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:25.659 + tmp_file_2=/tmp/spdk_tgt_config.json.Er2 00:06:25.659 + ret=0 00:06:25.659 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:25.659 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:25.917 + diff -u /tmp/62.79C /tmp/spdk_tgt_config.json.Er2 00:06:25.917 + echo 'INFO: JSON config files are the same' 00:06:25.917 INFO: JSON config files are the same 00:06:25.917 + rm /tmp/62.79C /tmp/spdk_tgt_config.json.Er2 00:06:25.917 + exit 0 00:06:25.917 14:22:17 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:06:25.917 14:22:17 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:25.917 INFO: changing configuration and checking if this can be detected... 00:06:25.917 14:22:17 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:25.917 14:22:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:26.174 14:22:18 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:26.174 14:22:18 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:06:26.174 14:22:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:26.174 + '[' 2 -ne 2 ']' 00:06:26.174 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:26.174 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:06:26.174 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:26.174 +++ basename /dev/fd/62 00:06:26.174 ++ mktemp /tmp/62.XXX 00:06:26.174 + tmp_file_1=/tmp/62.PkG 00:06:26.174 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:26.174 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:26.174 + tmp_file_2=/tmp/spdk_tgt_config.json.Xwg 00:06:26.174 + ret=0 00:06:26.174 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:26.432 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:26.432 + diff -u /tmp/62.PkG /tmp/spdk_tgt_config.json.Xwg 00:06:26.432 + ret=1 00:06:26.432 + echo '=== Start of file: /tmp/62.PkG ===' 00:06:26.432 + cat /tmp/62.PkG 00:06:26.432 + echo '=== End of file: /tmp/62.PkG ===' 00:06:26.432 + echo '' 00:06:26.432 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Xwg ===' 00:06:26.432 + cat /tmp/spdk_tgt_config.json.Xwg 00:06:26.690 + echo '=== End of file: /tmp/spdk_tgt_config.json.Xwg ===' 00:06:26.690 + echo '' 00:06:26.690 + rm /tmp/62.PkG /tmp/spdk_tgt_config.json.Xwg 00:06:26.690 + exit 1 00:06:26.690 14:22:18 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:06:26.690 INFO: configuration change detected. 00:06:26.690 14:22:18 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:06:26.690 14:22:18 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:06:26.690 14:22:18 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:26.690 14:22:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:26.690 14:22:18 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:06:26.690 14:22:18 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:06:26.690 14:22:18 json_config -- json_config/json_config.sh@324 -- # [[ -n 1236835 ]] 00:06:26.690 14:22:18 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:06:26.690 14:22:18 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:06:26.690 14:22:18 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:26.690 14:22:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:26.690 14:22:18 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:06:26.690 14:22:18 json_config -- json_config/json_config.sh@200 -- # uname -s 00:06:26.690 14:22:18 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:06:26.690 14:22:18 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:06:26.690 14:22:18 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:06:26.690 14:22:18 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:06:26.690 14:22:18 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:26.690 14:22:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:26.690 14:22:18 json_config -- json_config/json_config.sh@330 -- # killprocess 1236835 00:06:26.690 14:22:18 json_config -- common/autotest_common.sh@950 -- # '[' -z 1236835 ']' 00:06:26.690 14:22:18 json_config -- common/autotest_common.sh@954 -- # kill -0 1236835 00:06:26.690 14:22:18 json_config -- common/autotest_common.sh@955 -- # uname 00:06:26.690 14:22:18 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:26.690 14:22:18 
json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1236835 00:06:26.690 14:22:18 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:26.690 14:22:18 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:26.690 14:22:18 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1236835' 00:06:26.690 killing process with pid 1236835 00:06:26.690 14:22:18 json_config -- common/autotest_common.sh@969 -- # kill 1236835 00:06:26.690 14:22:18 json_config -- common/autotest_common.sh@974 -- # wait 1236835 00:06:28.588 14:22:20 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:28.588 14:22:20 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:06:28.588 14:22:20 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:28.588 14:22:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:28.588 14:22:20 json_config -- json_config/json_config.sh@335 -- # return 0 00:06:28.588 14:22:20 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:06:28.588 INFO: Success 00:06:28.588 00:06:28.588 real 0m16.394s 00:06:28.589 user 0m18.547s 00:06:28.589 sys 0m2.077s 00:06:28.589 14:22:20 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:28.589 14:22:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:28.589 ************************************ 00:06:28.589 END TEST json_config 00:06:28.589 ************************************ 00:06:28.589 14:22:20 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:28.589 14:22:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:28.589 14:22:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:28.589 14:22:20 -- common/autotest_common.sh@10 -- # set +x 00:06:28.589 ************************************ 00:06:28.589 START TEST json_config_extra_key 00:06:28.589 ************************************ 00:06:28.589 14:22:20 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:28.589 14:22:20 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:28.589 14:22:20 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:06:28.589 14:22:20 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:28.589 14:22:20 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:28.589 14:22:20 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:28.589 14:22:20 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:28.589 14:22:20 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:28.589 14:22:20 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:28.589 14:22:20 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:28.589 14:22:20 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:28.589 14:22:20 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:28.589 14:22:20 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:28.589 14:22:20 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:06:28.589 14:22:20 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:28.589 14:22:20 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:28.589 14:22:20 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:28.589 14:22:20 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:28.589 14:22:20 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:28.589 14:22:20 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:28.589 14:22:20 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:28.589 14:22:20 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:28.589 14:22:20 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:28.589 14:22:20 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:28.589 14:22:20 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:28.589 14:22:20 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:28.589 14:22:20 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:28.589 14:22:20 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:28.589 14:22:20 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:28.589 14:22:20 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:28.589 14:22:20 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:28.589 14:22:20 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:28.589 14:22:20 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:28.589 14:22:20 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:28.589 14:22:20 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:28.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.589 --rc genhtml_branch_coverage=1 00:06:28.589 --rc genhtml_function_coverage=1 00:06:28.589 --rc genhtml_legend=1 00:06:28.589 --rc geninfo_all_blocks=1 00:06:28.589 --rc geninfo_unexecuted_blocks=1 00:06:28.589 00:06:28.589 ' 00:06:28.589 14:22:20 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:28.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.589 --rc genhtml_branch_coverage=1 00:06:28.589 --rc genhtml_function_coverage=1 00:06:28.589 --rc genhtml_legend=1 00:06:28.589 --rc geninfo_all_blocks=1 00:06:28.589 --rc geninfo_unexecuted_blocks=1 00:06:28.589 00:06:28.589 ' 00:06:28.589 14:22:20 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:28.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.589 --rc genhtml_branch_coverage=1 00:06:28.589 --rc genhtml_function_coverage=1 00:06:28.589 --rc genhtml_legend=1 00:06:28.589 --rc geninfo_all_blocks=1 00:06:28.589 --rc geninfo_unexecuted_blocks=1 00:06:28.589 00:06:28.589 ' 00:06:28.589 14:22:20 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:28.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.589 --rc genhtml_branch_coverage=1 00:06:28.589 --rc genhtml_function_coverage=1 00:06:28.589 --rc genhtml_legend=1 00:06:28.589 --rc geninfo_all_blocks=1 00:06:28.589 --rc geninfo_unexecuted_blocks=1 00:06:28.589 00:06:28.589 ' 00:06:28.589 14:22:20 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:28.589 14:22:20 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:28.589 14:22:20 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:28.589 14:22:20 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:28.589 14:22:20 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:28.589 14:22:20 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:28.589 14:22:20 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:28.589 14:22:20 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:28.589 14:22:20 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:28.589 14:22:20 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:28.589 14:22:20 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:28.589 14:22:20 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:28.589 14:22:20 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:28.589 14:22:20 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:28.589 14:22:20 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:28.589 14:22:20 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:28.589 14:22:20 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:28.589 14:22:20 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:28.589 14:22:20 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:28.589 14:22:20 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:28.589 14:22:20 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:28.589 14:22:20 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:28.589 14:22:20 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:28.589 14:22:20 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.589 14:22:20 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.589 14:22:20 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.589 14:22:20 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:28.589 14:22:20 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.589 14:22:20 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:28.589 14:22:20 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:28.589 14:22:20 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:28.589 14:22:20 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:28.589 14:22:20 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:28.589 14:22:20 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:28.589 14:22:20 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:28.589 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:28.589 14:22:20 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:28.589 14:22:20 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:28.589 14:22:20 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:28.589 14:22:20 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:28.589 14:22:20 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:28.589 14:22:20 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:28.589 14:22:20 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:28.589 14:22:20 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:28.589 14:22:20 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:28.589 14:22:20 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:28.589 14:22:20 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:28.590 14:22:20 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:28.590 14:22:20 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:28.590 14:22:20 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:28.590 INFO: launching applications... 
00:06:28.590 14:22:20 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:28.590 14:22:20 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:28.590 14:22:20 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:28.590 14:22:20 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:28.590 14:22:20 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:28.590 14:22:20 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:28.590 14:22:20 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:28.590 14:22:20 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:28.590 14:22:20 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1237684 00:06:28.590 14:22:20 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:28.590 14:22:20 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:28.590 Waiting for target to run... 00:06:28.590 14:22:20 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1237684 /var/tmp/spdk_tgt.sock 00:06:28.590 14:22:20 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 1237684 ']' 00:06:28.590 14:22:20 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:28.590 14:22:20 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:28.590 14:22:20 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:28.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:28.590 14:22:20 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:28.590 14:22:20 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:28.590 [2024-11-02 14:22:20.450802] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:28.590 [2024-11-02 14:22:20.450895] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1237684 ] 00:06:29.155 [2024-11-02 14:22:20.957712] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.156 [2024-11-02 14:22:21.035175] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.414 14:22:21 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:29.414 14:22:21 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:06:29.414 14:22:21 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:29.414 00:06:29.414 14:22:21 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:29.414 INFO: shutting down applications... 
00:06:29.414 14:22:21 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:29.414 14:22:21 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:29.414 14:22:21 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:29.414 14:22:21 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1237684 ]] 00:06:29.414 14:22:21 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1237684 00:06:29.414 14:22:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:29.414 14:22:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:29.414 14:22:21 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1237684 00:06:29.414 14:22:21 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:29.980 14:22:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:29.980 14:22:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:29.980 14:22:21 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1237684 00:06:29.980 14:22:21 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:29.980 14:22:21 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:29.980 14:22:21 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:29.980 14:22:21 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:29.980 SPDK target shutdown done 00:06:29.980 14:22:21 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:29.980 Success 00:06:29.980 00:06:29.980 real 0m1.709s 00:06:29.980 user 0m1.534s 00:06:29.980 sys 0m0.642s 00:06:29.980 14:22:21 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:29.980 14:22:21 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:29.980 ************************************ 00:06:29.980 END TEST json_config_extra_key 00:06:29.980 ************************************ 00:06:29.980 14:22:21 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:29.980 14:22:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:29.980 14:22:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:29.980 14:22:21 -- common/autotest_common.sh@10 -- # set +x 00:06:29.980 ************************************ 00:06:29.980 START TEST alias_rpc 00:06:29.981 ************************************ 00:06:29.981 14:22:21 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:30.242 * Looking for test storage... 
00:06:30.242 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:30.242 14:22:22 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:30.242 14:22:22 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:30.242 14:22:22 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:30.242 14:22:22 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:30.242 14:22:22 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:30.242 14:22:22 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:30.242 14:22:22 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:30.242 14:22:22 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:30.242 14:22:22 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:30.242 14:22:22 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:30.242 14:22:22 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:30.242 14:22:22 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:30.242 14:22:22 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:30.242 14:22:22 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:30.242 14:22:22 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:30.242 14:22:22 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:30.242 14:22:22 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:30.242 14:22:22 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:30.242 14:22:22 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:30.242 14:22:22 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:30.242 14:22:22 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:30.242 14:22:22 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:30.242 14:22:22 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:30.242 14:22:22 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:30.242 14:22:22 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:30.242 14:22:22 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:30.242 14:22:22 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:30.242 14:22:22 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:30.242 14:22:22 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:30.242 14:22:22 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:30.242 14:22:22 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:30.242 14:22:22 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:30.242 14:22:22 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:30.242 14:22:22 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:30.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.242 --rc genhtml_branch_coverage=1 00:06:30.242 --rc genhtml_function_coverage=1 00:06:30.242 --rc genhtml_legend=1 00:06:30.242 --rc geninfo_all_blocks=1 00:06:30.242 --rc geninfo_unexecuted_blocks=1 00:06:30.242 00:06:30.242 ' 00:06:30.242 14:22:22 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:30.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.242 --rc genhtml_branch_coverage=1 00:06:30.242 --rc genhtml_function_coverage=1 00:06:30.242 --rc genhtml_legend=1 00:06:30.242 --rc geninfo_all_blocks=1 00:06:30.242 --rc geninfo_unexecuted_blocks=1 00:06:30.242 00:06:30.242 ' 00:06:30.242 14:22:22 
alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:30.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.242 --rc genhtml_branch_coverage=1 00:06:30.242 --rc genhtml_function_coverage=1 00:06:30.242 --rc genhtml_legend=1 00:06:30.242 --rc geninfo_all_blocks=1 00:06:30.242 --rc geninfo_unexecuted_blocks=1 00:06:30.242 00:06:30.242 ' 00:06:30.242 14:22:22 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:30.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.242 --rc genhtml_branch_coverage=1 00:06:30.242 --rc genhtml_function_coverage=1 00:06:30.242 --rc genhtml_legend=1 00:06:30.242 --rc geninfo_all_blocks=1 00:06:30.242 --rc geninfo_unexecuted_blocks=1 00:06:30.242 00:06:30.242 ' 00:06:30.242 14:22:22 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:30.242 14:22:22 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1237995 00:06:30.242 14:22:22 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:30.242 14:22:22 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1237995 00:06:30.242 14:22:22 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 1237995 ']' 00:06:30.242 14:22:22 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.242 14:22:22 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:30.242 14:22:22 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.242 14:22:22 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:30.242 14:22:22 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.242 [2024-11-02 14:22:22.192370] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:30.242 [2024-11-02 14:22:22.192483] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1237995 ] 00:06:30.242 [2024-11-02 14:22:22.269802] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.572 [2024-11-02 14:22:22.370734] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.854 14:22:22 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:30.854 14:22:22 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:30.854 14:22:22 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:31.113 14:22:22 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1237995 00:06:31.113 14:22:22 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 1237995 ']' 00:06:31.113 14:22:22 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 1237995 00:06:31.113 14:22:22 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:06:31.113 14:22:22 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:31.113 14:22:22 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1237995 00:06:31.113 14:22:22 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:31.113 14:22:22 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:31.113 14:22:22 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1237995' 00:06:31.113 killing process with pid 1237995 00:06:31.113 14:22:22 alias_rpc -- common/autotest_common.sh@969 -- # kill 1237995 00:06:31.113 14:22:22 alias_rpc -- common/autotest_common.sh@974 -- # wait 1237995 00:06:31.371 00:06:31.371 real 0m1.419s 00:06:31.371 user 0m1.498s 00:06:31.371 sys 0m0.485s 00:06:31.371 14:22:23 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:31.371 14:22:23 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.371 ************************************ 00:06:31.371 END TEST alias_rpc 00:06:31.371 ************************************ 00:06:31.630 14:22:23 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:31.630 14:22:23 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:31.630 14:22:23 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:31.630 14:22:23 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:31.630 14:22:23 -- common/autotest_common.sh@10 -- # set +x 00:06:31.630 ************************************ 00:06:31.630 START TEST spdkcli_tcp 00:06:31.630 ************************************ 00:06:31.630 14:22:23 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:31.630 * Looking for test storage... 
00:06:31.630 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:31.630 14:22:23 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:31.630 14:22:23 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:06:31.630 14:22:23 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:31.630 14:22:23 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:31.630 14:22:23 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:31.630 14:22:23 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:31.630 14:22:23 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:31.630 14:22:23 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:31.630 14:22:23 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:31.630 14:22:23 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:31.630 14:22:23 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:31.630 14:22:23 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:31.630 14:22:23 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:31.630 14:22:23 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:31.630 14:22:23 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:31.630 14:22:23 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:31.630 14:22:23 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:31.630 14:22:23 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:31.630 14:22:23 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:31.630 14:22:23 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:31.630 14:22:23 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:31.630 14:22:23 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:31.630 14:22:23 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:31.630 14:22:23 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:31.630 14:22:23 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:31.630 14:22:23 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:31.630 14:22:23 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:31.630 14:22:23 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:31.630 14:22:23 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:31.630 14:22:23 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:31.630 14:22:23 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:31.630 14:22:23 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:31.630 14:22:23 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:31.630 14:22:23 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:31.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.630 --rc genhtml_branch_coverage=1 00:06:31.630 --rc genhtml_function_coverage=1 00:06:31.630 --rc genhtml_legend=1 00:06:31.630 --rc geninfo_all_blocks=1 00:06:31.630 --rc geninfo_unexecuted_blocks=1 00:06:31.630 00:06:31.630 ' 00:06:31.631 14:22:23 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:31.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.631 --rc genhtml_branch_coverage=1 00:06:31.631 --rc genhtml_function_coverage=1 00:06:31.631 --rc genhtml_legend=1 00:06:31.631 --rc geninfo_all_blocks=1 00:06:31.631 --rc 
geninfo_unexecuted_blocks=1 00:06:31.631 00:06:31.631 ' 00:06:31.631 14:22:23 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:31.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.631 --rc genhtml_branch_coverage=1 00:06:31.631 --rc genhtml_function_coverage=1 00:06:31.631 --rc genhtml_legend=1 00:06:31.631 --rc geninfo_all_blocks=1 00:06:31.631 --rc geninfo_unexecuted_blocks=1 00:06:31.631 00:06:31.631 ' 00:06:31.631 14:22:23 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:31.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.631 --rc genhtml_branch_coverage=1 00:06:31.631 --rc genhtml_function_coverage=1 00:06:31.631 --rc genhtml_legend=1 00:06:31.631 --rc geninfo_all_blocks=1 00:06:31.631 --rc geninfo_unexecuted_blocks=1 00:06:31.631 00:06:31.631 ' 00:06:31.631 14:22:23 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:31.631 14:22:23 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:31.631 14:22:23 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:31.631 14:22:23 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:31.631 14:22:23 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:31.631 14:22:23 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:31.631 14:22:23 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:31.631 14:22:23 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:31.631 14:22:23 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:31.631 14:22:23 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1238201 00:06:31.631 14:22:23 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:31.631 14:22:23 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1238201 00:06:31.631 14:22:23 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 1238201 ']' 00:06:31.631 14:22:23 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.631 14:22:23 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:31.631 14:22:23 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.631 14:22:23 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:31.631 14:22:23 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:31.631 [2024-11-02 14:22:23.668090] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:31.631 [2024-11-02 14:22:23.668175] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1238201 ] 00:06:31.889 [2024-11-02 14:22:23.726693] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:31.889 [2024-11-02 14:22:23.815532] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.889 [2024-11-02 14:22:23.815536] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.147 14:22:24 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:32.147 14:22:24 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:06:32.147 14:22:24 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1238325 00:06:32.147 14:22:24 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:32.147 14:22:24 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:32.405 [ 00:06:32.405 "bdev_malloc_delete", 00:06:32.405 "bdev_malloc_create", 00:06:32.405 "bdev_null_resize", 00:06:32.405 "bdev_null_delete", 00:06:32.405 "bdev_null_create", 00:06:32.405 "bdev_nvme_cuse_unregister", 00:06:32.405 "bdev_nvme_cuse_register", 00:06:32.405 "bdev_opal_new_user", 00:06:32.405 "bdev_opal_set_lock_state", 00:06:32.405 "bdev_opal_delete", 00:06:32.405 "bdev_opal_get_info", 00:06:32.405 "bdev_opal_create", 00:06:32.405 "bdev_nvme_opal_revert", 00:06:32.405 "bdev_nvme_opal_init", 00:06:32.405 "bdev_nvme_send_cmd", 00:06:32.405 "bdev_nvme_set_keys", 00:06:32.405 "bdev_nvme_get_path_iostat", 00:06:32.405 "bdev_nvme_get_mdns_discovery_info", 00:06:32.405 "bdev_nvme_stop_mdns_discovery", 00:06:32.406 "bdev_nvme_start_mdns_discovery", 00:06:32.406 "bdev_nvme_set_multipath_policy", 00:06:32.406 "bdev_nvme_set_preferred_path", 00:06:32.406 "bdev_nvme_get_io_paths", 00:06:32.406 "bdev_nvme_remove_error_injection", 00:06:32.406 "bdev_nvme_add_error_injection", 00:06:32.406 "bdev_nvme_get_discovery_info", 00:06:32.406 "bdev_nvme_stop_discovery", 00:06:32.406 "bdev_nvme_start_discovery", 00:06:32.406 "bdev_nvme_get_controller_health_info", 00:06:32.406 "bdev_nvme_disable_controller", 00:06:32.406 "bdev_nvme_enable_controller", 00:06:32.406 "bdev_nvme_reset_controller", 00:06:32.406 "bdev_nvme_get_transport_statistics", 00:06:32.406 "bdev_nvme_apply_firmware", 00:06:32.406 "bdev_nvme_detach_controller", 00:06:32.406 "bdev_nvme_get_controllers", 00:06:32.406 "bdev_nvme_attach_controller", 00:06:32.406 "bdev_nvme_set_hotplug", 00:06:32.406 "bdev_nvme_set_options", 00:06:32.406 "bdev_passthru_delete", 00:06:32.406 "bdev_passthru_create", 00:06:32.406 "bdev_lvol_set_parent_bdev", 00:06:32.406 "bdev_lvol_set_parent", 00:06:32.406 "bdev_lvol_check_shallow_copy", 00:06:32.406 "bdev_lvol_start_shallow_copy", 00:06:32.406 "bdev_lvol_grow_lvstore", 00:06:32.406 "bdev_lvol_get_lvols", 00:06:32.406 "bdev_lvol_get_lvstores", 00:06:32.406 "bdev_lvol_delete", 00:06:32.406 "bdev_lvol_set_read_only", 00:06:32.406 "bdev_lvol_resize", 00:06:32.406 "bdev_lvol_decouple_parent", 00:06:32.406 "bdev_lvol_inflate", 00:06:32.406 "bdev_lvol_rename", 00:06:32.406 "bdev_lvol_clone_bdev", 00:06:32.406 "bdev_lvol_clone", 00:06:32.406 "bdev_lvol_snapshot", 00:06:32.406 "bdev_lvol_create", 00:06:32.406 "bdev_lvol_delete_lvstore", 00:06:32.406 "bdev_lvol_rename_lvstore", 
00:06:32.406 "bdev_lvol_create_lvstore", 00:06:32.406 "bdev_raid_set_options", 00:06:32.406 "bdev_raid_remove_base_bdev", 00:06:32.406 "bdev_raid_add_base_bdev", 00:06:32.406 "bdev_raid_delete", 00:06:32.406 "bdev_raid_create", 00:06:32.406 "bdev_raid_get_bdevs", 00:06:32.406 "bdev_error_inject_error", 00:06:32.406 "bdev_error_delete", 00:06:32.406 "bdev_error_create", 00:06:32.406 "bdev_split_delete", 00:06:32.406 "bdev_split_create", 00:06:32.406 "bdev_delay_delete", 00:06:32.406 "bdev_delay_create", 00:06:32.406 "bdev_delay_update_latency", 00:06:32.406 "bdev_zone_block_delete", 00:06:32.406 "bdev_zone_block_create", 00:06:32.406 "blobfs_create", 00:06:32.406 "blobfs_detect", 00:06:32.406 "blobfs_set_cache_size", 00:06:32.406 "bdev_aio_delete", 00:06:32.406 "bdev_aio_rescan", 00:06:32.406 "bdev_aio_create", 00:06:32.406 "bdev_ftl_set_property", 00:06:32.406 "bdev_ftl_get_properties", 00:06:32.406 "bdev_ftl_get_stats", 00:06:32.406 "bdev_ftl_unmap", 00:06:32.406 "bdev_ftl_unload", 00:06:32.406 "bdev_ftl_delete", 00:06:32.406 "bdev_ftl_load", 00:06:32.406 "bdev_ftl_create", 00:06:32.406 "bdev_virtio_attach_controller", 00:06:32.406 "bdev_virtio_scsi_get_devices", 00:06:32.406 "bdev_virtio_detach_controller", 00:06:32.406 "bdev_virtio_blk_set_hotplug", 00:06:32.406 "bdev_iscsi_delete", 00:06:32.406 "bdev_iscsi_create", 00:06:32.406 "bdev_iscsi_set_options", 00:06:32.406 "accel_error_inject_error", 00:06:32.406 "ioat_scan_accel_module", 00:06:32.406 "dsa_scan_accel_module", 00:06:32.406 "iaa_scan_accel_module", 00:06:32.406 "vfu_virtio_create_fs_endpoint", 00:06:32.406 "vfu_virtio_create_scsi_endpoint", 00:06:32.406 "vfu_virtio_scsi_remove_target", 00:06:32.406 "vfu_virtio_scsi_add_target", 00:06:32.406 "vfu_virtio_create_blk_endpoint", 00:06:32.406 "vfu_virtio_delete_endpoint", 00:06:32.406 "keyring_file_remove_key", 00:06:32.406 "keyring_file_add_key", 00:06:32.406 "keyring_linux_set_options", 00:06:32.406 "fsdev_aio_delete", 00:06:32.406 "fsdev_aio_create", 00:06:32.406 "iscsi_get_histogram", 00:06:32.406 "iscsi_enable_histogram", 00:06:32.406 "iscsi_set_options", 00:06:32.406 "iscsi_get_auth_groups", 00:06:32.406 "iscsi_auth_group_remove_secret", 00:06:32.406 "iscsi_auth_group_add_secret", 00:06:32.406 "iscsi_delete_auth_group", 00:06:32.406 "iscsi_create_auth_group", 00:06:32.406 "iscsi_set_discovery_auth", 00:06:32.406 "iscsi_get_options", 00:06:32.406 "iscsi_target_node_request_logout", 00:06:32.406 "iscsi_target_node_set_redirect", 00:06:32.406 "iscsi_target_node_set_auth", 00:06:32.406 "iscsi_target_node_add_lun", 00:06:32.406 "iscsi_get_stats", 00:06:32.406 "iscsi_get_connections", 00:06:32.406 "iscsi_portal_group_set_auth", 00:06:32.406 "iscsi_start_portal_group", 00:06:32.406 "iscsi_delete_portal_group", 00:06:32.406 "iscsi_create_portal_group", 00:06:32.406 "iscsi_get_portal_groups", 00:06:32.406 "iscsi_delete_target_node", 00:06:32.406 "iscsi_target_node_remove_pg_ig_maps", 00:06:32.406 "iscsi_target_node_add_pg_ig_maps", 00:06:32.406 "iscsi_create_target_node", 00:06:32.406 "iscsi_get_target_nodes", 00:06:32.406 "iscsi_delete_initiator_group", 00:06:32.406 "iscsi_initiator_group_remove_initiators", 00:06:32.406 "iscsi_initiator_group_add_initiators", 00:06:32.406 "iscsi_create_initiator_group", 00:06:32.406 "iscsi_get_initiator_groups", 00:06:32.406 "nvmf_set_crdt", 00:06:32.406 "nvmf_set_config", 00:06:32.406 "nvmf_set_max_subsystems", 00:06:32.406 "nvmf_stop_mdns_prr", 00:06:32.406 "nvmf_publish_mdns_prr", 00:06:32.406 "nvmf_subsystem_get_listeners", 00:06:32.406 
"nvmf_subsystem_get_qpairs", 00:06:32.406 "nvmf_subsystem_get_controllers", 00:06:32.406 "nvmf_get_stats", 00:06:32.406 "nvmf_get_transports", 00:06:32.406 "nvmf_create_transport", 00:06:32.406 "nvmf_get_targets", 00:06:32.406 "nvmf_delete_target", 00:06:32.406 "nvmf_create_target", 00:06:32.406 "nvmf_subsystem_allow_any_host", 00:06:32.406 "nvmf_subsystem_set_keys", 00:06:32.406 "nvmf_subsystem_remove_host", 00:06:32.406 "nvmf_subsystem_add_host", 00:06:32.406 "nvmf_ns_remove_host", 00:06:32.406 "nvmf_ns_add_host", 00:06:32.406 "nvmf_subsystem_remove_ns", 00:06:32.406 "nvmf_subsystem_set_ns_ana_group", 00:06:32.406 "nvmf_subsystem_add_ns", 00:06:32.406 "nvmf_subsystem_listener_set_ana_state", 00:06:32.406 "nvmf_discovery_get_referrals", 00:06:32.406 "nvmf_discovery_remove_referral", 00:06:32.406 "nvmf_discovery_add_referral", 00:06:32.406 "nvmf_subsystem_remove_listener", 00:06:32.406 "nvmf_subsystem_add_listener", 00:06:32.406 "nvmf_delete_subsystem", 00:06:32.406 "nvmf_create_subsystem", 00:06:32.406 "nvmf_get_subsystems", 00:06:32.406 "env_dpdk_get_mem_stats", 00:06:32.406 "nbd_get_disks", 00:06:32.406 "nbd_stop_disk", 00:06:32.406 "nbd_start_disk", 00:06:32.406 "ublk_recover_disk", 00:06:32.406 "ublk_get_disks", 00:06:32.406 "ublk_stop_disk", 00:06:32.406 "ublk_start_disk", 00:06:32.406 "ublk_destroy_target", 00:06:32.406 "ublk_create_target", 00:06:32.406 "virtio_blk_create_transport", 00:06:32.406 "virtio_blk_get_transports", 00:06:32.406 "vhost_controller_set_coalescing", 00:06:32.406 "vhost_get_controllers", 00:06:32.406 "vhost_delete_controller", 00:06:32.406 "vhost_create_blk_controller", 00:06:32.406 "vhost_scsi_controller_remove_target", 00:06:32.406 "vhost_scsi_controller_add_target", 00:06:32.406 "vhost_start_scsi_controller", 00:06:32.406 "vhost_create_scsi_controller", 00:06:32.406 "thread_set_cpumask", 00:06:32.406 "scheduler_set_options", 00:06:32.406 "framework_get_governor", 00:06:32.406 "framework_get_scheduler", 00:06:32.406 "framework_set_scheduler", 00:06:32.406 "framework_get_reactors", 00:06:32.406 "thread_get_io_channels", 00:06:32.406 "thread_get_pollers", 00:06:32.406 "thread_get_stats", 00:06:32.406 "framework_monitor_context_switch", 00:06:32.406 "spdk_kill_instance", 00:06:32.406 "log_enable_timestamps", 00:06:32.406 "log_get_flags", 00:06:32.406 "log_clear_flag", 00:06:32.406 "log_set_flag", 00:06:32.406 "log_get_level", 00:06:32.406 "log_set_level", 00:06:32.406 "log_get_print_level", 00:06:32.406 "log_set_print_level", 00:06:32.406 "framework_enable_cpumask_locks", 00:06:32.406 "framework_disable_cpumask_locks", 00:06:32.406 "framework_wait_init", 00:06:32.406 "framework_start_init", 00:06:32.406 "scsi_get_devices", 00:06:32.406 "bdev_get_histogram", 00:06:32.406 "bdev_enable_histogram", 00:06:32.406 "bdev_set_qos_limit", 00:06:32.406 "bdev_set_qd_sampling_period", 00:06:32.406 "bdev_get_bdevs", 00:06:32.406 "bdev_reset_iostat", 00:06:32.406 "bdev_get_iostat", 00:06:32.406 "bdev_examine", 00:06:32.406 "bdev_wait_for_examine", 00:06:32.406 "bdev_set_options", 00:06:32.406 "accel_get_stats", 00:06:32.406 "accel_set_options", 00:06:32.406 "accel_set_driver", 00:06:32.406 "accel_crypto_key_destroy", 00:06:32.406 "accel_crypto_keys_get", 00:06:32.406 "accel_crypto_key_create", 00:06:32.406 "accel_assign_opc", 00:06:32.406 "accel_get_module_info", 00:06:32.406 "accel_get_opc_assignments", 00:06:32.406 "vmd_rescan", 00:06:32.406 "vmd_remove_device", 00:06:32.406 "vmd_enable", 00:06:32.406 "sock_get_default_impl", 00:06:32.406 "sock_set_default_impl", 
00:06:32.406 "sock_impl_set_options", 00:06:32.406 "sock_impl_get_options", 00:06:32.406 "iobuf_get_stats", 00:06:32.406 "iobuf_set_options", 00:06:32.406 "keyring_get_keys", 00:06:32.406 "vfu_tgt_set_base_path", 00:06:32.406 "framework_get_pci_devices", 00:06:32.406 "framework_get_config", 00:06:32.406 "framework_get_subsystems", 00:06:32.406 "fsdev_set_opts", 00:06:32.406 "fsdev_get_opts", 00:06:32.406 "trace_get_info", 00:06:32.406 "trace_get_tpoint_group_mask", 00:06:32.406 "trace_disable_tpoint_group", 00:06:32.406 "trace_enable_tpoint_group", 00:06:32.406 "trace_clear_tpoint_mask", 00:06:32.406 "trace_set_tpoint_mask", 00:06:32.406 "notify_get_notifications", 00:06:32.407 "notify_get_types", 00:06:32.407 "spdk_get_version", 00:06:32.407 "rpc_get_methods" 00:06:32.407 ] 00:06:32.407 14:22:24 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:32.407 14:22:24 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:32.407 14:22:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:32.407 14:22:24 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:32.407 14:22:24 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1238201 00:06:32.407 14:22:24 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 1238201 ']' 00:06:32.407 14:22:24 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 1238201 00:06:32.407 14:22:24 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:06:32.407 14:22:24 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:32.407 14:22:24 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1238201 00:06:32.407 14:22:24 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:32.407 14:22:24 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:32.407 14:22:24 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1238201' 00:06:32.407 killing process with pid 1238201 00:06:32.407 14:22:24 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 1238201 00:06:32.407 14:22:24 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 1238201 00:06:32.973 00:06:32.973 real 0m1.374s 00:06:32.973 user 0m2.412s 00:06:32.973 sys 0m0.492s 00:06:32.973 14:22:24 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:32.973 14:22:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:32.973 ************************************ 00:06:32.973 END TEST spdkcli_tcp 00:06:32.973 ************************************ 00:06:32.973 14:22:24 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:32.973 14:22:24 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:32.973 14:22:24 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.973 14:22:24 -- common/autotest_common.sh@10 -- # set +x 00:06:32.973 ************************************ 00:06:32.973 START TEST dpdk_mem_utility 00:06:32.973 ************************************ 00:06:32.973 14:22:24 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:32.973 * Looking for test storage... 
00:06:32.973 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:32.973 14:22:24 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:32.973 14:22:24 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:06:32.973 14:22:24 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:33.231 14:22:25 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:33.231 14:22:25 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:33.231 14:22:25 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:33.231 14:22:25 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:33.231 14:22:25 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:33.231 14:22:25 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:33.231 14:22:25 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:33.232 14:22:25 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:33.232 14:22:25 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:33.232 14:22:25 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:33.232 14:22:25 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:33.232 14:22:25 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:33.232 14:22:25 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:33.232 14:22:25 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:33.232 14:22:25 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:33.232 14:22:25 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:33.232 14:22:25 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:33.232 14:22:25 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:33.232 14:22:25 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:33.232 14:22:25 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:33.232 14:22:25 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:33.232 14:22:25 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:33.232 14:22:25 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:33.232 14:22:25 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:33.232 14:22:25 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:33.232 14:22:25 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:33.232 14:22:25 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:33.232 14:22:25 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:33.232 14:22:25 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:33.232 14:22:25 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:33.232 14:22:25 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:33.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.232 --rc genhtml_branch_coverage=1 00:06:33.232 --rc genhtml_function_coverage=1 00:06:33.232 --rc genhtml_legend=1 00:06:33.232 --rc geninfo_all_blocks=1 00:06:33.232 --rc geninfo_unexecuted_blocks=1 00:06:33.232 00:06:33.232 ' 00:06:33.232 14:22:25 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:33.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.232 --rc 
genhtml_branch_coverage=1 00:06:33.232 --rc genhtml_function_coverage=1 00:06:33.232 --rc genhtml_legend=1 00:06:33.232 --rc geninfo_all_blocks=1 00:06:33.232 --rc geninfo_unexecuted_blocks=1 00:06:33.232 00:06:33.232 ' 00:06:33.232 14:22:25 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:33.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.232 --rc genhtml_branch_coverage=1 00:06:33.232 --rc genhtml_function_coverage=1 00:06:33.232 --rc genhtml_legend=1 00:06:33.232 --rc geninfo_all_blocks=1 00:06:33.232 --rc geninfo_unexecuted_blocks=1 00:06:33.232 00:06:33.232 ' 00:06:33.232 14:22:25 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:33.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.232 --rc genhtml_branch_coverage=1 00:06:33.232 --rc genhtml_function_coverage=1 00:06:33.232 --rc genhtml_legend=1 00:06:33.232 --rc geninfo_all_blocks=1 00:06:33.232 --rc geninfo_unexecuted_blocks=1 00:06:33.232 00:06:33.232 ' 00:06:33.232 14:22:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:33.232 14:22:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1238530 00:06:33.232 14:22:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:33.232 14:22:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1238530 00:06:33.232 14:22:25 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 1238530 ']' 00:06:33.232 14:22:25 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.232 14:22:25 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:33.232 14:22:25 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.232 14:22:25 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:33.232 14:22:25 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:33.232 [2024-11-02 14:22:25.094417] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:33.232 [2024-11-02 14:22:25.094498] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1238530 ] 00:06:33.232 [2024-11-02 14:22:25.154611] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.232 [2024-11-02 14:22:25.247433] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.490 14:22:25 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:33.490 14:22:25 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:06:33.490 14:22:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:33.490 14:22:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:33.490 14:22:25 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.490 14:22:25 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:33.490 { 00:06:33.490 "filename": "/tmp/spdk_mem_dump.txt" 00:06:33.490 } 00:06:33.490 14:22:25 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.490 14:22:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:33.749 DPDK memory size 860.000000 MiB in 1 heap(s) 00:06:33.749 1 heaps totaling size 860.000000 MiB 00:06:33.749 size: 860.000000 MiB heap id: 0 00:06:33.749 end heaps---------- 00:06:33.749 9 mempools totaling size 642.649841 MiB 00:06:33.749 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:33.749 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:33.749 size: 92.545471 MiB name: bdev_io_1238530 00:06:33.749 size: 51.011292 MiB name: evtpool_1238530 00:06:33.749 size: 50.003479 MiB name: msgpool_1238530 00:06:33.749 size: 36.509338 MiB name: fsdev_io_1238530 00:06:33.749 size: 21.763794 MiB name: PDU_Pool 00:06:33.749 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:33.749 size: 0.026123 MiB name: Session_Pool 00:06:33.749 end mempools------- 00:06:33.749 6 memzones totaling size 4.142822 MiB 00:06:33.749 size: 1.000366 MiB name: RG_ring_0_1238530 00:06:33.749 size: 1.000366 MiB name: RG_ring_1_1238530 00:06:33.749 size: 1.000366 MiB name: RG_ring_4_1238530 00:06:33.749 size: 1.000366 MiB name: RG_ring_5_1238530 00:06:33.749 size: 0.125366 MiB name: RG_ring_2_1238530 00:06:33.749 size: 0.015991 MiB name: RG_ring_3_1238530 00:06:33.749 end memzones------- 00:06:33.749 14:22:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:33.749 heap id: 0 total size: 860.000000 MiB number of busy elements: 44 number of free elements: 16 00:06:33.749 list of free elements. 
size: 13.984680 MiB 00:06:33.749 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:33.749 element at address: 0x200000800000 with size: 1.996948 MiB 00:06:33.749 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:06:33.749 element at address: 0x20001be00000 with size: 0.999878 MiB 00:06:33.749 element at address: 0x200034a00000 with size: 0.994446 MiB 00:06:33.749 element at address: 0x200009600000 with size: 0.959839 MiB 00:06:33.749 element at address: 0x200015e00000 with size: 0.954285 MiB 00:06:33.749 element at address: 0x20001c000000 with size: 0.936584 MiB 00:06:33.749 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:33.749 element at address: 0x20001d800000 with size: 0.582886 MiB 00:06:33.749 element at address: 0x200003e00000 with size: 0.495605 MiB 00:06:33.749 element at address: 0x20000d800000 with size: 0.490723 MiB 00:06:33.749 element at address: 0x20001c200000 with size: 0.485657 MiB 00:06:33.749 element at address: 0x200007000000 with size: 0.481934 MiB 00:06:33.749 element at address: 0x20002ac00000 with size: 0.410034 MiB 00:06:33.749 element at address: 0x200003a00000 with size: 0.354858 MiB 00:06:33.749 list of standard malloc elements. size: 199.218628 MiB 00:06:33.749 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:06:33.749 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:06:33.749 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:06:33.749 element at address: 0x20001befff80 with size: 1.000122 MiB 00:06:33.749 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:06:33.749 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:33.749 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:06:33.749 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:33.749 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:06:33.749 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:33.749 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:33.749 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:33.749 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:33.749 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:33.749 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:33.749 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:33.749 element at address: 0x200003a5ad80 with size: 0.000183 MiB 00:06:33.749 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:06:33.749 element at address: 0x200003a5f240 with size: 0.000183 MiB 00:06:33.749 element at address: 0x200003a7f500 with size: 0.000183 MiB 00:06:33.749 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:06:33.749 element at address: 0x200003aff880 with size: 0.000183 MiB 00:06:33.750 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:33.750 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:33.750 element at address: 0x200003e7ee00 with size: 0.000183 MiB 00:06:33.750 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:33.750 element at address: 0x20000707b600 with size: 0.000183 MiB 00:06:33.750 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:06:33.750 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:06:33.750 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:06:33.750 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:06:33.750 element at address: 0x20000d87dac0 with size: 0.000183 MiB 
00:06:33.750 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:06:33.750 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:06:33.750 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:06:33.750 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:06:33.750 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:06:33.750 element at address: 0x20001d895380 with size: 0.000183 MiB 00:06:33.750 element at address: 0x20001d895440 with size: 0.000183 MiB 00:06:33.750 element at address: 0x20002ac68f80 with size: 0.000183 MiB 00:06:33.750 element at address: 0x20002ac69040 with size: 0.000183 MiB 00:06:33.750 element at address: 0x20002ac6fc40 with size: 0.000183 MiB 00:06:33.750 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:06:33.750 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:06:33.750 list of memzone associated elements. size: 646.796692 MiB 00:06:33.750 element at address: 0x20001d895500 with size: 211.416748 MiB 00:06:33.750 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:33.750 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:06:33.750 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:33.750 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:06:33.750 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_1238530_0 00:06:33.750 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:33.750 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1238530_0 00:06:33.750 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:33.750 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1238530_0 00:06:33.750 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:06:33.750 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_1238530_0 00:06:33.750 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:06:33.750 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:33.750 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:06:33.750 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:33.750 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:33.750 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1238530 00:06:33.750 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:33.750 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1238530 00:06:33.750 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:33.750 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1238530 00:06:33.750 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:06:33.750 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:33.750 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:06:33.750 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:33.750 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:06:33.750 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:33.750 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:06:33.750 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:33.750 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:33.750 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1238530 00:06:33.750 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:33.750 associated memzone info: 
size: 1.000366 MiB name: RG_ring_1_1238530 00:06:33.750 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:06:33.750 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1238530 00:06:33.750 element at address: 0x200034afe940 with size: 1.000488 MiB 00:06:33.750 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1238530 00:06:33.750 element at address: 0x200003a7f680 with size: 0.500488 MiB 00:06:33.750 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_1238530 00:06:33.750 element at address: 0x200003e7eec0 with size: 0.500488 MiB 00:06:33.750 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1238530 00:06:33.750 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:06:33.750 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:33.750 element at address: 0x20000707b780 with size: 0.500488 MiB 00:06:33.750 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:33.750 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:06:33.750 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:33.750 element at address: 0x200003a5f300 with size: 0.125488 MiB 00:06:33.750 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1238530 00:06:33.750 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:06:33.750 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:33.750 element at address: 0x20002ac69100 with size: 0.023743 MiB 00:06:33.750 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:33.750 element at address: 0x200003a5b040 with size: 0.016113 MiB 00:06:33.750 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1238530 00:06:33.750 element at address: 0x20002ac6f240 with size: 0.002441 MiB 00:06:33.750 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:33.750 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:33.750 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1238530 00:06:33.750 element at address: 0x200003aff940 with size: 0.000305 MiB 00:06:33.750 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_1238530 00:06:33.750 element at address: 0x200003a5ae40 with size: 0.000305 MiB 00:06:33.750 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1238530 00:06:33.750 element at address: 0x20002ac6fd00 with size: 0.000305 MiB 00:06:33.750 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:33.750 14:22:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:33.750 14:22:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1238530 00:06:33.750 14:22:25 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 1238530 ']' 00:06:33.750 14:22:25 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 1238530 00:06:33.750 14:22:25 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:06:33.750 14:22:25 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:33.750 14:22:25 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1238530 00:06:33.750 14:22:25 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:33.750 14:22:25 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:33.750 14:22:25 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1238530' 
00:06:33.750 killing process with pid 1238530 00:06:33.750 14:22:25 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 1238530 00:06:33.750 14:22:25 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 1238530 00:06:34.317 00:06:34.317 real 0m1.223s 00:06:34.317 user 0m1.203s 00:06:34.317 sys 0m0.445s 00:06:34.317 14:22:26 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:34.317 14:22:26 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:34.317 ************************************ 00:06:34.317 END TEST dpdk_mem_utility 00:06:34.317 ************************************ 00:06:34.317 14:22:26 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:34.317 14:22:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:34.317 14:22:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:34.317 14:22:26 -- common/autotest_common.sh@10 -- # set +x 00:06:34.317 ************************************ 00:06:34.317 START TEST event 00:06:34.317 ************************************ 00:06:34.317 14:22:26 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:34.317 * Looking for test storage... 00:06:34.317 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:34.317 14:22:26 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:34.317 14:22:26 event -- common/autotest_common.sh@1681 -- # lcov --version 00:06:34.317 14:22:26 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:34.317 14:22:26 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:34.317 14:22:26 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:34.317 14:22:26 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:34.317 14:22:26 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:34.317 14:22:26 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:34.317 14:22:26 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:34.317 14:22:26 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:34.317 14:22:26 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:34.317 14:22:26 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:34.317 14:22:26 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:34.317 14:22:26 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:34.317 14:22:26 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:34.317 14:22:26 event -- scripts/common.sh@344 -- # case "$op" in 00:06:34.317 14:22:26 event -- scripts/common.sh@345 -- # : 1 00:06:34.317 14:22:26 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:34.317 14:22:26 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:34.317 14:22:26 event -- scripts/common.sh@365 -- # decimal 1 00:06:34.317 14:22:26 event -- scripts/common.sh@353 -- # local d=1 00:06:34.317 14:22:26 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:34.317 14:22:26 event -- scripts/common.sh@355 -- # echo 1 00:06:34.317 14:22:26 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:34.317 14:22:26 event -- scripts/common.sh@366 -- # decimal 2 00:06:34.317 14:22:26 event -- scripts/common.sh@353 -- # local d=2 00:06:34.317 14:22:26 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:34.317 14:22:26 event -- scripts/common.sh@355 -- # echo 2 00:06:34.317 14:22:26 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:34.317 14:22:26 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:34.317 14:22:26 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:34.317 14:22:26 event -- scripts/common.sh@368 -- # return 0 00:06:34.317 14:22:26 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:34.317 14:22:26 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:34.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.317 --rc genhtml_branch_coverage=1 00:06:34.317 --rc genhtml_function_coverage=1 00:06:34.317 --rc genhtml_legend=1 00:06:34.317 --rc geninfo_all_blocks=1 00:06:34.317 --rc geninfo_unexecuted_blocks=1 00:06:34.317 00:06:34.317 ' 00:06:34.317 14:22:26 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:34.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.317 --rc genhtml_branch_coverage=1 00:06:34.317 --rc genhtml_function_coverage=1 00:06:34.317 --rc genhtml_legend=1 00:06:34.317 --rc geninfo_all_blocks=1 00:06:34.317 --rc geninfo_unexecuted_blocks=1 00:06:34.317 00:06:34.317 ' 00:06:34.317 14:22:26 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:34.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.317 --rc genhtml_branch_coverage=1 00:06:34.317 --rc genhtml_function_coverage=1 00:06:34.317 --rc genhtml_legend=1 00:06:34.317 --rc geninfo_all_blocks=1 00:06:34.317 --rc geninfo_unexecuted_blocks=1 00:06:34.317 00:06:34.317 ' 00:06:34.317 14:22:26 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:34.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.317 --rc genhtml_branch_coverage=1 00:06:34.317 --rc genhtml_function_coverage=1 00:06:34.317 --rc genhtml_legend=1 00:06:34.317 --rc geninfo_all_blocks=1 00:06:34.317 --rc geninfo_unexecuted_blocks=1 00:06:34.317 00:06:34.317 ' 00:06:34.317 14:22:26 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:34.317 14:22:26 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:34.317 14:22:26 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:34.317 14:22:26 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:06:34.317 14:22:26 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:34.317 14:22:26 event -- common/autotest_common.sh@10 -- # set +x 00:06:34.317 ************************************ 00:06:34.317 START TEST event_perf 00:06:34.317 ************************************ 00:06:34.317 14:22:26 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:06:34.317 Running I/O for 1 seconds...[2024-11-02 14:22:26.339585] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:34.317 [2024-11-02 14:22:26.339650] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1238733 ] 00:06:34.575 [2024-11-02 14:22:26.399429] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:34.575 [2024-11-02 14:22:26.494401] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.575 [2024-11-02 14:22:26.494455] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:34.575 [2024-11-02 14:22:26.494572] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:34.575 [2024-11-02 14:22:26.494575] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.948 Running I/O for 1 seconds... 00:06:35.948 lcore 0: 229999 00:06:35.948 lcore 1: 230000 00:06:35.948 lcore 2: 229999 00:06:35.948 lcore 3: 229999 00:06:35.948 done. 00:06:35.948 00:06:35.948 real 0m1.253s 00:06:35.948 user 0m4.155s 00:06:35.948 sys 0m0.093s 00:06:35.948 14:22:27 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:35.948 14:22:27 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:35.948 ************************************ 00:06:35.948 END TEST event_perf 00:06:35.948 ************************************ 00:06:35.948 14:22:27 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:35.948 14:22:27 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:35.948 14:22:27 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:35.948 14:22:27 event -- common/autotest_common.sh@10 -- # set +x 00:06:35.948 ************************************ 00:06:35.948 START TEST event_reactor 00:06:35.948 ************************************ 00:06:35.948 14:22:27 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:35.948 [2024-11-02 14:22:27.643499] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
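The event_perf run above starts one reactor per core in the -m mask and, for the -t window, counts the events each lcore gets through; the per-lcore numbers printed after "done." appear to be exactly those counters, roughly 230k events per core in one second on this host. It can be rerun standalone with a different mask or duration (the values below are arbitrary):

  ./test/event/event_perf/event_perf -m 0x3 -t 5   # 2 cores, 5 second window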
00:06:35.948 [2024-11-02 14:22:27.643578] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1238890 ] 00:06:35.948 [2024-11-02 14:22:27.708546] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.948 [2024-11-02 14:22:27.798681] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.883 test_start 00:06:36.883 oneshot 00:06:36.883 tick 100 00:06:36.883 tick 100 00:06:36.883 tick 250 00:06:36.883 tick 100 00:06:36.883 tick 100 00:06:36.883 tick 100 00:06:36.883 tick 250 00:06:36.883 tick 500 00:06:36.883 tick 100 00:06:36.883 tick 100 00:06:36.883 tick 250 00:06:36.883 tick 100 00:06:36.883 tick 100 00:06:36.883 test_end 00:06:36.883 00:06:36.883 real 0m1.253s 00:06:36.883 user 0m1.168s 00:06:36.883 sys 0m0.080s 00:06:36.883 14:22:28 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:36.883 14:22:28 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:36.883 ************************************ 00:06:36.883 END TEST event_reactor 00:06:36.883 ************************************ 00:06:36.883 14:22:28 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:36.883 14:22:28 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:36.883 14:22:28 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:36.883 14:22:28 event -- common/autotest_common.sh@10 -- # set +x 00:06:36.883 ************************************ 00:06:36.883 START TEST event_reactor_perf 00:06:36.883 ************************************ 00:06:36.883 14:22:28 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:37.144 [2024-11-02 14:22:28.946020] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
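The event_reactor test above schedules a one-shot event plus what look like timed callbacks with 100, 250 and 500 tick periods, printing a line each time one fires until -t expires; that reading of the tick values is an assumption, the exact semantics are internal to the test binary. Running it by hand uses the same invocation the harness traced:

  ./test/event/reactor/reactor -t 1   # one-second reactor tick test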
00:06:37.144 [2024-11-02 14:22:28.946087] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1239042 ] 00:06:37.144 [2024-11-02 14:22:29.008151] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.144 [2024-11-02 14:22:29.102955] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.518 test_start 00:06:38.518 test_end 00:06:38.518 Performance: 354699 events per second 00:06:38.518 00:06:38.518 real 0m1.251s 00:06:38.518 user 0m1.163s 00:06:38.518 sys 0m0.082s 00:06:38.518 14:22:30 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:38.518 14:22:30 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:38.518 ************************************ 00:06:38.518 END TEST event_reactor_perf 00:06:38.518 ************************************ 00:06:38.518 14:22:30 event -- event/event.sh@49 -- # uname -s 00:06:38.518 14:22:30 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:38.518 14:22:30 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:38.518 14:22:30 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:38.518 14:22:30 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:38.518 14:22:30 event -- common/autotest_common.sh@10 -- # set +x 00:06:38.518 ************************************ 00:06:38.518 START TEST event_scheduler 00:06:38.518 ************************************ 00:06:38.518 14:22:30 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:38.518 * Looking for test storage... 
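reactor_perf above is a single-core microbenchmark: it appears to dispatch events back to back on one reactor for -t seconds and reports the resulting rate, about 355k events per second here. The number is mostly useful as a relative figure between hosts or scheduler settings; a longer window smooths out run-to-run noise:

  ./test/event/reactor_perf/reactor_perf -t 10   # longer run, steadier figure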
00:06:38.518 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:38.518 14:22:30 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:38.518 14:22:30 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:06:38.518 14:22:30 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:38.518 14:22:30 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:38.518 14:22:30 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:38.518 14:22:30 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:38.518 14:22:30 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:38.518 14:22:30 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:38.518 14:22:30 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:38.518 14:22:30 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:38.519 14:22:30 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:38.519 14:22:30 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:38.519 14:22:30 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:38.519 14:22:30 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:38.519 14:22:30 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:38.519 14:22:30 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:38.519 14:22:30 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:38.519 14:22:30 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:38.519 14:22:30 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:38.519 14:22:30 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:38.519 14:22:30 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:38.519 14:22:30 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:38.519 14:22:30 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:38.519 14:22:30 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:38.519 14:22:30 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:38.519 14:22:30 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:38.519 14:22:30 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:38.519 14:22:30 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:38.519 14:22:30 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:38.519 14:22:30 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:38.519 14:22:30 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:38.519 14:22:30 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:38.519 14:22:30 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:38.519 14:22:30 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:38.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.519 --rc genhtml_branch_coverage=1 00:06:38.519 --rc genhtml_function_coverage=1 00:06:38.519 --rc genhtml_legend=1 00:06:38.519 --rc geninfo_all_blocks=1 00:06:38.519 --rc geninfo_unexecuted_blocks=1 00:06:38.519 00:06:38.519 ' 00:06:38.519 14:22:30 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:38.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.519 --rc genhtml_branch_coverage=1 00:06:38.519 --rc genhtml_function_coverage=1 00:06:38.519 --rc genhtml_legend=1 00:06:38.519 --rc geninfo_all_blocks=1 00:06:38.519 --rc geninfo_unexecuted_blocks=1 00:06:38.519 00:06:38.519 ' 00:06:38.519 14:22:30 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:38.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.519 --rc genhtml_branch_coverage=1 00:06:38.519 --rc genhtml_function_coverage=1 00:06:38.519 --rc genhtml_legend=1 00:06:38.519 --rc geninfo_all_blocks=1 00:06:38.519 --rc geninfo_unexecuted_blocks=1 00:06:38.519 00:06:38.519 ' 00:06:38.519 14:22:30 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:38.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.519 --rc genhtml_branch_coverage=1 00:06:38.519 --rc genhtml_function_coverage=1 00:06:38.519 --rc genhtml_legend=1 00:06:38.519 --rc geninfo_all_blocks=1 00:06:38.519 --rc geninfo_unexecuted_blocks=1 00:06:38.519 00:06:38.519 ' 00:06:38.519 14:22:30 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:38.519 14:22:30 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1239239 00:06:38.519 14:22:30 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:38.519 14:22:30 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:38.519 14:22:30 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 
1239239 00:06:38.519 14:22:30 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 1239239 ']' 00:06:38.519 14:22:30 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.519 14:22:30 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:38.519 14:22:30 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.519 14:22:30 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:38.519 14:22:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:38.519 [2024-11-02 14:22:30.415635] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:38.519 [2024-11-02 14:22:30.415742] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1239239 ] 00:06:38.519 [2024-11-02 14:22:30.475220] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:38.519 [2024-11-02 14:22:30.571251] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.519 [2024-11-02 14:22:30.571318] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.519 [2024-11-02 14:22:30.571376] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:38.519 [2024-11-02 14:22:30.571379] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:38.777 14:22:30 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:38.777 14:22:30 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:06:38.777 14:22:30 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:38.777 14:22:30 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.777 14:22:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:38.777 [2024-11-02 14:22:30.668273] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:38.777 [2024-11-02 14:22:30.668300] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:38.777 [2024-11-02 14:22:30.668318] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:38.777 [2024-11-02 14:22:30.668329] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:38.777 [2024-11-02 14:22:30.668339] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:38.777 14:22:30 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.777 14:22:30 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:38.777 14:22:30 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.777 14:22:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:38.777 [2024-11-02 14:22:30.765451] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
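Because the scheduler test app was launched with --wait-for-rpc, the framework stays paused until RPCs pick the dynamic scheduler and then complete initialization; the NOTICE lines about load limit 20, core limit 80 and core busy 95 are the dynamic scheduler's defaults, and the dpdk governor ERROR matches its own message about the core mask covering only part of an SMT sibling set. The same sequence can be replayed against any SPDK app, roughly (framework_get_reactors is just a convenient way to inspect the result and is not part of this test):

  ./build/bin/spdk_tgt -m 0xF --wait-for-rpc &
  ./scripts/rpc.py framework_set_scheduler dynamic
  ./scripts/rpc.py framework_start_init
  ./scripts/rpc.py framework_get_reactors    # see where the lightweight threads landed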
00:06:38.777 14:22:30 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.777 14:22:30 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:38.777 14:22:30 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:38.777 14:22:30 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:38.777 14:22:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:38.777 ************************************ 00:06:38.777 START TEST scheduler_create_thread 00:06:38.777 ************************************ 00:06:38.777 14:22:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:06:38.777 14:22:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:38.777 14:22:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.777 14:22:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:38.777 2 00:06:38.777 14:22:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.777 14:22:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:38.777 14:22:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.777 14:22:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:38.777 3 00:06:38.777 14:22:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.777 14:22:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:38.777 14:22:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.777 14:22:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:38.777 4 00:06:38.777 14:22:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.777 14:22:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:38.777 14:22:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.777 14:22:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:39.035 5 00:06:39.035 14:22:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.035 14:22:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:39.035 14:22:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.035 14:22:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:39.035 6 00:06:39.035 14:22:30 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.035 14:22:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:39.035 14:22:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.035 14:22:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:39.036 7 00:06:39.036 14:22:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.036 14:22:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:39.036 14:22:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.036 14:22:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:39.036 8 00:06:39.036 14:22:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.036 14:22:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:39.036 14:22:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.036 14:22:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:39.036 9 00:06:39.036 14:22:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.036 14:22:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:39.036 14:22:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.036 14:22:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:39.036 10 00:06:39.036 14:22:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.036 14:22:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:39.036 14:22:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.036 14:22:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:39.036 14:22:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.036 14:22:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:39.036 14:22:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:39.036 14:22:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.036 14:22:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:39.601 14:22:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.601 14:22:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:39.601 14:22:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.601 14:22:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:40.972 14:22:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.972 14:22:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:40.972 14:22:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:40.972 14:22:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.972 14:22:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.905 14:22:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.905 00:06:41.905 real 0m3.102s 00:06:41.905 user 0m0.009s 00:06:41.905 sys 0m0.006s 00:06:41.905 14:22:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:41.905 14:22:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.905 ************************************ 00:06:41.905 END TEST scheduler_create_thread 00:06:41.905 ************************************ 00:06:41.905 14:22:33 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:41.905 14:22:33 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1239239 00:06:41.905 14:22:33 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 1239239 ']' 00:06:41.905 14:22:33 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 1239239 00:06:41.905 14:22:33 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:06:41.905 14:22:33 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:41.905 14:22:33 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1239239 00:06:41.905 14:22:33 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:41.905 14:22:33 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:41.905 14:22:33 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1239239' 00:06:41.905 killing process with pid 1239239 00:06:41.905 14:22:33 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 1239239 00:06:41.905 14:22:33 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 1239239 00:06:42.470 [2024-11-02 14:22:34.277448] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
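The scheduler_create_thread subtest above is driven entirely through the test app's RPC plugin: scheduler_thread_create spawns SPDK threads with a name, cpumask and what looks like an activity percentage (-a 100 busy, -a 30 one-third busy, -a 0 idle), scheduler_thread_set_active later changes a thread's load, and scheduler_thread_delete removes it, which lets the dynamic scheduler be watched while it rebalances. With the test app still running, the same calls can be issued by hand, assuming the plugin module is importable the way the harness's rpc_cmd wrapper arranges; the thread names below are made up:

  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n busy0 -m 0x1 -a 100   # pinned, ~100% active
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n idle0 -m 0x2 -a 0     # pinned, idle
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active <thread_id> 50       # id returned by create
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete <thread_id>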
00:06:42.729 00:06:42.729 real 0m4.302s 00:06:42.729 user 0m7.054s 00:06:42.729 sys 0m0.353s 00:06:42.729 14:22:34 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:42.729 14:22:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:42.729 ************************************ 00:06:42.729 END TEST event_scheduler 00:06:42.729 ************************************ 00:06:42.729 14:22:34 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:42.729 14:22:34 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:42.729 14:22:34 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:42.729 14:22:34 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:42.729 14:22:34 event -- common/autotest_common.sh@10 -- # set +x 00:06:42.729 ************************************ 00:06:42.729 START TEST app_repeat 00:06:42.729 ************************************ 00:06:42.729 14:22:34 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:06:42.729 14:22:34 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.729 14:22:34 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:42.729 14:22:34 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:42.729 14:22:34 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:42.729 14:22:34 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:42.729 14:22:34 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:42.729 14:22:34 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:42.729 14:22:34 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1239814 00:06:42.729 14:22:34 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:42.729 14:22:34 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:42.729 14:22:34 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1239814' 00:06:42.729 Process app_repeat pid: 1239814 00:06:42.730 14:22:34 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:42.730 14:22:34 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:42.730 spdk_app_start Round 0 00:06:42.730 14:22:34 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1239814 /var/tmp/spdk-nbd.sock 00:06:42.730 14:22:34 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1239814 ']' 00:06:42.730 14:22:34 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:42.730 14:22:34 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:42.730 14:22:34 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:42.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:42.730 14:22:34 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:42.730 14:22:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:42.730 [2024-11-02 14:22:34.614642] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
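The app_repeat test starting up here brings the app up and down for several rounds; in each round it creates two 64 MiB malloc bdevs, exports them as /dev/nbd0 and /dev/nbd1, writes 1 MiB of random data to each and compares it back, then stops the nbd devices and kills the app instance so the next round starts clean. The RPC-level flow per round looks roughly like this (the temp-file path is a stand-in for the file the test keeps under test/event):

  sudo modprobe nbd
  RPC="./scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  $RPC bdev_malloc_create 64 4096                  # 64 MiB bdev, 4 KiB blocks -> Malloc0
  $RPC bdev_malloc_create 64 4096                  # -> Malloc1
  $RPC nbd_start_disk Malloc0 /dev/nbd0
  $RPC nbd_start_disk Malloc1 /dev/nbd1
  dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
  for d in /dev/nbd0 /dev/nbd1; do
      dd if=/tmp/nbdrandtest of=$d bs=4096 count=256 oflag=direct
      cmp -b -n 1M /tmp/nbdrandtest $d             # verify the data round-trips
  done
  $RPC nbd_stop_disk /dev/nbd0
  $RPC nbd_stop_disk /dev/nbd1
  $RPC nbd_get_disks                               # back to an empty list
  $RPC spdk_kill_instance SIGTERM                  # ends the round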
00:06:42.730 [2024-11-02 14:22:34.614702] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1239814 ] 00:06:42.730 [2024-11-02 14:22:34.677138] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:42.730 [2024-11-02 14:22:34.768833] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.730 [2024-11-02 14:22:34.768839] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.988 14:22:34 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:42.988 14:22:34 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:42.988 14:22:34 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:43.246 Malloc0 00:06:43.246 14:22:35 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:43.504 Malloc1 00:06:43.504 14:22:35 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:43.504 14:22:35 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:43.504 14:22:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:43.504 14:22:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:43.504 14:22:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:43.504 14:22:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:43.504 14:22:35 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:43.504 14:22:35 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:43.504 14:22:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:43.504 14:22:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:43.504 14:22:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:43.504 14:22:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:43.504 14:22:35 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:43.504 14:22:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:43.504 14:22:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:43.504 14:22:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:43.762 /dev/nbd0 00:06:43.762 14:22:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:43.762 14:22:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:43.762 14:22:35 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:43.762 14:22:35 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:43.762 14:22:35 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:43.762 14:22:35 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:43.762 14:22:35 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 
/proc/partitions 00:06:43.762 14:22:35 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:43.762 14:22:35 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:43.762 14:22:35 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:43.762 14:22:35 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:43.762 1+0 records in 00:06:43.762 1+0 records out 00:06:43.762 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000190256 s, 21.5 MB/s 00:06:43.762 14:22:35 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:43.762 14:22:35 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:43.762 14:22:35 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:43.762 14:22:35 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:43.762 14:22:35 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:43.762 14:22:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:43.762 14:22:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:43.762 14:22:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:44.327 /dev/nbd1 00:06:44.327 14:22:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:44.327 14:22:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:44.327 14:22:36 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:44.327 14:22:36 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:44.327 14:22:36 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:44.327 14:22:36 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:44.327 14:22:36 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:44.327 14:22:36 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:44.327 14:22:36 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:44.327 14:22:36 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:44.327 14:22:36 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:44.327 1+0 records in 00:06:44.327 1+0 records out 00:06:44.327 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000202917 s, 20.2 MB/s 00:06:44.327 14:22:36 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:44.327 14:22:36 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:44.327 14:22:36 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:44.327 14:22:36 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:44.327 14:22:36 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:44.327 14:22:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:44.327 14:22:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:44.327 
14:22:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:44.327 14:22:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.327 14:22:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:44.585 14:22:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:44.585 { 00:06:44.585 "nbd_device": "/dev/nbd0", 00:06:44.585 "bdev_name": "Malloc0" 00:06:44.585 }, 00:06:44.585 { 00:06:44.585 "nbd_device": "/dev/nbd1", 00:06:44.585 "bdev_name": "Malloc1" 00:06:44.585 } 00:06:44.585 ]' 00:06:44.585 14:22:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:44.585 { 00:06:44.585 "nbd_device": "/dev/nbd0", 00:06:44.585 "bdev_name": "Malloc0" 00:06:44.585 }, 00:06:44.585 { 00:06:44.585 "nbd_device": "/dev/nbd1", 00:06:44.585 "bdev_name": "Malloc1" 00:06:44.585 } 00:06:44.585 ]' 00:06:44.585 14:22:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:44.585 14:22:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:44.585 /dev/nbd1' 00:06:44.585 14:22:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:44.585 /dev/nbd1' 00:06:44.585 14:22:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:44.585 14:22:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:44.585 14:22:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:44.585 14:22:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:44.585 14:22:36 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:44.585 14:22:36 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:44.585 14:22:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.585 14:22:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:44.585 14:22:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:44.585 14:22:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:44.585 14:22:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:44.585 14:22:36 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:44.585 256+0 records in 00:06:44.585 256+0 records out 00:06:44.585 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00407153 s, 258 MB/s 00:06:44.585 14:22:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:44.585 14:22:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:44.585 256+0 records in 00:06:44.585 256+0 records out 00:06:44.585 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0226458 s, 46.3 MB/s 00:06:44.585 14:22:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:44.585 14:22:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:44.585 256+0 records in 00:06:44.585 256+0 records out 00:06:44.585 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0218205 s, 48.1 MB/s 00:06:44.585 14:22:36 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:44.585 14:22:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.585 14:22:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:44.585 14:22:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:44.585 14:22:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:44.585 14:22:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:44.585 14:22:36 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:44.585 14:22:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:44.585 14:22:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:44.585 14:22:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:44.585 14:22:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:44.585 14:22:36 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:44.585 14:22:36 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:44.585 14:22:36 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.585 14:22:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.585 14:22:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:44.585 14:22:36 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:44.586 14:22:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:44.586 14:22:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:44.843 14:22:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:44.843 14:22:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:44.843 14:22:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:44.843 14:22:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:44.843 14:22:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:44.843 14:22:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:44.843 14:22:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:44.843 14:22:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:44.843 14:22:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:44.844 14:22:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:45.101 14:22:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:45.101 14:22:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:45.101 14:22:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:45.101 14:22:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:45.101 14:22:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:06:45.101 14:22:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:45.101 14:22:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:45.101 14:22:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:45.101 14:22:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:45.101 14:22:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.101 14:22:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:45.358 14:22:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:45.358 14:22:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:45.358 14:22:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:45.616 14:22:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:45.616 14:22:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:45.616 14:22:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:45.616 14:22:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:45.616 14:22:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:45.616 14:22:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:45.616 14:22:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:45.616 14:22:37 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:45.616 14:22:37 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:45.616 14:22:37 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:45.874 14:22:37 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:46.133 [2024-11-02 14:22:37.939407] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:46.133 [2024-11-02 14:22:38.029660] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.133 [2024-11-02 14:22:38.029667] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:46.133 [2024-11-02 14:22:38.090112] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:46.133 [2024-11-02 14:22:38.090182] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:48.659 14:22:40 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:48.659 14:22:40 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:48.659 spdk_app_start Round 1 00:06:48.659 14:22:40 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1239814 /var/tmp/spdk-nbd.sock 00:06:48.659 14:22:40 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1239814 ']' 00:06:48.659 14:22:40 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:48.659 14:22:40 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:48.659 14:22:40 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:48.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
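The write/verify pass traced above (and repeated in every round below) boils down to a small amount of shell: fill a scratch file with random data, dd it onto each exported /dev/nbdX, compare byte-for-byte, then detach each device and poll /proc/partitions until the kernel drops it. A minimal sketch of that flow, assuming rpc.py lives under ./scripts and using the NBD socket path from the trace; it is an illustration, not the in-tree nbd_common.sh helpers:

    # Sketch of the NBD write/verify + teardown cycle seen in the trace (assumed paths).
    RPC=./scripts/rpc.py                 # assumed location of SPDK's rpc.py
    SOCK=/var/tmp/spdk-nbd.sock          # socket path used throughout the trace
    TMP=$(mktemp /tmp/nbdrandtest.XXXXXX)
    nbd_list=(/dev/nbd0 /dev/nbd1)

    dd if=/dev/urandom of="$TMP" bs=4096 count=256             # 1 MiB of random data
    for dev in "${nbd_list[@]}"; do
        dd if="$TMP" of="$dev" bs=4096 count=256 oflag=direct   # write it through each NBD device
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$TMP" "$dev"                              # byte-for-byte verification
    done
    rm -f "$TMP"

    for dev in "${nbd_list[@]}"; do
        name=$(basename "$dev")
        "$RPC" -s "$SOCK" nbd_stop_disk "$dev"
        for i in $(seq 1 20); do                                # poll up to 20 times, like waitfornbd_exit
            grep -q -w "$name" /proc/partitions || break        # stop once the device is gone
            sleep 0.1
        done
    done

cmp exits non-zero on the first mismatching byte, which is what would make silent data corruption fail the round immediately.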
00:06:48.659 14:22:40 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:48.659 14:22:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:48.917 14:22:40 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:48.917 14:22:40 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:49.176 14:22:40 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:49.433 Malloc0 00:06:49.433 14:22:41 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:49.691 Malloc1 00:06:49.691 14:22:41 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:49.691 14:22:41 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.691 14:22:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:49.691 14:22:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:49.691 14:22:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:49.691 14:22:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:49.691 14:22:41 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:49.691 14:22:41 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.691 14:22:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:49.692 14:22:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:49.692 14:22:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:49.692 14:22:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:49.692 14:22:41 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:49.692 14:22:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:49.692 14:22:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:49.692 14:22:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:49.949 /dev/nbd0 00:06:49.949 14:22:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:49.949 14:22:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:49.949 14:22:41 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:49.949 14:22:41 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:49.949 14:22:41 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:49.949 14:22:41 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:49.949 14:22:41 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:49.949 14:22:41 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:49.949 14:22:41 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:49.949 14:22:41 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:49.949 14:22:41 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:49.949 1+0 records in 00:06:49.949 1+0 records out 00:06:49.949 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236668 s, 17.3 MB/s 00:06:49.949 14:22:41 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:49.949 14:22:41 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:49.949 14:22:41 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:49.949 14:22:41 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:49.949 14:22:41 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:49.949 14:22:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:49.949 14:22:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:49.949 14:22:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:50.207 /dev/nbd1 00:06:50.207 14:22:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:50.207 14:22:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:50.207 14:22:42 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:50.207 14:22:42 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:50.207 14:22:42 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:50.207 14:22:42 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:50.207 14:22:42 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:50.207 14:22:42 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:50.207 14:22:42 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:50.207 14:22:42 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:50.207 14:22:42 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:50.207 1+0 records in 00:06:50.207 1+0 records out 00:06:50.207 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000196092 s, 20.9 MB/s 00:06:50.207 14:22:42 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:50.207 14:22:42 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:50.207 14:22:42 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:50.207 14:22:42 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:50.207 14:22:42 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:50.207 14:22:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:50.207 14:22:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:50.207 14:22:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:50.207 14:22:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.207 14:22:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:50.465 14:22:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:50.465 { 00:06:50.465 "nbd_device": "/dev/nbd0", 00:06:50.465 "bdev_name": "Malloc0" 00:06:50.465 }, 00:06:50.465 { 00:06:50.465 "nbd_device": "/dev/nbd1", 00:06:50.465 "bdev_name": "Malloc1" 00:06:50.465 } 00:06:50.465 ]' 00:06:50.465 14:22:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:50.465 { 00:06:50.465 "nbd_device": "/dev/nbd0", 00:06:50.465 "bdev_name": "Malloc0" 00:06:50.465 }, 00:06:50.465 { 00:06:50.465 "nbd_device": "/dev/nbd1", 00:06:50.465 "bdev_name": "Malloc1" 00:06:50.465 } 00:06:50.465 ]' 00:06:50.465 14:22:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:50.465 14:22:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:50.465 /dev/nbd1' 00:06:50.465 14:22:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:50.465 /dev/nbd1' 00:06:50.465 14:22:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:50.465 14:22:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:50.465 14:22:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:50.465 14:22:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:50.465 14:22:42 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:50.465 14:22:42 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:50.465 14:22:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.465 14:22:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:50.465 14:22:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:50.465 14:22:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:50.465 14:22:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:50.465 14:22:42 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:50.465 256+0 records in 00:06:50.465 256+0 records out 00:06:50.465 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00515116 s, 204 MB/s 00:06:50.465 14:22:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:50.465 14:22:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:50.723 256+0 records in 00:06:50.723 256+0 records out 00:06:50.723 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0208802 s, 50.2 MB/s 00:06:50.723 14:22:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:50.723 14:22:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:50.723 256+0 records in 00:06:50.723 256+0 records out 00:06:50.723 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0238222 s, 44.0 MB/s 00:06:50.723 14:22:42 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:50.723 14:22:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.723 14:22:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:50.723 14:22:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:50.723 14:22:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:50.723 14:22:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:50.723 14:22:42 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:50.723 14:22:42 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:50.723 14:22:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:50.723 14:22:42 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:50.723 14:22:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:50.723 14:22:42 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:50.723 14:22:42 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:50.723 14:22:42 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.723 14:22:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.723 14:22:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:50.723 14:22:42 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:50.723 14:22:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:50.723 14:22:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:50.981 14:22:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:50.981 14:22:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:50.981 14:22:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:50.981 14:22:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:50.981 14:22:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:50.981 14:22:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:50.981 14:22:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:50.981 14:22:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:50.981 14:22:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:50.981 14:22:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:51.238 14:22:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:51.238 14:22:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:51.239 14:22:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:51.239 14:22:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:51.239 14:22:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:51.239 14:22:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:51.239 14:22:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:51.239 14:22:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:51.239 14:22:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:51.239 14:22:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.239 14:22:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:51.496 14:22:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:51.496 14:22:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:51.496 14:22:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:51.496 14:22:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:51.496 14:22:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:51.496 14:22:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:51.496 14:22:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:51.496 14:22:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:51.496 14:22:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:51.496 14:22:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:51.496 14:22:43 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:51.496 14:22:43 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:51.496 14:22:43 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:51.754 14:22:43 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:52.013 [2024-11-02 14:22:43.984477] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:52.271 [2024-11-02 14:22:44.076211] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:52.271 [2024-11-02 14:22:44.076215] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.271 [2024-11-02 14:22:44.139007] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:52.271 [2024-11-02 14:22:44.139093] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:54.795 14:22:46 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:54.795 14:22:46 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:54.795 spdk_app_start Round 2 00:06:54.795 14:22:46 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1239814 /var/tmp/spdk-nbd.sock 00:06:54.795 14:22:46 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1239814 ']' 00:06:54.795 14:22:46 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:54.795 14:22:46 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:54.795 14:22:46 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:54.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
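Between the start and stop steps the test keeps asking the target how many NBD devices it is actually exporting, by piping nbd_get_disks through jq and counting the /dev/nbd entries (the grep -c and count=2 lines above). A short sketch of that check, with the rpc.py path as an assumption:

    # Count exported NBD devices and compare with what we expect (sketch).
    RPC=./scripts/rpc.py
    SOCK=/var/tmp/spdk-nbd.sock
    expected=2

    json=$("$RPC" -s "$SOCK" nbd_get_disks)              # e.g. [{"nbd_device": "/dev/nbd0", "bdev_name": "Malloc0"}, ...]
    names=$(echo "$json" | jq -r '.[] | .nbd_device')    # one device path per line
    count=$(echo "$names" | grep -c /dev/nbd || true)    # grep -c prints 0 (and exits 1) when nothing matches

    if [ "$count" -ne "$expected" ]; then
        echo "expected $expected NBD devices, found $count" >&2
        exit 1
    fi

After the devices are stopped the same check runs again with an expected count of 0, which is where the true / count=0 lines in the trace come from.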
00:06:54.795 14:22:46 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:54.795 14:22:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:55.053 14:22:47 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:55.053 14:22:47 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:55.053 14:22:47 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:55.311 Malloc0 00:06:55.311 14:22:47 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:55.569 Malloc1 00:06:55.569 14:22:47 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:55.569 14:22:47 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.569 14:22:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:55.569 14:22:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:55.569 14:22:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:55.569 14:22:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:55.569 14:22:47 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:55.569 14:22:47 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.569 14:22:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:55.569 14:22:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:55.569 14:22:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:55.569 14:22:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:55.569 14:22:47 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:55.569 14:22:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:55.569 14:22:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:55.569 14:22:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:56.135 /dev/nbd0 00:06:56.135 14:22:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:56.135 14:22:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:56.135 14:22:47 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:56.135 14:22:47 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:56.135 14:22:47 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:56.135 14:22:47 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:56.135 14:22:47 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:56.135 14:22:47 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:56.135 14:22:47 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:56.135 14:22:47 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:56.135 14:22:47 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:56.135 1+0 records in 00:06:56.135 1+0 records out 00:06:56.135 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000163696 s, 25.0 MB/s 00:06:56.135 14:22:47 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:56.135 14:22:47 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:56.135 14:22:47 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:56.135 14:22:47 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:56.135 14:22:47 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:56.135 14:22:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:56.135 14:22:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:56.135 14:22:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:56.392 /dev/nbd1 00:06:56.392 14:22:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:56.393 14:22:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:56.393 14:22:48 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:56.393 14:22:48 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:56.393 14:22:48 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:56.393 14:22:48 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:56.393 14:22:48 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:56.393 14:22:48 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:56.393 14:22:48 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:56.393 14:22:48 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:56.393 14:22:48 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:56.393 1+0 records in 00:06:56.393 1+0 records out 00:06:56.393 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00020486 s, 20.0 MB/s 00:06:56.393 14:22:48 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:56.393 14:22:48 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:56.393 14:22:48 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:56.393 14:22:48 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:56.393 14:22:48 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:56.393 14:22:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:56.393 14:22:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:56.393 14:22:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:56.393 14:22:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.393 14:22:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:56.684 14:22:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:56.684 { 00:06:56.684 "nbd_device": "/dev/nbd0", 00:06:56.684 "bdev_name": "Malloc0" 00:06:56.684 }, 00:06:56.684 { 00:06:56.684 "nbd_device": "/dev/nbd1", 00:06:56.684 "bdev_name": "Malloc1" 00:06:56.684 } 00:06:56.684 ]' 00:06:56.684 14:22:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:56.684 { 00:06:56.684 "nbd_device": "/dev/nbd0", 00:06:56.684 "bdev_name": "Malloc0" 00:06:56.684 }, 00:06:56.684 { 00:06:56.684 "nbd_device": "/dev/nbd1", 00:06:56.684 "bdev_name": "Malloc1" 00:06:56.684 } 00:06:56.684 ]' 00:06:56.684 14:22:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:56.684 14:22:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:56.684 /dev/nbd1' 00:06:56.684 14:22:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:56.684 /dev/nbd1' 00:06:56.684 14:22:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:56.684 14:22:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:56.684 14:22:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:56.684 14:22:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:56.684 14:22:48 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:56.684 14:22:48 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:56.684 14:22:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:56.684 14:22:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:56.684 14:22:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:56.684 14:22:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:56.684 14:22:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:56.684 14:22:48 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:56.684 256+0 records in 00:06:56.684 256+0 records out 00:06:56.684 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00499478 s, 210 MB/s 00:06:56.684 14:22:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:56.684 14:22:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:56.684 256+0 records in 00:06:56.684 256+0 records out 00:06:56.684 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0227557 s, 46.1 MB/s 00:06:56.684 14:22:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:56.684 14:22:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:56.684 256+0 records in 00:06:56.684 256+0 records out 00:06:56.684 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0239481 s, 43.8 MB/s 00:06:56.684 14:22:48 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:56.684 14:22:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:56.684 14:22:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:56.684 14:22:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:56.684 14:22:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:56.684 14:22:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:56.684 14:22:48 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:56.684 14:22:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:56.684 14:22:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:56.684 14:22:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:56.684 14:22:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:56.684 14:22:48 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:56.684 14:22:48 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:56.684 14:22:48 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.684 14:22:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:56.684 14:22:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:56.684 14:22:48 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:56.684 14:22:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:56.684 14:22:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:56.968 14:22:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:56.968 14:22:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:56.968 14:22:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:56.968 14:22:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:56.968 14:22:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:56.968 14:22:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:56.968 14:22:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:56.968 14:22:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:56.968 14:22:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:56.968 14:22:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:57.226 14:22:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:57.226 14:22:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:57.226 14:22:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:57.226 14:22:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:57.226 14:22:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:57.226 14:22:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:57.226 14:22:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:57.226 14:22:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:57.226 14:22:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:57.226 14:22:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:57.226 14:22:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:57.483 14:22:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:57.483 14:22:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:57.483 14:22:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:57.742 14:22:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:57.742 14:22:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:57.742 14:22:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:57.742 14:22:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:57.742 14:22:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:57.742 14:22:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:57.742 14:22:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:57.742 14:22:49 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:57.742 14:22:49 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:57.742 14:22:49 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:58.009 14:22:49 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:58.267 [2024-11-02 14:22:50.092342] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:58.267 [2024-11-02 14:22:50.185755] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:58.267 [2024-11-02 14:22:50.185755] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.267 [2024-11-02 14:22:50.250146] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:58.267 [2024-11-02 14:22:50.250234] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:01.544 14:22:52 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1239814 /var/tmp/spdk-nbd.sock 00:07:01.544 14:22:52 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1239814 ']' 00:07:01.544 14:22:52 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:01.544 14:22:52 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:01.544 14:22:52 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:01.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
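Each app_repeat round the log walks through follows the same script: create two 64 MiB malloc bdevs with a 4 KiB block size, export them as /dev/nbd0 and /dev/nbd1, run the verify pass, detach, then ask the app to restart itself with spdk_kill_instance SIGTERM before sleeping into the next round. A condensed sketch of that loop, assuming rpc.py is reachable at ./scripts/rpc.py and with the real test's waitforlisten step replaced by a plain sleep:

    # One pass over the app_repeat round loop (sketch; verify step shown earlier).
    RPC=./scripts/rpc.py
    SOCK=/var/tmp/spdk-nbd.sock

    for round in 0 1 2; do
        echo "spdk_app_start Round $round"
        "$RPC" -s "$SOCK" bdev_malloc_create 64 4096        # -> Malloc0
        "$RPC" -s "$SOCK" bdev_malloc_create 64 4096        # -> Malloc1
        "$RPC" -s "$SOCK" nbd_start_disk Malloc0 /dev/nbd0
        "$RPC" -s "$SOCK" nbd_start_disk Malloc1 /dev/nbd1

        # ... write/verify and nbd_stop_disk steps (see the earlier sketch) ...

        "$RPC" -s "$SOCK" spdk_kill_instance SIGTERM        # app_repeat restarts the app
        sleep 3                                             # the real test waits for the socket instead
    done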
00:07:01.544 14:22:52 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:01.544 14:22:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:01.544 14:22:53 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:01.544 14:22:53 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:01.544 14:22:53 event.app_repeat -- event/event.sh@39 -- # killprocess 1239814 00:07:01.544 14:22:53 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 1239814 ']' 00:07:01.544 14:22:53 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 1239814 00:07:01.544 14:22:53 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:07:01.544 14:22:53 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:01.544 14:22:53 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1239814 00:07:01.544 14:22:53 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:01.544 14:22:53 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:01.544 14:22:53 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1239814' 00:07:01.544 killing process with pid 1239814 00:07:01.544 14:22:53 event.app_repeat -- common/autotest_common.sh@969 -- # kill 1239814 00:07:01.544 14:22:53 event.app_repeat -- common/autotest_common.sh@974 -- # wait 1239814 00:07:01.544 spdk_app_start is called in Round 0. 00:07:01.544 Shutdown signal received, stop current app iteration 00:07:01.544 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:07:01.544 spdk_app_start is called in Round 1. 00:07:01.544 Shutdown signal received, stop current app iteration 00:07:01.545 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:07:01.545 spdk_app_start is called in Round 2. 00:07:01.545 Shutdown signal received, stop current app iteration 00:07:01.545 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:07:01.545 spdk_app_start is called in Round 3. 
00:07:01.545 Shutdown signal received, stop current app iteration 00:07:01.545 14:22:53 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:01.545 14:22:53 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:01.545 00:07:01.545 real 0m18.815s 00:07:01.545 user 0m41.287s 00:07:01.545 sys 0m3.297s 00:07:01.545 14:22:53 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:01.545 14:22:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:01.545 ************************************ 00:07:01.545 END TEST app_repeat 00:07:01.545 ************************************ 00:07:01.545 14:22:53 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:01.545 14:22:53 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:01.545 14:22:53 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:01.545 14:22:53 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:01.545 14:22:53 event -- common/autotest_common.sh@10 -- # set +x 00:07:01.545 ************************************ 00:07:01.545 START TEST cpu_locks 00:07:01.545 ************************************ 00:07:01.545 14:22:53 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:01.545 * Looking for test storage... 00:07:01.545 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:01.545 14:22:53 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:01.545 14:22:53 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:07:01.545 14:22:53 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:01.545 14:22:53 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:01.545 14:22:53 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:01.545 14:22:53 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:01.545 14:22:53 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:01.545 14:22:53 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:01.545 14:22:53 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:01.545 14:22:53 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:01.545 14:22:53 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:01.545 14:22:53 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:01.545 14:22:53 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:01.545 14:22:53 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:01.545 14:22:53 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:01.545 14:22:53 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:01.545 14:22:53 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:01.545 14:22:53 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:01.545 14:22:53 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:01.545 14:22:53 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:01.545 14:22:53 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:01.545 14:22:53 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:01.545 14:22:53 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:01.545 14:22:53 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:01.545 14:22:53 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:01.545 14:22:53 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:01.545 14:22:53 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:01.545 14:22:53 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:01.545 14:22:53 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:01.545 14:22:53 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:01.545 14:22:53 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:01.545 14:22:53 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:01.545 14:22:53 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:01.545 14:22:53 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:01.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.545 --rc genhtml_branch_coverage=1 00:07:01.545 --rc genhtml_function_coverage=1 00:07:01.545 --rc genhtml_legend=1 00:07:01.545 --rc geninfo_all_blocks=1 00:07:01.545 --rc geninfo_unexecuted_blocks=1 00:07:01.545 00:07:01.545 ' 00:07:01.545 14:22:53 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:01.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.545 --rc genhtml_branch_coverage=1 00:07:01.545 --rc genhtml_function_coverage=1 00:07:01.545 --rc genhtml_legend=1 00:07:01.545 --rc geninfo_all_blocks=1 00:07:01.545 --rc geninfo_unexecuted_blocks=1 00:07:01.545 00:07:01.545 ' 00:07:01.545 14:22:53 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:01.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.545 --rc genhtml_branch_coverage=1 00:07:01.545 --rc genhtml_function_coverage=1 00:07:01.545 --rc genhtml_legend=1 00:07:01.545 --rc geninfo_all_blocks=1 00:07:01.545 --rc geninfo_unexecuted_blocks=1 00:07:01.545 00:07:01.545 ' 00:07:01.545 14:22:53 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:01.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.545 --rc genhtml_branch_coverage=1 00:07:01.545 --rc genhtml_function_coverage=1 00:07:01.545 --rc genhtml_legend=1 00:07:01.545 --rc geninfo_all_blocks=1 00:07:01.545 --rc geninfo_unexecuted_blocks=1 00:07:01.545 00:07:01.545 ' 00:07:01.545 14:22:53 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:01.545 14:22:53 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:01.545 14:22:53 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:01.545 14:22:53 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:01.545 14:22:53 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:01.545 14:22:53 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:01.545 14:22:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:01.804 ************************************ 
00:07:01.804 START TEST default_locks 00:07:01.804 ************************************ 00:07:01.804 14:22:53 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:07:01.804 14:22:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1242305 00:07:01.804 14:22:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:01.804 14:22:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1242305 00:07:01.804 14:22:53 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 1242305 ']' 00:07:01.804 14:22:53 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.804 14:22:53 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:01.804 14:22:53 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.804 14:22:53 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:01.804 14:22:53 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:01.804 [2024-11-02 14:22:53.679466] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:01.804 [2024-11-02 14:22:53.679545] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1242305 ] 00:07:01.804 [2024-11-02 14:22:53.741871] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.804 [2024-11-02 14:22:53.832148] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.062 14:22:54 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:02.062 14:22:54 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:07:02.062 14:22:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1242305 00:07:02.062 14:22:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1242305 00:07:02.062 14:22:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:02.627 lslocks: write error 00:07:02.627 14:22:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1242305 00:07:02.628 14:22:54 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 1242305 ']' 00:07:02.628 14:22:54 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 1242305 00:07:02.628 14:22:54 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:07:02.628 14:22:54 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:02.628 14:22:54 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1242305 00:07:02.628 14:22:54 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:02.628 14:22:54 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:02.628 14:22:54 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with 
pid 1242305' 00:07:02.628 killing process with pid 1242305 00:07:02.628 14:22:54 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 1242305 00:07:02.628 14:22:54 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 1242305 00:07:02.886 14:22:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1242305 00:07:02.886 14:22:54 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:07:02.886 14:22:54 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1242305 00:07:02.886 14:22:54 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:02.886 14:22:54 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.886 14:22:54 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:02.886 14:22:54 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.886 14:22:54 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 1242305 00:07:02.886 14:22:54 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 1242305 ']' 00:07:02.886 14:22:54 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.886 14:22:54 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:02.886 14:22:54 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
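The default_locks test above reduces to one observable fact: a target started with -m 0x1 must hold a file lock whose name contains spdk_cpu_lock, visible through lslocks for that PID. A stripped-down sketch of the check, with the spdk_tgt path and the sleep as assumptions:

    # Does the running target hold its CPU-core lock? (sketch, assumed binary path)
    ./build/bin/spdk_tgt -m 0x1 &
    pid=$!
    sleep 2                                   # stand-in for the real waitforlisten

    if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
        echo "core lock held by pid $pid"
    else
        echo "no spdk_cpu_lock found for pid $pid" >&2
        kill "$pid"; exit 1
    fi

    kill "$pid"                               # killprocess in the real test
    wait "$pid" 2>/dev/null || true

The stray "lslocks: write error" line in the trace is most likely lslocks complaining about a closed pipe, since grep -q exits as soon as it sees a match; it is not a test failure.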
00:07:02.886 14:22:54 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:02.886 14:22:54 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:02.886 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1242305) - No such process 00:07:02.886 ERROR: process (pid: 1242305) is no longer running 00:07:02.886 14:22:54 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:02.886 14:22:54 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:07:02.886 14:22:54 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:07:02.886 14:22:54 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:02.886 14:22:54 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:02.886 14:22:54 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:02.886 14:22:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:02.886 14:22:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:02.886 14:22:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:02.886 14:22:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:02.886 00:07:02.886 real 0m1.265s 00:07:02.886 user 0m1.184s 00:07:02.886 sys 0m0.584s 00:07:02.886 14:22:54 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:02.886 14:22:54 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:02.886 ************************************ 00:07:02.886 END TEST default_locks 00:07:02.886 ************************************ 00:07:02.886 14:22:54 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:02.886 14:22:54 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:02.886 14:22:54 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:02.886 14:22:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:02.886 ************************************ 00:07:02.886 START TEST default_locks_via_rpc 00:07:02.886 ************************************ 00:07:02.886 14:22:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:07:02.886 14:22:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1242476 00:07:02.886 14:22:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:02.886 14:22:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1242476 00:07:02.886 14:22:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1242476 ']' 00:07:02.886 14:22:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.886 14:22:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:02.886 14:22:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
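default_locks_via_rpc, whose trace follows, exercises the same lock through the RPC interface instead of start-up behaviour: framework_disable_cpumask_locks drops the per-core lock files and framework_enable_cpumask_locks takes them again, after which lslocks should show spdk_cpu_lock once more. A minimal sketch, assuming rpc.py under ./scripts and a single running target:

    # Toggle the CPU-core locks over RPC and confirm they come back (sketch).
    RPC=./scripts/rpc.py
    SOCK=/var/tmp/spdk.sock

    "$RPC" -s "$SOCK" framework_disable_cpumask_locks     # release the per-core lock files
    "$RPC" -s "$SOCK" framework_enable_cpumask_locks      # re-acquire them

    pid=$(pgrep -f spdk_tgt | head -n1)                    # assumes exactly one target is running
    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock re-acquired by pid $pid"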
00:07:02.886 14:22:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:02.886 14:22:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.145 [2024-11-02 14:22:54.987468] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:03.145 [2024-11-02 14:22:54.987546] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1242476 ] 00:07:03.145 [2024-11-02 14:22:55.047369] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.145 [2024-11-02 14:22:55.137377] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.403 14:22:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:03.403 14:22:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:03.403 14:22:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:03.403 14:22:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.403 14:22:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.403 14:22:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.403 14:22:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:03.403 14:22:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:03.403 14:22:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:03.403 14:22:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:03.403 14:22:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:03.403 14:22:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.403 14:22:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.403 14:22:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.403 14:22:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1242476 00:07:03.403 14:22:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1242476 00:07:03.403 14:22:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:03.969 14:22:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1242476 00:07:03.969 14:22:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 1242476 ']' 00:07:03.969 14:22:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 1242476 00:07:03.969 14:22:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:07:03.969 14:22:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:03.969 14:22:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1242476 00:07:03.969 14:22:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:03.969 
14:22:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:03.969 14:22:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1242476' 00:07:03.969 killing process with pid 1242476 00:07:03.970 14:22:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 1242476 00:07:03.970 14:22:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 1242476 00:07:04.228 00:07:04.228 real 0m1.258s 00:07:04.228 user 0m1.203s 00:07:04.228 sys 0m0.546s 00:07:04.228 14:22:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:04.228 14:22:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.228 ************************************ 00:07:04.228 END TEST default_locks_via_rpc 00:07:04.228 ************************************ 00:07:04.228 14:22:56 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:04.228 14:22:56 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:04.228 14:22:56 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:04.228 14:22:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:04.228 ************************************ 00:07:04.228 START TEST non_locking_app_on_locked_coremask 00:07:04.228 ************************************ 00:07:04.228 14:22:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:07:04.228 14:22:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1242636 00:07:04.229 14:22:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:04.229 14:22:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1242636 /var/tmp/spdk.sock 00:07:04.229 14:22:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1242636 ']' 00:07:04.229 14:22:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.229 14:22:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:04.229 14:22:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.229 14:22:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:04.229 14:22:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:04.487 [2024-11-02 14:22:56.304031] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:04.488 [2024-11-02 14:22:56.304129] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1242636 ] 00:07:04.488 [2024-11-02 14:22:56.361982] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.488 [2024-11-02 14:22:56.450996] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.746 14:22:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:04.746 14:22:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:04.746 14:22:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1242765 00:07:04.746 14:22:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:04.746 14:22:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1242765 /var/tmp/spdk2.sock 00:07:04.746 14:22:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1242765 ']' 00:07:04.746 14:22:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:04.746 14:22:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:04.746 14:22:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:04.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:04.746 14:22:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:04.746 14:22:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:04.746 [2024-11-02 14:22:56.767632] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:04.746 [2024-11-02 14:22:56.767717] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1242765 ] 00:07:05.004 [2024-11-02 14:22:56.859993] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:05.004 [2024-11-02 14:22:56.860026] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.004 [2024-11-02 14:22:57.048898] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.938 14:22:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:05.938 14:22:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:05.938 14:22:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1242636 00:07:05.938 14:22:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1242636 00:07:05.938 14:22:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:06.196 lslocks: write error 00:07:06.196 14:22:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1242636 00:07:06.196 14:22:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1242636 ']' 00:07:06.196 14:22:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1242636 00:07:06.196 14:22:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:06.196 14:22:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:06.197 14:22:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1242636 00:07:06.197 14:22:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:06.197 14:22:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:06.197 14:22:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1242636' 00:07:06.197 killing process with pid 1242636 00:07:06.197 14:22:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1242636 00:07:06.197 14:22:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1242636 00:07:07.130 14:22:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1242765 00:07:07.130 14:22:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1242765 ']' 00:07:07.130 14:22:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1242765 00:07:07.130 14:22:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:07.130 14:22:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:07.130 14:22:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1242765 00:07:07.130 14:22:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:07.130 14:22:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:07.130 14:22:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1242765' 00:07:07.130 
killing process with pid 1242765 00:07:07.130 14:22:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1242765 00:07:07.130 14:22:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1242765 00:07:07.697 00:07:07.697 real 0m3.253s 00:07:07.697 user 0m3.458s 00:07:07.697 sys 0m1.091s 00:07:07.697 14:22:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.697 14:22:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:07.697 ************************************ 00:07:07.697 END TEST non_locking_app_on_locked_coremask 00:07:07.697 ************************************ 00:07:07.697 14:22:59 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:07.697 14:22:59 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:07.697 14:22:59 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:07.697 14:22:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:07.697 ************************************ 00:07:07.697 START TEST locking_app_on_unlocked_coremask 00:07:07.697 ************************************ 00:07:07.697 14:22:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:07:07.697 14:22:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1243070 00:07:07.698 14:22:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1243070 /var/tmp/spdk.sock 00:07:07.698 14:22:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:07.698 14:22:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1243070 ']' 00:07:07.698 14:22:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.698 14:22:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:07.698 14:22:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.698 14:22:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:07.698 14:22:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:07.698 [2024-11-02 14:22:59.606433] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:07.698 [2024-11-02 14:22:59.606511] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1243070 ] 00:07:07.698 [2024-11-02 14:22:59.669402] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:07.698 [2024-11-02 14:22:59.669460] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.962 [2024-11-02 14:22:59.760120] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.220 14:23:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:08.220 14:23:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:08.220 14:23:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1243199 00:07:08.220 14:23:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:08.220 14:23:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1243199 /var/tmp/spdk2.sock 00:07:08.220 14:23:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1243199 ']' 00:07:08.220 14:23:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:08.220 14:23:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:08.221 14:23:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:08.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:08.221 14:23:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:08.221 14:23:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:08.221 [2024-11-02 14:23:00.103663] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:08.221 [2024-11-02 14:23:00.103773] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1243199 ] 00:07:08.221 [2024-11-02 14:23:00.194226] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.479 [2024-11-02 14:23:00.378856] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.412 14:23:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:09.412 14:23:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:09.412 14:23:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1243199 00:07:09.412 14:23:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1243199 00:07:09.412 14:23:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:09.670 lslocks: write error 00:07:09.670 14:23:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1243070 00:07:09.670 14:23:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1243070 ']' 00:07:09.670 14:23:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 1243070 00:07:09.670 14:23:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:09.670 14:23:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:09.670 14:23:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1243070 00:07:09.670 14:23:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:09.670 14:23:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:09.670 14:23:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1243070' 00:07:09.670 killing process with pid 1243070 00:07:09.670 14:23:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 1243070 00:07:09.670 14:23:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 1243070 00:07:10.604 14:23:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1243199 00:07:10.604 14:23:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1243199 ']' 00:07:10.604 14:23:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 1243199 00:07:10.604 14:23:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:10.605 14:23:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:10.605 14:23:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1243199 00:07:10.605 14:23:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:10.605 14:23:02 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:10.605 14:23:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1243199' 00:07:10.605 killing process with pid 1243199 00:07:10.605 14:23:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 1243199 00:07:10.605 14:23:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 1243199 00:07:11.170 00:07:11.170 real 0m3.412s 00:07:11.170 user 0m3.620s 00:07:11.170 sys 0m1.126s 00:07:11.170 14:23:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:11.170 14:23:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:11.170 ************************************ 00:07:11.170 END TEST locking_app_on_unlocked_coremask 00:07:11.170 ************************************ 00:07:11.170 14:23:02 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:11.170 14:23:02 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:11.170 14:23:02 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:11.170 14:23:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:11.170 ************************************ 00:07:11.170 START TEST locking_app_on_locked_coremask 00:07:11.170 ************************************ 00:07:11.170 14:23:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:07:11.170 14:23:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1243527 00:07:11.170 14:23:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:11.170 14:23:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1243527 /var/tmp/spdk.sock 00:07:11.170 14:23:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1243527 ']' 00:07:11.170 14:23:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.170 14:23:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:11.170 14:23:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.170 14:23:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:11.170 14:23:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:11.170 [2024-11-02 14:23:03.071075] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:11.170 [2024-11-02 14:23:03.071168] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1243527 ] 00:07:11.170 [2024-11-02 14:23:03.135510] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.428 [2024-11-02 14:23:03.230754] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.686 14:23:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:11.686 14:23:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:11.686 14:23:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1243652 00:07:11.686 14:23:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:11.687 14:23:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1243652 /var/tmp/spdk2.sock 00:07:11.687 14:23:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:11.687 14:23:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1243652 /var/tmp/spdk2.sock 00:07:11.687 14:23:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:11.687 14:23:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:11.687 14:23:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:11.687 14:23:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:11.687 14:23:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1243652 /var/tmp/spdk2.sock 00:07:11.687 14:23:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1243652 ']' 00:07:11.687 14:23:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:11.687 14:23:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:11.687 14:23:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:11.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:11.687 14:23:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:11.687 14:23:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:11.687 [2024-11-02 14:23:03.556918] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:11.687 [2024-11-02 14:23:03.557002] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1243652 ] 00:07:11.687 [2024-11-02 14:23:03.652456] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1243527 has claimed it. 00:07:11.687 [2024-11-02 14:23:03.652533] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:12.252 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1243652) - No such process 00:07:12.252 ERROR: process (pid: 1243652) is no longer running 00:07:12.252 14:23:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:12.252 14:23:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:12.252 14:23:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:12.252 14:23:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:12.252 14:23:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:12.252 14:23:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:12.252 14:23:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1243527 00:07:12.252 14:23:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1243527 00:07:12.252 14:23:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:12.818 lslocks: write error 00:07:12.818 14:23:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1243527 00:07:12.818 14:23:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1243527 ']' 00:07:12.818 14:23:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1243527 00:07:12.818 14:23:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:12.818 14:23:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:12.818 14:23:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1243527 00:07:12.818 14:23:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:12.818 14:23:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:12.818 14:23:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1243527' 00:07:12.818 killing process with pid 1243527 00:07:12.818 14:23:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1243527 00:07:12.818 14:23:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1243527 00:07:13.076 00:07:13.076 real 0m2.092s 00:07:13.076 user 0m2.299s 00:07:13.076 sys 0m0.665s 00:07:13.076 14:23:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:07:13.076 14:23:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:13.076 ************************************ 00:07:13.076 END TEST locking_app_on_locked_coremask 00:07:13.076 ************************************ 00:07:13.076 14:23:05 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:13.076 14:23:05 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:13.076 14:23:05 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:13.076 14:23:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:13.335 ************************************ 00:07:13.335 START TEST locking_overlapped_coremask 00:07:13.335 ************************************ 00:07:13.335 14:23:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:07:13.335 14:23:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1243825 00:07:13.335 14:23:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:13.335 14:23:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1243825 /var/tmp/spdk.sock 00:07:13.335 14:23:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 1243825 ']' 00:07:13.335 14:23:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.335 14:23:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:13.335 14:23:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.335 14:23:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:13.335 14:23:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:13.335 [2024-11-02 14:23:05.213076] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:13.335 [2024-11-02 14:23:05.213164] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1243825 ] 00:07:13.335 [2024-11-02 14:23:05.276763] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:13.335 [2024-11-02 14:23:05.366872] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.335 [2024-11-02 14:23:05.366942] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:13.335 [2024-11-02 14:23:05.366945] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.593 14:23:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:13.593 14:23:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:13.593 14:23:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1243949 00:07:13.593 14:23:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:13.593 14:23:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1243949 /var/tmp/spdk2.sock 00:07:13.593 14:23:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:13.593 14:23:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1243949 /var/tmp/spdk2.sock 00:07:13.593 14:23:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:13.593 14:23:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:13.593 14:23:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:13.593 14:23:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:13.593 14:23:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1243949 /var/tmp/spdk2.sock 00:07:13.593 14:23:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 1243949 ']' 00:07:13.593 14:23:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:13.593 14:23:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:13.593 14:23:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:13.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:13.593 14:23:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:13.593 14:23:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:13.850 [2024-11-02 14:23:05.691708] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:13.850 [2024-11-02 14:23:05.691792] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1243949 ] 00:07:13.850 [2024-11-02 14:23:05.780520] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1243825 has claimed it. 00:07:13.850 [2024-11-02 14:23:05.780578] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:14.415 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1243949) - No such process 00:07:14.415 ERROR: process (pid: 1243949) is no longer running 00:07:14.415 14:23:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:14.415 14:23:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:14.415 14:23:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:14.415 14:23:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:14.415 14:23:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:14.415 14:23:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:14.415 14:23:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:14.415 14:23:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:14.415 14:23:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:14.415 14:23:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:14.415 14:23:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1243825 00:07:14.415 14:23:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 1243825 ']' 00:07:14.415 14:23:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 1243825 00:07:14.415 14:23:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:07:14.415 14:23:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:14.415 14:23:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1243825 00:07:14.415 14:23:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:14.415 14:23:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:14.415 14:23:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1243825' 00:07:14.415 killing process with pid 1243825 00:07:14.415 14:23:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 1243825 00:07:14.415 14:23:06 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 1243825 00:07:14.981 00:07:14.981 real 0m1.698s 00:07:14.981 user 0m4.626s 00:07:14.981 sys 0m0.477s 00:07:14.981 14:23:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:14.981 14:23:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:14.981 ************************************ 00:07:14.981 END TEST locking_overlapped_coremask 00:07:14.981 ************************************ 00:07:14.981 14:23:06 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:14.981 14:23:06 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:14.981 14:23:06 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:14.981 14:23:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:14.981 ************************************ 00:07:14.981 START TEST locking_overlapped_coremask_via_rpc 00:07:14.981 ************************************ 00:07:14.981 14:23:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:07:14.981 14:23:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1244120 00:07:14.981 14:23:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:14.981 14:23:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1244120 /var/tmp/spdk.sock 00:07:14.981 14:23:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1244120 ']' 00:07:14.981 14:23:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.981 14:23:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:14.981 14:23:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.981 14:23:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:14.981 14:23:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.981 [2024-11-02 14:23:06.963020] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:14.981 [2024-11-02 14:23:06.963103] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1244120 ] 00:07:14.981 [2024-11-02 14:23:07.026705] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:14.981 [2024-11-02 14:23:07.026745] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:15.239 [2024-11-02 14:23:07.116064] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:15.239 [2024-11-02 14:23:07.116132] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:15.239 [2024-11-02 14:23:07.116135] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.497 14:23:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:15.497 14:23:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:15.497 14:23:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1244130 00:07:15.497 14:23:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:15.497 14:23:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1244130 /var/tmp/spdk2.sock 00:07:15.497 14:23:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1244130 ']' 00:07:15.497 14:23:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:15.497 14:23:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:15.497 14:23:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:15.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:15.497 14:23:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:15.497 14:23:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.497 [2024-11-02 14:23:07.439952] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:15.497 [2024-11-02 14:23:07.440037] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1244130 ] 00:07:15.497 [2024-11-02 14:23:07.527423] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:15.497 [2024-11-02 14:23:07.527469] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:15.755 [2024-11-02 14:23:07.704460] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:15.755 [2024-11-02 14:23:07.704522] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:07:15.755 [2024-11-02 14:23:07.704524] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:16.688 14:23:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:16.688 14:23:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:16.688 14:23:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:16.688 14:23:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.688 14:23:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.688 14:23:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.688 14:23:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:16.688 14:23:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:16.688 14:23:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:16.688 14:23:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:16.688 14:23:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:16.688 14:23:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:16.688 14:23:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:16.688 14:23:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:16.688 14:23:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.688 14:23:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.688 [2024-11-02 14:23:08.433351] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1244120 has claimed it. 
00:07:16.688 request: 00:07:16.688 { 00:07:16.688 "method": "framework_enable_cpumask_locks", 00:07:16.688 "req_id": 1 00:07:16.688 } 00:07:16.688 Got JSON-RPC error response 00:07:16.688 response: 00:07:16.688 { 00:07:16.689 "code": -32603, 00:07:16.689 "message": "Failed to claim CPU core: 2" 00:07:16.689 } 00:07:16.689 14:23:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:16.689 14:23:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:16.689 14:23:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:16.689 14:23:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:16.689 14:23:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:16.689 14:23:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1244120 /var/tmp/spdk.sock 00:07:16.689 14:23:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1244120 ']' 00:07:16.689 14:23:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.689 14:23:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:16.689 14:23:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.689 14:23:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:16.689 14:23:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.689 14:23:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:16.689 14:23:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:16.689 14:23:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1244130 /var/tmp/spdk2.sock 00:07:16.689 14:23:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1244130 ']' 00:07:16.689 14:23:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:16.689 14:23:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:16.689 14:23:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:16.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:16.689 14:23:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:16.689 14:23:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.946 14:23:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:16.946 14:23:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:16.946 14:23:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:16.946 14:23:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:16.946 14:23:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:16.946 14:23:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:16.946 00:07:16.946 real 0m2.075s 00:07:16.946 user 0m1.119s 00:07:16.946 sys 0m0.179s 00:07:16.946 14:23:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:16.946 14:23:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.946 ************************************ 00:07:16.946 END TEST locking_overlapped_coremask_via_rpc 00:07:16.946 ************************************ 00:07:16.946 14:23:08 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:16.946 14:23:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1244120 ]] 00:07:16.946 14:23:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1244120 00:07:16.946 14:23:08 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1244120 ']' 00:07:16.946 14:23:08 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1244120 00:07:16.946 14:23:08 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:17.205 14:23:09 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:17.205 14:23:09 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1244120 00:07:17.205 14:23:09 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:17.205 14:23:09 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:17.205 14:23:09 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1244120' 00:07:17.205 killing process with pid 1244120 00:07:17.205 14:23:09 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 1244120 00:07:17.205 14:23:09 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 1244120 00:07:17.463 14:23:09 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1244130 ]] 00:07:17.463 14:23:09 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1244130 00:07:17.463 14:23:09 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1244130 ']' 00:07:17.463 14:23:09 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1244130 00:07:17.463 14:23:09 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:17.463 14:23:09 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:07:17.463 14:23:09 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1244130 00:07:17.463 14:23:09 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:17.463 14:23:09 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:17.463 14:23:09 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1244130' 00:07:17.463 killing process with pid 1244130 00:07:17.463 14:23:09 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 1244130 00:07:17.463 14:23:09 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 1244130 00:07:18.028 14:23:09 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:18.028 14:23:09 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:18.028 14:23:09 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1244120 ]] 00:07:18.028 14:23:09 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1244120 00:07:18.028 14:23:09 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1244120 ']' 00:07:18.028 14:23:09 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1244120 00:07:18.028 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1244120) - No such process 00:07:18.028 14:23:09 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 1244120 is not found' 00:07:18.028 Process with pid 1244120 is not found 00:07:18.028 14:23:09 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1244130 ]] 00:07:18.028 14:23:09 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1244130 00:07:18.028 14:23:09 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1244130 ']' 00:07:18.028 14:23:09 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1244130 00:07:18.028 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1244130) - No such process 00:07:18.028 14:23:09 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 1244130 is not found' 00:07:18.028 Process with pid 1244130 is not found 00:07:18.028 14:23:09 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:18.028 00:07:18.028 real 0m16.473s 00:07:18.028 user 0m28.986s 00:07:18.028 sys 0m5.636s 00:07:18.028 14:23:09 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:18.028 14:23:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:18.028 ************************************ 00:07:18.028 END TEST cpu_locks 00:07:18.028 ************************************ 00:07:18.028 00:07:18.028 real 0m43.794s 00:07:18.028 user 1m24.036s 00:07:18.028 sys 0m9.790s 00:07:18.028 14:23:09 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:18.028 14:23:09 event -- common/autotest_common.sh@10 -- # set +x 00:07:18.028 ************************************ 00:07:18.028 END TEST event 00:07:18.028 ************************************ 00:07:18.028 14:23:09 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:18.028 14:23:09 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:18.028 14:23:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:18.028 14:23:09 -- common/autotest_common.sh@10 -- # set +x 00:07:18.028 ************************************ 00:07:18.028 START TEST thread 00:07:18.028 ************************************ 00:07:18.028 14:23:09 thread -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:18.028 * Looking for test storage... 00:07:18.028 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:18.028 14:23:10 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:18.028 14:23:10 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:07:18.028 14:23:10 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:18.286 14:23:10 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:18.286 14:23:10 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:18.286 14:23:10 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:18.286 14:23:10 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:18.286 14:23:10 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:18.286 14:23:10 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:18.286 14:23:10 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:18.286 14:23:10 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:18.286 14:23:10 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:18.286 14:23:10 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:18.286 14:23:10 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:18.286 14:23:10 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:18.286 14:23:10 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:18.287 14:23:10 thread -- scripts/common.sh@345 -- # : 1 00:07:18.287 14:23:10 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:18.287 14:23:10 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:18.287 14:23:10 thread -- scripts/common.sh@365 -- # decimal 1 00:07:18.287 14:23:10 thread -- scripts/common.sh@353 -- # local d=1 00:07:18.287 14:23:10 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:18.287 14:23:10 thread -- scripts/common.sh@355 -- # echo 1 00:07:18.287 14:23:10 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:18.287 14:23:10 thread -- scripts/common.sh@366 -- # decimal 2 00:07:18.287 14:23:10 thread -- scripts/common.sh@353 -- # local d=2 00:07:18.287 14:23:10 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:18.287 14:23:10 thread -- scripts/common.sh@355 -- # echo 2 00:07:18.287 14:23:10 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:18.287 14:23:10 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:18.287 14:23:10 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:18.287 14:23:10 thread -- scripts/common.sh@368 -- # return 0 00:07:18.287 14:23:10 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:18.287 14:23:10 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:18.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.287 --rc genhtml_branch_coverage=1 00:07:18.287 --rc genhtml_function_coverage=1 00:07:18.287 --rc genhtml_legend=1 00:07:18.287 --rc geninfo_all_blocks=1 00:07:18.287 --rc geninfo_unexecuted_blocks=1 00:07:18.287 00:07:18.287 ' 00:07:18.287 14:23:10 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:18.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.287 --rc genhtml_branch_coverage=1 00:07:18.287 --rc genhtml_function_coverage=1 00:07:18.287 --rc genhtml_legend=1 00:07:18.287 --rc geninfo_all_blocks=1 00:07:18.287 --rc geninfo_unexecuted_blocks=1 00:07:18.287 
00:07:18.287 ' 00:07:18.287 14:23:10 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:18.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.287 --rc genhtml_branch_coverage=1 00:07:18.287 --rc genhtml_function_coverage=1 00:07:18.287 --rc genhtml_legend=1 00:07:18.287 --rc geninfo_all_blocks=1 00:07:18.287 --rc geninfo_unexecuted_blocks=1 00:07:18.287 00:07:18.287 ' 00:07:18.287 14:23:10 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:18.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.287 --rc genhtml_branch_coverage=1 00:07:18.287 --rc genhtml_function_coverage=1 00:07:18.287 --rc genhtml_legend=1 00:07:18.287 --rc geninfo_all_blocks=1 00:07:18.287 --rc geninfo_unexecuted_blocks=1 00:07:18.287 00:07:18.287 ' 00:07:18.287 14:23:10 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:18.287 14:23:10 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:18.287 14:23:10 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:18.287 14:23:10 thread -- common/autotest_common.sh@10 -- # set +x 00:07:18.287 ************************************ 00:07:18.287 START TEST thread_poller_perf 00:07:18.287 ************************************ 00:07:18.287 14:23:10 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:18.287 [2024-11-02 14:23:10.181437] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:18.287 [2024-11-02 14:23:10.181496] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1244624 ] 00:07:18.287 [2024-11-02 14:23:10.244405] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.287 [2024-11-02 14:23:10.335247] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.287 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:19.659 [2024-11-02T13:23:11.714Z] ====================================== 00:07:19.659 [2024-11-02T13:23:11.714Z] busy:2711062943 (cyc) 00:07:19.659 [2024-11-02T13:23:11.714Z] total_run_count: 291000 00:07:19.659 [2024-11-02T13:23:11.714Z] tsc_hz: 2700000000 (cyc) 00:07:19.659 [2024-11-02T13:23:11.714Z] ====================================== 00:07:19.659 [2024-11-02T13:23:11.714Z] poller_cost: 9316 (cyc), 3450 (nsec) 00:07:19.659 00:07:19.659 real 0m1.255s 00:07:19.659 user 0m1.171s 00:07:19.659 sys 0m0.078s 00:07:19.659 14:23:11 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:19.659 14:23:11 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:19.659 ************************************ 00:07:19.659 END TEST thread_poller_perf 00:07:19.659 ************************************ 00:07:19.659 14:23:11 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:19.659 14:23:11 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:19.660 14:23:11 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:19.660 14:23:11 thread -- common/autotest_common.sh@10 -- # set +x 00:07:19.660 ************************************ 00:07:19.660 START TEST thread_poller_perf 00:07:19.660 ************************************ 00:07:19.660 14:23:11 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:19.660 [2024-11-02 14:23:11.488576] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:19.660 [2024-11-02 14:23:11.488641] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1244783 ] 00:07:19.660 [2024-11-02 14:23:11.554317] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.660 [2024-11-02 14:23:11.647856] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.660 Running 1000 pollers for 1 seconds with 0 microseconds period. 
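The poller_cost line just above follows from the counters printed with it: busy cycles divided by total_run_count gives the per-call cost in cycles, and scaling by tsc_hz converts that to nanoseconds. A minimal cross-check with the values from the 1-microsecond-period run (numbers copied from the log above; the zero-period run whose results follow can be checked the same way):

#!/usr/bin/env bash
# Cross-check of the first thread_poller_perf run, values copied from the log above.
busy=2711062943          # busy (cyc)
runs=291000              # total_run_count
tsc_hz=2700000000        # tsc_hz (cyc/s)
cost_cyc=$(( busy / runs ))                          # -> 9316 cycles per poller call
cost_nsec=$(( cost_cyc * 1000000000 / tsc_hz ))      # -> 3450 nanoseconds per call
echo "poller_cost: ${cost_cyc} (cyc), ${cost_nsec} (nsec)"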
00:07:21.032 [2024-11-02T13:23:13.087Z] ====================================== 00:07:21.032 [2024-11-02T13:23:13.087Z] busy:2702925968 (cyc) 00:07:21.032 [2024-11-02T13:23:13.087Z] total_run_count: 3862000 00:07:21.032 [2024-11-02T13:23:13.087Z] tsc_hz: 2700000000 (cyc) 00:07:21.032 [2024-11-02T13:23:13.087Z] ====================================== 00:07:21.032 [2024-11-02T13:23:13.087Z] poller_cost: 699 (cyc), 258 (nsec) 00:07:21.032 00:07:21.032 real 0m1.256s 00:07:21.032 user 0m1.164s 00:07:21.032 sys 0m0.085s 00:07:21.032 14:23:12 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:21.032 14:23:12 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:21.032 ************************************ 00:07:21.032 END TEST thread_poller_perf 00:07:21.032 ************************************ 00:07:21.032 14:23:12 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:21.032 00:07:21.032 real 0m2.754s 00:07:21.032 user 0m2.472s 00:07:21.032 sys 0m0.284s 00:07:21.032 14:23:12 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:21.032 14:23:12 thread -- common/autotest_common.sh@10 -- # set +x 00:07:21.032 ************************************ 00:07:21.032 END TEST thread 00:07:21.032 ************************************ 00:07:21.032 14:23:12 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:21.032 14:23:12 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:21.032 14:23:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:21.032 14:23:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:21.032 14:23:12 -- common/autotest_common.sh@10 -- # set +x 00:07:21.032 ************************************ 00:07:21.032 START TEST app_cmdline 00:07:21.032 ************************************ 00:07:21.032 14:23:12 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:21.032 * Looking for test storage... 
00:07:21.032 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:21.032 14:23:12 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:21.032 14:23:12 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:07:21.032 14:23:12 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:21.032 14:23:12 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:21.032 14:23:12 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:21.032 14:23:12 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:21.032 14:23:12 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:21.032 14:23:12 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:21.032 14:23:12 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:21.032 14:23:12 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:21.033 14:23:12 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:21.033 14:23:12 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:21.033 14:23:12 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:21.033 14:23:12 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:21.033 14:23:12 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:21.033 14:23:12 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:21.033 14:23:12 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:21.033 14:23:12 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:21.033 14:23:12 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:21.033 14:23:12 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:21.033 14:23:12 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:21.033 14:23:12 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:21.033 14:23:12 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:21.033 14:23:12 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:21.033 14:23:12 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:21.033 14:23:12 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:21.033 14:23:12 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:21.033 14:23:12 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:21.033 14:23:12 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:21.033 14:23:12 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:21.033 14:23:12 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:21.033 14:23:12 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:21.033 14:23:12 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:21.033 14:23:12 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:21.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.033 --rc genhtml_branch_coverage=1 00:07:21.033 --rc genhtml_function_coverage=1 00:07:21.033 --rc genhtml_legend=1 00:07:21.033 --rc geninfo_all_blocks=1 00:07:21.033 --rc geninfo_unexecuted_blocks=1 00:07:21.033 00:07:21.033 ' 00:07:21.033 14:23:12 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:21.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.033 --rc genhtml_branch_coverage=1 00:07:21.033 --rc genhtml_function_coverage=1 00:07:21.033 --rc genhtml_legend=1 00:07:21.033 --rc geninfo_all_blocks=1 00:07:21.033 --rc geninfo_unexecuted_blocks=1 
00:07:21.033 00:07:21.033 ' 00:07:21.033 14:23:12 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:21.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.033 --rc genhtml_branch_coverage=1 00:07:21.033 --rc genhtml_function_coverage=1 00:07:21.033 --rc genhtml_legend=1 00:07:21.033 --rc geninfo_all_blocks=1 00:07:21.033 --rc geninfo_unexecuted_blocks=1 00:07:21.033 00:07:21.033 ' 00:07:21.033 14:23:12 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:21.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.033 --rc genhtml_branch_coverage=1 00:07:21.033 --rc genhtml_function_coverage=1 00:07:21.033 --rc genhtml_legend=1 00:07:21.033 --rc geninfo_all_blocks=1 00:07:21.033 --rc geninfo_unexecuted_blocks=1 00:07:21.033 00:07:21.033 ' 00:07:21.033 14:23:12 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:21.033 14:23:12 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1244991 00:07:21.033 14:23:12 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:21.033 14:23:12 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1244991 00:07:21.033 14:23:12 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 1244991 ']' 00:07:21.033 14:23:12 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.033 14:23:12 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:21.033 14:23:12 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.033 14:23:12 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:21.033 14:23:12 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:21.033 [2024-11-02 14:23:13.003969] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:21.033 [2024-11-02 14:23:13.004065] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1244991 ] 00:07:21.033 [2024-11-02 14:23:13.061814] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.291 [2024-11-02 14:23:13.149911] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.549 14:23:13 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:21.549 14:23:13 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:07:21.549 14:23:13 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:21.807 { 00:07:21.807 "version": "SPDK v24.09.1-pre git sha1 b18e1bd62", 00:07:21.807 "fields": { 00:07:21.807 "major": 24, 00:07:21.807 "minor": 9, 00:07:21.807 "patch": 1, 00:07:21.807 "suffix": "-pre", 00:07:21.807 "commit": "b18e1bd62" 00:07:21.807 } 00:07:21.807 } 00:07:21.807 14:23:13 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:21.807 14:23:13 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:21.807 14:23:13 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:21.807 14:23:13 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:21.807 14:23:13 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:21.807 14:23:13 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:21.807 14:23:13 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.807 14:23:13 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:21.807 14:23:13 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:21.807 14:23:13 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.807 14:23:13 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:21.807 14:23:13 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:21.807 14:23:13 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:21.807 14:23:13 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:07:21.807 14:23:13 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:21.807 14:23:13 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:21.807 14:23:13 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:21.807 14:23:13 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:21.807 14:23:13 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:21.807 14:23:13 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:21.807 14:23:13 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:21.807 14:23:13 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:21.807 14:23:13 app_cmdline -- common/autotest_common.sh@644 
-- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:21.807 14:23:13 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:22.065 request: 00:07:22.065 { 00:07:22.065 "method": "env_dpdk_get_mem_stats", 00:07:22.065 "req_id": 1 00:07:22.065 } 00:07:22.065 Got JSON-RPC error response 00:07:22.065 response: 00:07:22.065 { 00:07:22.065 "code": -32601, 00:07:22.065 "message": "Method not found" 00:07:22.065 } 00:07:22.065 14:23:14 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:07:22.065 14:23:14 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:22.065 14:23:14 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:22.065 14:23:14 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:22.065 14:23:14 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1244991 00:07:22.065 14:23:14 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 1244991 ']' 00:07:22.065 14:23:14 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 1244991 00:07:22.065 14:23:14 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:07:22.065 14:23:14 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:22.065 14:23:14 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1244991 00:07:22.065 14:23:14 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:22.065 14:23:14 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:22.065 14:23:14 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1244991' 00:07:22.065 killing process with pid 1244991 00:07:22.065 14:23:14 app_cmdline -- common/autotest_common.sh@969 -- # kill 1244991 00:07:22.065 14:23:14 app_cmdline -- common/autotest_common.sh@974 -- # wait 1244991 00:07:22.631 00:07:22.631 real 0m1.717s 00:07:22.631 user 0m2.104s 00:07:22.631 sys 0m0.521s 00:07:22.631 14:23:14 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:22.631 14:23:14 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:22.631 ************************************ 00:07:22.631 END TEST app_cmdline 00:07:22.631 ************************************ 00:07:22.631 14:23:14 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:22.631 14:23:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:22.631 14:23:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:22.631 14:23:14 -- common/autotest_common.sh@10 -- # set +x 00:07:22.631 ************************************ 00:07:22.631 START TEST version 00:07:22.631 ************************************ 00:07:22.631 14:23:14 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:22.631 * Looking for test storage... 
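The app_cmdline run that just finished starts spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two RPCs are reachable over /var/tmp/spdk.sock; the env_dpdk_get_mem_stats call is expected to be rejected with the JSON-RPC -32601 "Method not found" error shown above. A rough sketch of the same three calls against a target started that way (rpc.py path and method names as they appear in this run; an already-running target is assumed):

#!/usr/bin/env bash
# Assumes spdk_tgt is already running with:
#   --rpcs-allowed spdk_get_version,rpc_get_methods
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc spdk_get_version          # allowed: returns the version object (SPDK v24.09.1-pre here)
$rpc rpc_get_methods           # allowed: lists exactly the two permitted methods
$rpc env_dpdk_get_mem_stats    # not allowed: fails with "Method not found" (code -32601)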
00:07:22.631 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:22.631 14:23:14 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:22.631 14:23:14 version -- common/autotest_common.sh@1681 -- # lcov --version 00:07:22.631 14:23:14 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:22.917 14:23:14 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:22.917 14:23:14 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:22.917 14:23:14 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:22.917 14:23:14 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:22.917 14:23:14 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:22.917 14:23:14 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:22.917 14:23:14 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:22.917 14:23:14 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:22.917 14:23:14 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:22.917 14:23:14 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:22.917 14:23:14 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:22.917 14:23:14 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:22.917 14:23:14 version -- scripts/common.sh@344 -- # case "$op" in 00:07:22.917 14:23:14 version -- scripts/common.sh@345 -- # : 1 00:07:22.917 14:23:14 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:22.917 14:23:14 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:22.917 14:23:14 version -- scripts/common.sh@365 -- # decimal 1 00:07:22.917 14:23:14 version -- scripts/common.sh@353 -- # local d=1 00:07:22.917 14:23:14 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:22.917 14:23:14 version -- scripts/common.sh@355 -- # echo 1 00:07:22.917 14:23:14 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:22.917 14:23:14 version -- scripts/common.sh@366 -- # decimal 2 00:07:22.917 14:23:14 version -- scripts/common.sh@353 -- # local d=2 00:07:22.917 14:23:14 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:22.917 14:23:14 version -- scripts/common.sh@355 -- # echo 2 00:07:22.917 14:23:14 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:22.917 14:23:14 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:22.917 14:23:14 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:22.917 14:23:14 version -- scripts/common.sh@368 -- # return 0 00:07:22.917 14:23:14 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:22.917 14:23:14 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:22.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.917 --rc genhtml_branch_coverage=1 00:07:22.917 --rc genhtml_function_coverage=1 00:07:22.917 --rc genhtml_legend=1 00:07:22.918 --rc geninfo_all_blocks=1 00:07:22.918 --rc geninfo_unexecuted_blocks=1 00:07:22.918 00:07:22.918 ' 00:07:22.918 14:23:14 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:22.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.918 --rc genhtml_branch_coverage=1 00:07:22.918 --rc genhtml_function_coverage=1 00:07:22.918 --rc genhtml_legend=1 00:07:22.918 --rc geninfo_all_blocks=1 00:07:22.918 --rc geninfo_unexecuted_blocks=1 00:07:22.918 00:07:22.918 ' 00:07:22.918 14:23:14 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:22.918 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.918 --rc genhtml_branch_coverage=1 00:07:22.918 --rc genhtml_function_coverage=1 00:07:22.918 --rc genhtml_legend=1 00:07:22.918 --rc geninfo_all_blocks=1 00:07:22.918 --rc geninfo_unexecuted_blocks=1 00:07:22.918 00:07:22.918 ' 00:07:22.918 14:23:14 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:22.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.918 --rc genhtml_branch_coverage=1 00:07:22.918 --rc genhtml_function_coverage=1 00:07:22.918 --rc genhtml_legend=1 00:07:22.918 --rc geninfo_all_blocks=1 00:07:22.918 --rc geninfo_unexecuted_blocks=1 00:07:22.918 00:07:22.918 ' 00:07:22.918 14:23:14 version -- app/version.sh@17 -- # get_header_version major 00:07:22.918 14:23:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:22.918 14:23:14 version -- app/version.sh@14 -- # cut -f2 00:07:22.918 14:23:14 version -- app/version.sh@14 -- # tr -d '"' 00:07:22.918 14:23:14 version -- app/version.sh@17 -- # major=24 00:07:22.918 14:23:14 version -- app/version.sh@18 -- # get_header_version minor 00:07:22.918 14:23:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:22.918 14:23:14 version -- app/version.sh@14 -- # cut -f2 00:07:22.918 14:23:14 version -- app/version.sh@14 -- # tr -d '"' 00:07:22.918 14:23:14 version -- app/version.sh@18 -- # minor=9 00:07:22.918 14:23:14 version -- app/version.sh@19 -- # get_header_version patch 00:07:22.918 14:23:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:22.918 14:23:14 version -- app/version.sh@14 -- # cut -f2 00:07:22.918 14:23:14 version -- app/version.sh@14 -- # tr -d '"' 00:07:22.918 14:23:14 version -- app/version.sh@19 -- # patch=1 00:07:22.918 14:23:14 version -- app/version.sh@20 -- # get_header_version suffix 00:07:22.918 14:23:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:22.918 14:23:14 version -- app/version.sh@14 -- # cut -f2 00:07:22.918 14:23:14 version -- app/version.sh@14 -- # tr -d '"' 00:07:22.918 14:23:14 version -- app/version.sh@20 -- # suffix=-pre 00:07:22.918 14:23:14 version -- app/version.sh@22 -- # version=24.9 00:07:22.918 14:23:14 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:22.918 14:23:14 version -- app/version.sh@25 -- # version=24.9.1 00:07:22.918 14:23:14 version -- app/version.sh@28 -- # version=24.9.1rc0 00:07:22.918 14:23:14 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:22.918 14:23:14 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:22.918 14:23:14 version -- app/version.sh@30 -- # py_version=24.9.1rc0 00:07:22.918 14:23:14 version -- app/version.sh@31 -- # [[ 24.9.1rc0 == \2\4\.\9\.\1\r\c\0 ]] 00:07:22.918 00:07:22.918 real 0m0.200s 00:07:22.918 user 0m0.130s 00:07:22.918 sys 0m0.096s 00:07:22.918 14:23:14 
version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:22.918 14:23:14 version -- common/autotest_common.sh@10 -- # set +x 00:07:22.918 ************************************ 00:07:22.918 END TEST version 00:07:22.918 ************************************ 00:07:22.918 14:23:14 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:22.918 14:23:14 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:22.918 14:23:14 -- spdk/autotest.sh@194 -- # uname -s 00:07:22.918 14:23:14 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:22.918 14:23:14 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:22.918 14:23:14 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:22.918 14:23:14 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:22.918 14:23:14 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:07:22.918 14:23:14 -- spdk/autotest.sh@256 -- # timing_exit lib 00:07:22.918 14:23:14 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:22.918 14:23:14 -- common/autotest_common.sh@10 -- # set +x 00:07:22.918 14:23:14 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:07:22.918 14:23:14 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:07:22.918 14:23:14 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:07:22.918 14:23:14 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:07:22.918 14:23:14 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:07:22.918 14:23:14 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:07:22.918 14:23:14 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:22.918 14:23:14 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:22.918 14:23:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:22.918 14:23:14 -- common/autotest_common.sh@10 -- # set +x 00:07:22.918 ************************************ 00:07:22.918 START TEST nvmf_tcp 00:07:22.918 ************************************ 00:07:22.918 14:23:14 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:22.918 * Looking for test storage... 
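The version test above reads SPDK_VERSION_MAJOR/MINOR/PATCH/SUFFIX out of include/spdk/version.h with a grep | cut -f2 | tr -d '"' pipeline, builds 24.9, appends the non-zero patch, and maps the -pre suffix to rc0 before comparing against Python's spdk.__version__. Condensed into one sketch (header path taken from this run; the exact suffix handling inside version.sh is inferred from the values it prints):

#!/usr/bin/env bash
# Rebuild the 24.9.1rc0 string the same way the version test above does.
hdr=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h
get_field() { grep -E "^#define SPDK_VERSION_$1[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'; }
major=$(get_field MAJOR); minor=$(get_field MINOR)
patch=$(get_field PATCH); suffix=$(get_field SUFFIX)
version="$major.$minor"
(( patch != 0 )) && version="$version.$patch"      # 24.9 -> 24.9.1
[[ $suffix == -pre ]] && version="${version}rc0"   # -pre -> rc0 (assumed mapping)
echo "$version"                                    # prints 24.9.1rc0 for this tree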
00:07:22.918 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:22.918 14:23:14 nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:22.918 14:23:14 nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:07:22.918 14:23:14 nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:23.199 14:23:14 nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:23.199 14:23:14 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:23.199 14:23:14 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:23.199 14:23:14 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:23.199 14:23:14 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:23.199 14:23:14 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:23.199 14:23:14 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:23.199 14:23:14 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:23.199 14:23:14 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:23.199 14:23:14 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:23.199 14:23:14 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:23.199 14:23:14 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:23.199 14:23:14 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:23.199 14:23:14 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:07:23.199 14:23:14 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:23.199 14:23:14 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:23.199 14:23:14 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:23.199 14:23:14 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:07:23.199 14:23:14 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:23.199 14:23:14 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:07:23.199 14:23:14 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:23.199 14:23:14 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:23.199 14:23:14 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:07:23.199 14:23:14 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:23.199 14:23:14 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:07:23.199 14:23:14 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:23.199 14:23:14 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:23.199 14:23:14 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:23.199 14:23:14 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:07:23.199 14:23:14 nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:23.199 14:23:14 nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:23.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.199 --rc genhtml_branch_coverage=1 00:07:23.199 --rc genhtml_function_coverage=1 00:07:23.199 --rc genhtml_legend=1 00:07:23.199 --rc geninfo_all_blocks=1 00:07:23.199 --rc geninfo_unexecuted_blocks=1 00:07:23.199 00:07:23.199 ' 00:07:23.199 14:23:14 nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:23.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.199 --rc genhtml_branch_coverage=1 00:07:23.199 --rc genhtml_function_coverage=1 00:07:23.199 --rc genhtml_legend=1 00:07:23.199 --rc geninfo_all_blocks=1 00:07:23.199 --rc geninfo_unexecuted_blocks=1 00:07:23.199 00:07:23.199 ' 00:07:23.199 14:23:14 nvmf_tcp -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:07:23.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.199 --rc genhtml_branch_coverage=1 00:07:23.199 --rc genhtml_function_coverage=1 00:07:23.199 --rc genhtml_legend=1 00:07:23.199 --rc geninfo_all_blocks=1 00:07:23.199 --rc geninfo_unexecuted_blocks=1 00:07:23.199 00:07:23.199 ' 00:07:23.199 14:23:14 nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:23.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.199 --rc genhtml_branch_coverage=1 00:07:23.199 --rc genhtml_function_coverage=1 00:07:23.199 --rc genhtml_legend=1 00:07:23.199 --rc geninfo_all_blocks=1 00:07:23.199 --rc geninfo_unexecuted_blocks=1 00:07:23.199 00:07:23.199 ' 00:07:23.199 14:23:14 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:23.199 14:23:14 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:23.199 14:23:14 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:23.199 14:23:14 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:23.199 14:23:14 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:23.199 14:23:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:23.199 ************************************ 00:07:23.199 START TEST nvmf_target_core 00:07:23.199 ************************************ 00:07:23.199 14:23:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:23.199 * Looking for test storage... 00:07:23.199 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:23.199 14:23:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:23.199 14:23:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lcov --version 00:07:23.199 14:23:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:23.199 14:23:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:23.199 14:23:15 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:23.199 14:23:15 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:23.199 14:23:15 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:23.199 14:23:15 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:07:23.199 14:23:15 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:07:23.199 14:23:15 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:07:23.199 14:23:15 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:07:23.199 14:23:15 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:07:23.199 14:23:15 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:07:23.199 14:23:15 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:07:23.199 14:23:15 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:23.199 14:23:15 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:07:23.199 14:23:15 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:07:23.199 14:23:15 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:23.199 14:23:15 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:23.199 14:23:15 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:07:23.199 14:23:15 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:07:23.199 14:23:15 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:23.199 14:23:15 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:07:23.199 14:23:15 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:07:23.199 14:23:15 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:07:23.199 14:23:15 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:07:23.199 14:23:15 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:23.199 14:23:15 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:07:23.199 14:23:15 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:07:23.199 14:23:15 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:23.199 14:23:15 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:23.199 14:23:15 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:07:23.199 14:23:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:23.199 14:23:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:23.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.199 --rc genhtml_branch_coverage=1 00:07:23.199 --rc genhtml_function_coverage=1 00:07:23.199 --rc genhtml_legend=1 00:07:23.199 --rc geninfo_all_blocks=1 00:07:23.199 --rc geninfo_unexecuted_blocks=1 00:07:23.199 00:07:23.199 ' 00:07:23.199 14:23:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:23.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.199 --rc genhtml_branch_coverage=1 00:07:23.199 --rc genhtml_function_coverage=1 00:07:23.199 --rc genhtml_legend=1 00:07:23.199 --rc geninfo_all_blocks=1 00:07:23.199 --rc geninfo_unexecuted_blocks=1 00:07:23.199 00:07:23.199 ' 00:07:23.199 14:23:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:23.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.199 --rc genhtml_branch_coverage=1 00:07:23.199 --rc genhtml_function_coverage=1 00:07:23.199 --rc genhtml_legend=1 00:07:23.199 --rc geninfo_all_blocks=1 00:07:23.199 --rc geninfo_unexecuted_blocks=1 00:07:23.199 00:07:23.199 ' 00:07:23.199 14:23:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:23.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.199 --rc genhtml_branch_coverage=1 00:07:23.199 --rc genhtml_function_coverage=1 00:07:23.200 --rc genhtml_legend=1 00:07:23.200 --rc geninfo_all_blocks=1 00:07:23.200 --rc geninfo_unexecuted_blocks=1 00:07:23.200 00:07:23.200 ' 00:07:23.200 14:23:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:23.200 14:23:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:23.200 14:23:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:23.200 14:23:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:23.200 14:23:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:23.200 14:23:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:23.200 14:23:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:23.200 14:23:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:23.200 14:23:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:23.200 14:23:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:23.200 14:23:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:23.200 14:23:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:23.200 14:23:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:23.200 14:23:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:23.200 14:23:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:23.200 14:23:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:23.200 14:23:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:23.200 14:23:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:23.200 14:23:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:23.200 14:23:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:23.200 14:23:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:23.200 14:23:15 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:07:23.200 14:23:15 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:23.200 14:23:15 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:23.200 14:23:15 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:23.200 14:23:15 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.200 14:23:15 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.200 14:23:15 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.200 14:23:15 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:23.200 14:23:15 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.200 14:23:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:07:23.200 14:23:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:23.200 14:23:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:23.200 14:23:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:23.200 14:23:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:23.200 14:23:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:23.200 14:23:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:23.200 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:23.200 14:23:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:23.200 14:23:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:23.200 14:23:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:23.200 14:23:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:23.200 14:23:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:23.200 14:23:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:23.200 14:23:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:23.200 14:23:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:23.200 14:23:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:23.200 14:23:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:23.200 
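The "[: : integer expression expected" message above is bash reporting that line 33 of test/nvmf/common.sh ran a numeric test on an empty string ('[' '' -eq 1 ']'); the failed test is simply treated as false and the script carries on, and the same message reappears when common.sh is sourced again for the abort test below. A defensive form of that kind of check (SOME_FLAG is an illustrative name only; the real variable tested at common.sh:33 is not visible in this log):

#!/usr/bin/env bash
# Guarded "is this flag set to 1" check that stays quiet when the flag is empty or unset.
SOME_FLAG=""                          # empty, as in the run above
if [ "${SOME_FLAG:-0}" -eq 1 ]; then  # default to 0 instead of comparing an empty string
    echo "flag enabled"
else
    echo "flag disabled or unset"
fi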
************************************ 00:07:23.200 START TEST nvmf_abort 00:07:23.200 ************************************ 00:07:23.200 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:23.200 * Looking for test storage... 00:07:23.200 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:23.200 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:23.200 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # lcov --version 00:07:23.200 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:23.459 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:23.459 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:23.459 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:23.459 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:23.459 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:07:23.459 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:07:23.459 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:07:23.459 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:07:23.459 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:07:23.459 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:07:23.459 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:07:23.459 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:23.459 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:07:23.459 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:07:23.459 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:23.459 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:23.459 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:07:23.459 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:07:23.459 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:23.459 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:07:23.459 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:07:23.459 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:07:23.459 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:07:23.459 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:23.459 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:07:23.459 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:07:23.459 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:23.459 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:23.459 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:07:23.459 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:23.459 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:23.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.459 --rc genhtml_branch_coverage=1 00:07:23.459 --rc genhtml_function_coverage=1 00:07:23.459 --rc genhtml_legend=1 00:07:23.459 --rc geninfo_all_blocks=1 00:07:23.459 --rc geninfo_unexecuted_blocks=1 00:07:23.459 00:07:23.459 ' 00:07:23.459 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:23.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.459 --rc genhtml_branch_coverage=1 00:07:23.459 --rc genhtml_function_coverage=1 00:07:23.459 --rc genhtml_legend=1 00:07:23.459 --rc geninfo_all_blocks=1 00:07:23.459 --rc geninfo_unexecuted_blocks=1 00:07:23.459 00:07:23.459 ' 00:07:23.459 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:23.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.459 --rc genhtml_branch_coverage=1 00:07:23.459 --rc genhtml_function_coverage=1 00:07:23.459 --rc genhtml_legend=1 00:07:23.459 --rc geninfo_all_blocks=1 00:07:23.459 --rc geninfo_unexecuted_blocks=1 00:07:23.459 00:07:23.459 ' 00:07:23.459 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:23.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.459 --rc genhtml_branch_coverage=1 00:07:23.459 --rc genhtml_function_coverage=1 00:07:23.459 --rc genhtml_legend=1 00:07:23.459 --rc geninfo_all_blocks=1 00:07:23.459 --rc geninfo_unexecuted_blocks=1 00:07:23.459 00:07:23.459 ' 00:07:23.459 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:23.459 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:23.459 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:07:23.459 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:23.459 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:23.459 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:23.459 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:23.459 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:23.459 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:23.459 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:23.459 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:23.459 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:23.459 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:23.459 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:23.459 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:23.459 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:23.459 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:23.459 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:23.459 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:23.459 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:07:23.459 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:23.459 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:23.459 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:23.459 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.459 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.459 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.460 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:23.460 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.460 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:07:23.460 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:23.460 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:23.460 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:23.460 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:23.460 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:23.460 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:23.460 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:23.460 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:23.460 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:23.460 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:23.460 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:23.460 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:23.460 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
00:07:23.460 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:07:23.460 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:23.460 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@472 -- # prepare_net_devs 00:07:23.460 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@434 -- # local -g is_hw=no 00:07:23.460 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@436 -- # remove_spdk_ns 00:07:23.460 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:23.460 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:23.460 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:23.460 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:07:23.460 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:07:23.460 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:07:23.460 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:25.361 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:25.361 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:07:25.361 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:25.361 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:25.361 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:25.361 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:25.361 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:25.361 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:07:25.361 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:25.361 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:07:25.361 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:07:25.361 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:07:25.361 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:07:25.361 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:07:25.361 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:07:25.361 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:25.361 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:25.361 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:25.361 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:25.361 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:25.361 14:23:17 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:25.361 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:25.361 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:25.361 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:25.361 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:25.361 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:25.361 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:07:25.361 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:07:25.361 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:07:25.361 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:07:25.361 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:07:25.361 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:07:25.361 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:07:25.361 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:25.361 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:25.361 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:07:25.361 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:07:25.362 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:25.362 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:25.362 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:07:25.362 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:07:25.362 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:25.362 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:25.362 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:07:25.362 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:07:25.362 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:25.362 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:25.362 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:07:25.362 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:07:25.362 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:07:25.362 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:07:25.362 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:07:25.362 14:23:17 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:25.362 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:07:25.362 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:25.362 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ up == up ]] 00:07:25.362 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:07:25.362 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:25.362 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:25.362 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:25.362 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:07:25.362 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:07:25.362 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:25.362 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:07:25.362 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:25.362 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ up == up ]] 00:07:25.362 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:07:25.362 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:25.362 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:25.362 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:25.362 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:07:25.362 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:07:25.362 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # is_hw=yes 00:07:25.362 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:07:25.362 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:07:25.362 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:07:25.362 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:25.362 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:25.362 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:25.362 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:25.362 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:25.362 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:25.362 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:25.362 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:25.362 14:23:17 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:25.362 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:25.362 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:25.362 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:25.362 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:25.362 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:25.362 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:25.362 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:25.362 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:25.621 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:25.621 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:25.621 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:25.621 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:25.621 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:25.621 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:25.621 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:25.621 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:07:25.621 00:07:25.621 --- 10.0.0.2 ping statistics --- 00:07:25.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:25.621 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:07:25.621 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:25.621 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:25.621 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:07:25.621 00:07:25.621 --- 10.0.0.1 ping statistics --- 00:07:25.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:25.621 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:07:25.621 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:25.621 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # return 0 00:07:25.621 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:07:25.621 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:25.621 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:07:25.621 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:07:25.621 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:25.621 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:07:25.621 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:07:25.621 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:25.621 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:07:25.621 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:25.621 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:25.621 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@505 -- # nvmfpid=1247074 00:07:25.621 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@506 -- # waitforlisten 1247074 00:07:25.621 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:25.621 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 1247074 ']' 00:07:25.621 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.621 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:25.621 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.621 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:25.621 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:25.621 [2024-11-02 14:23:17.540875] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
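The nvmf_tcp_init sequence traced above reduces to the following commands; the interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are the ones reported in the trace (a condensed sketch, not the full helper):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target NIC moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # allow NVMe/TCP connections in
ping -c 1 10.0.0.2                                               # initiator -> target reachability check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                 # target -> initiator reachability check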
00:07:25.621 [2024-11-02 14:23:17.540962] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:25.621 [2024-11-02 14:23:17.609633] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:25.879 [2024-11-02 14:23:17.702726] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:25.879 [2024-11-02 14:23:17.702788] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:25.879 [2024-11-02 14:23:17.702804] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:25.879 [2024-11-02 14:23:17.702817] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:25.879 [2024-11-02 14:23:17.702829] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:25.879 [2024-11-02 14:23:17.702939] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:25.879 [2024-11-02 14:23:17.703027] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:25.879 [2024-11-02 14:23:17.703030] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:25.879 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:25.879 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:07:25.879 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:07:25.879 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:25.879 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:25.879 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:25.879 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:25.879 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.879 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:25.879 [2024-11-02 14:23:17.845090] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:25.880 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.880 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:25.880 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.880 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:25.880 Malloc0 00:07:25.880 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.880 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:25.880 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.880 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:25.880 Delay0 
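The target-side setup for the abort test, expressed as the equivalent scripts/rpc.py calls against the nvmf_tgt listening on /var/tmp/spdk.sock (arguments taken from the rpc_cmd traces above; a sketch of this part of the sequence only, the nqn.2016-06.io.spdk:cnode0 subsystem and its TCP listener are added next in the trace):

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0    # MALLOC_BDEV_SIZE=64 (MiB), MALLOC_BLOCK_SIZE=4096
scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000         # ~1 s average/p99 read and write latency (values in microseconds)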
00:07:25.880 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.880 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:25.880 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.880 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:25.880 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.880 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:25.880 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.880 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:25.880 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.880 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:25.880 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.880 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:25.880 [2024-11-02 14:23:17.917156] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:25.880 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.880 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:25.880 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.880 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:25.880 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.880 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:26.138 [2024-11-02 14:23:18.022376] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:28.697 Initializing NVMe Controllers 00:07:28.697 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:28.697 controller IO queue size 128 less than required 00:07:28.698 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:28.698 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:28.698 Initialization complete. Launching workers. 
00:07:28.698 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28639 00:07:28.698 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28700, failed to submit 62 00:07:28.698 success 28643, unsuccessful 57, failed 0 00:07:28.698 14:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:28.698 14:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.698 14:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:28.698 14:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.698 14:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:28.698 14:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:28.698 14:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # nvmfcleanup 00:07:28.698 14:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:07:28.698 14:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:28.698 14:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:07:28.698 14:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:28.698 14:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:28.698 rmmod nvme_tcp 00:07:28.698 rmmod nvme_fabrics 00:07:28.698 rmmod nvme_keyring 00:07:28.698 14:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:28.698 14:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:07:28.698 14:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:07:28.698 14:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@513 -- # '[' -n 1247074 ']' 00:07:28.698 14:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@514 -- # killprocess 1247074 00:07:28.698 14:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 1247074 ']' 00:07:28.698 14:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 1247074 00:07:28.698 14:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:07:28.698 14:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:28.698 14:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1247074 00:07:28.698 14:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:28.698 14:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:28.698 14:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1247074' 00:07:28.698 killing process with pid 1247074 00:07:28.698 14:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 1247074 00:07:28.698 14:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 1247074 00:07:28.698 14:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:07:28.698 14:23:20 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:07:28.698 14:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:07:28.698 14:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:07:28.698 14:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@787 -- # iptables-save 00:07:28.698 14:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:07:28.698 14:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@787 -- # iptables-restore 00:07:28.698 14:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:28.698 14:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:28.698 14:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:28.698 14:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:28.698 14:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:30.602 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:30.602 00:07:30.602 real 0m7.413s 00:07:30.602 user 0m10.934s 00:07:30.602 sys 0m2.507s 00:07:30.602 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:30.602 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:30.602 ************************************ 00:07:30.602 END TEST nvmf_abort 00:07:30.602 ************************************ 00:07:30.602 14:23:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:30.602 14:23:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:30.602 14:23:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:30.602 14:23:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:30.602 ************************************ 00:07:30.602 START TEST nvmf_ns_hotplug_stress 00:07:30.602 ************************************ 00:07:30.602 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:30.860 * Looking for test storage... 
00:07:30.860 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:30.860 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:30.860 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:30.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.861 --rc genhtml_branch_coverage=1 00:07:30.861 --rc genhtml_function_coverage=1 00:07:30.861 --rc genhtml_legend=1 00:07:30.861 --rc geninfo_all_blocks=1 00:07:30.861 --rc geninfo_unexecuted_blocks=1 00:07:30.861 00:07:30.861 ' 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:30.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.861 --rc genhtml_branch_coverage=1 00:07:30.861 --rc genhtml_function_coverage=1 00:07:30.861 --rc genhtml_legend=1 00:07:30.861 --rc geninfo_all_blocks=1 00:07:30.861 --rc geninfo_unexecuted_blocks=1 00:07:30.861 00:07:30.861 ' 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:30.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.861 --rc genhtml_branch_coverage=1 00:07:30.861 --rc genhtml_function_coverage=1 00:07:30.861 --rc genhtml_legend=1 00:07:30.861 --rc geninfo_all_blocks=1 00:07:30.861 --rc geninfo_unexecuted_blocks=1 00:07:30.861 00:07:30.861 ' 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:30.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.861 --rc genhtml_branch_coverage=1 00:07:30.861 --rc genhtml_function_coverage=1 00:07:30.861 --rc genhtml_legend=1 00:07:30.861 --rc geninfo_all_blocks=1 00:07:30.861 --rc geninfo_unexecuted_blocks=1 00:07:30.861 00:07:30.861 ' 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:30.861 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:30.861 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:30.862 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:07:30.862 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:30.862 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # prepare_net_devs 00:07:30.862 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@434 -- # local -g is_hw=no 00:07:30.862 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # remove_spdk_ns 00:07:30.862 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:30.862 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:30.862 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:30.862 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:07:30.862 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:07:30.862 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:07:30.862 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:32.763 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:32.763 14:23:24 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:32.763 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:32.763 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:32.763 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # is_hw=yes 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:32.763 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:32.764 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:32.764 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:32.764 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:32.764 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:32.764 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:32.764 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:32.764 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:32.764 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:32.764 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:32.764 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:33.022 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:33.022 14:23:24 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:33.022 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:33.022 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:33.022 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:33.022 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:07:33.022 00:07:33.022 --- 10.0.0.2 ping statistics --- 00:07:33.022 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:33.022 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:07:33.022 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:33.022 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:33.022 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:07:33.022 00:07:33.022 --- 10.0.0.1 ping statistics --- 00:07:33.022 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:33.022 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:07:33.022 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:33.022 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # return 0 00:07:33.022 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:07:33.022 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:33.022 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:07:33.022 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:07:33.022 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:33.022 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:07:33.022 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:07:33.022 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:33.022 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:07:33.022 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:33.022 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:33.022 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # nvmfpid=1249423 00:07:33.022 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:33.022 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # waitforlisten 1249423 00:07:33.022 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 1249423 ']' 00:07:33.022 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:33.022 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:33.022 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:33.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:33.022 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:33.022 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:33.022 [2024-11-02 14:23:24.946974] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:33.022 [2024-11-02 14:23:24.947050] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:33.022 [2024-11-02 14:23:25.015974] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:33.280 [2024-11-02 14:23:25.110214] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:33.280 [2024-11-02 14:23:25.110275] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:33.280 [2024-11-02 14:23:25.110293] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:33.280 [2024-11-02 14:23:25.110307] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:33.280 [2024-11-02 14:23:25.110318] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
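Everything up to this point is test-bed plumbing: the two cvl_0_* ports are split between the host and a private network namespace, addressed as 10.0.0.1 (initiator side) and 10.0.0.2 (target side), opened on TCP port 4420, verified with a ping in each direction, and then nvmf_tgt is started inside the namespace on cores 1-3 while the script waits for its RPC socket at /var/tmp/spdk.sock. A condensed sketch of the same steps, using the interface and namespace names from this particular run (the harness assigns them dynamically, so treat them as placeholders):

  # Move the target-side port into its own namespace and address both ends
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # Let NVMe/TCP traffic through and confirm both directions are reachable
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

  # Start the target inside the namespace (-m 0xE pins it to cores 1-3, matching
  # the three reactor notices above) and wait for its RPC socket to come up
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  waitforlisten "$nvmfpid"    # test helper seen in the trace; polls the RPC socket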
00:07:33.280 [2024-11-02 14:23:25.110418] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:33.280 [2024-11-02 14:23:25.110471] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:33.280 [2024-11-02 14:23:25.110474] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:33.280 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:33.280 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:07:33.280 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:07:33.280 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:33.280 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:33.280 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:33.280 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:33.280 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:33.538 [2024-11-02 14:23:25.492189] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:33.538 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:33.795 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:34.052 [2024-11-02 14:23:26.056176] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:34.052 14:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:34.309 14:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:34.568 Malloc0 00:07:34.825 14:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:34.825 Delay0 00:07:35.082 14:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:35.340 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:35.597 NULL1 00:07:35.597 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:35.854 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1249734 00:07:35.854 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:35.855 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1249734 00:07:35.855 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:37.226 Read completed with error (sct=0, sc=11) 00:07:37.226 14:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:37.226 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:37.227 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:37.227 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:37.227 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:37.227 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:37.227 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:37.227 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:37.227 14:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:37.227 14:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:37.484 true 00:07:37.484 14:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1249734 00:07:37.484 14:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:38.418 14:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:38.418 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:38.676 14:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:38.676 14:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:38.933 true 00:07:38.933 14:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1249734 00:07:38.933 14:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:39.191 14:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
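At this point the target has been provisioned entirely over rpc.py (TCP transport with -o -u 8192, subsystem nqn.2016-06.io.spdk:cnode1 plus a discovery listener on 10.0.0.2:4420, a Malloc0-backed Delay0 bdev and a NULL1 null bdev attached as namespaces), spdk_nvme_perf has been launched against it for a 30-second randread run (PERF_PID=1249734), and the hot-plug loop that produces the repeating pattern below has begun: as long as perf is still alive, namespace 1 is removed, Delay0 is re-added, and NULL1 is resized one step larger each pass (1001, 1002, ...). A minimal sketch of that sequence, reconstructed from the xtrace rather than quoted from ns_hotplug_stress.sh:

  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_malloc_create 32 512 -b Malloc0
  $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc bdev_null_create NULL1 1000 512
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

  # Drive I/O for 30 s while namespaces are removed and re-added underneath it
  ./build/bin/spdk_nvme_perf -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 30 -q 128 -w randread -o 512 -Q 1000 &
  PERF_PID=$!

  null_size=1000
  while kill -0 "$PERF_PID"; do          # loop ends once perf exits
      $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
      $rpc bdev_null_resize NULL1 $((++null_size))
  done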
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:39.449 14:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:39.449 14:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:39.706 true 00:07:39.706 14:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1249734 00:07:39.706 14:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.638 14:23:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:40.638 14:23:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:40.638 14:23:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:40.896 true 00:07:40.896 14:23:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1249734 00:07:40.896 14:23:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:41.461 14:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:41.461 14:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:41.461 14:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:41.718 true 00:07:41.718 14:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1249734 00:07:41.718 14:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:41.976 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:42.540 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:42.540 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:42.540 true 00:07:42.540 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1249734 00:07:42.540 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.473 14:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:43.730 14:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:43.730 14:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:43.987 true 00:07:43.987 14:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1249734 00:07:43.987 14:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:44.245 14:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.502 14:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:44.502 14:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:44.759 true 00:07:44.759 14:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1249734 00:07:44.759 14:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.017 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:45.274 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:45.274 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:45.532 true 00:07:45.532 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1249734 00:07:45.532 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:46.903 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:46.903 14:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:46.903 14:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:46.903 14:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 
1010 00:07:47.161 true 00:07:47.161 14:23:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1249734 00:07:47.161 14:23:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:47.418 14:23:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.676 14:23:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:47.676 14:23:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:47.933 true 00:07:47.933 14:23:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1249734 00:07:47.933 14:23:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.190 14:23:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:48.447 14:23:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:48.447 14:23:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:48.704 true 00:07:48.962 14:23:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1249734 00:07:48.962 14:23:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:49.968 14:23:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:49.968 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:49.968 14:23:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:49.968 14:23:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:50.225 true 00:07:50.225 14:23:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1249734 00:07:50.225 14:23:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.483 14:23:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:50.741 14:23:42 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:50.741 14:23:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:50.998 true 00:07:51.256 14:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1249734 00:07:51.256 14:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.513 14:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:51.770 14:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:51.770 14:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:52.028 true 00:07:52.028 14:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1249734 00:07:52.028 14:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.961 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:52.961 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:52.961 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:53.218 14:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:53.218 14:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:53.476 true 00:07:53.476 14:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1249734 00:07:53.476 14:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.733 14:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:53.991 14:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:53.991 14:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:54.248 true 00:07:54.248 14:23:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1249734 00:07:54.248 14:23:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.180 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:55.180 14:23:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:55.180 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:55.180 14:23:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:55.180 14:23:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:55.437 true 00:07:55.437 14:23:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1249734 00:07:55.437 14:23:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.694 14:23:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:55.952 14:23:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:55.952 14:23:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:56.516 true 00:07:56.517 14:23:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1249734 00:07:56.517 14:23:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.081 14:23:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:57.645 14:23:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:57.645 14:23:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:57.645 true 00:07:57.645 14:23:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1249734 00:07:57.645 14:23:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.901 14:23:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:58.467 14:23:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:58.467 14:23:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:58.467 true 00:07:58.724 14:23:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1249734 00:07:58.724 14:23:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.982 14:23:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:59.239 14:23:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:59.239 14:23:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:59.497 true 00:07:59.497 14:23:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1249734 00:07:59.497 14:23:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.429 14:23:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:00.686 14:23:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:08:00.686 14:23:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:00.943 true 00:08:00.943 14:23:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1249734 00:08:00.944 14:23:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.201 14:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:01.459 14:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:08:01.459 14:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:01.716 true 00:08:01.716 14:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1249734 00:08:01.716 14:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.973 14:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:02.230 14:23:54 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:08:02.230 14:23:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:08:02.488 true 00:08:02.488 14:23:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1249734 00:08:02.488 14:23:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.420 14:23:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:03.678 14:23:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:08:03.678 14:23:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:08:03.935 true 00:08:03.935 14:23:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1249734 00:08:03.935 14:23:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.192 14:23:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:04.450 14:23:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:08:04.450 14:23:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:08:04.707 true 00:08:04.707 14:23:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1249734 00:08:04.707 14:23:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.965 14:23:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:05.222 14:23:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:08:05.222 14:23:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:08:05.479 true 00:08:05.479 14:23:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1249734 00:08:05.479 14:23:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.412 Initializing NVMe Controllers 00:08:06.412 Attached to NVMe over 
Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:06.412 Controller IO queue size 128, less than required. 00:08:06.412 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:06.412 Controller IO queue size 128, less than required. 00:08:06.412 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:06.412 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:06.412 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:08:06.412 Initialization complete. Launching workers. 00:08:06.412 ======================================================== 00:08:06.412 Latency(us) 00:08:06.412 Device Information : IOPS MiB/s Average min max 00:08:06.412 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 531.73 0.26 105937.91 2787.10 1033072.93 00:08:06.412 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 9092.78 4.44 14077.35 2399.86 446860.72 00:08:06.412 ======================================================== 00:08:06.412 Total : 9624.52 4.70 19152.43 2399.86 1033072.93 00:08:06.412 00:08:06.412 14:23:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:06.670 14:23:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:08:06.670 14:23:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:08:06.927 true 00:08:06.927 14:23:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1249734 00:08:06.927 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1249734) - No such process 00:08:06.927 14:23:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1249734 00:08:06.927 14:23:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.185 14:23:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:07.442 14:23:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:08:07.442 14:23:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:08:07.442 14:23:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:08:07.443 14:23:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:07.443 14:23:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:08:07.703 null0 00:08:07.703 14:23:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:07.703 14:23:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:07.703 14:23:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:08:07.960 null1 00:08:08.217 14:24:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:08.217 14:24:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:08.217 14:24:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:08:08.475 null2 00:08:08.475 14:24:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:08.475 14:24:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:08.475 14:24:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:08:08.733 null3 00:08:08.733 14:24:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:08.733 14:24:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:08.733 14:24:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:08.991 null4 00:08:08.991 14:24:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:08.991 14:24:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:08.991 14:24:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:09.249 null5 00:08:09.249 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:09.249 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:09.249 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:09.506 null6 00:08:09.506 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:09.506 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:09.506 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:09.765 null7 00:08:09.765 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:09.765 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:09.765 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 
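Once perf's 30-second run ends, the kill -0 probe fails (the "No such process" message above), the resize loop exits, wait reaps the perf process, and namespaces 1 and 2 are removed. The second phase then creates eight null bdevs (null0 through null7, each via "bdev_null_create nullN 100 4096") and, in the trace that follows, spawns eight concurrent add_remove workers, one per namespace ID, each adding and removing its namespace ten times before the script waits on all of the worker PIDs. A sketch of that phase, again reconstructed from the xtrace entries (@14-@18 and @58-@66) rather than quoted from the script:

  rpc=./scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  add_remove() {
      local nsid=$1 bdev=$2
      for ((i = 0; i < 10; i++)); do
          $rpc nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"
          $rpc nvmf_subsystem_remove_ns "$nqn" "$nsid"
      done
  }

  nthreads=8
  pids=()
  for ((i = 0; i < nthreads; i++)); do
      $rpc bdev_null_create "null$i" 100 4096
  done
  for ((i = 0; i < nthreads; i++)); do
      add_remove $((i + 1)) "null$i" &   # eight hot-plug workers racing on one subsystem
      pids+=($!)
  done
  wait "${pids[@]}"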
00:08:09.765 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:09.765 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:09.765 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:09.765 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:09.765 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:09.765 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:09.765 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:09.765 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.765 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:09.765 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:09.765 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:09.765 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:09.765 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:09.765 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:09.765 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:09.765 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.765 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:09.765 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:09.765 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:09.765 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:09.765 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:09.765 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:09.765 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:09.765 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.765 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:09.765 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:09.765 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:09.765 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:09.765 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:09.765 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:09.765 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:09.765 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.765 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:09.765 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:09.765 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:09.765 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:09.765 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:09.765 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:09.765 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:09.765 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.765 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:09.765 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:09.765 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:09.765 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:09.765 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:09.765 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:09.765 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:09.766 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.766 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:09.766 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:09.766 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:09.766 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:09.766 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:09.766 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:09.766 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:09.766 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.766 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:09.766 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:09.766 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:09.766 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:09.766 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:09.766 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:09.766 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:09.766 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1254040 1254041 1254043 1254045 1254047 1254049 1254051 1254053 00:08:09.766 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.766 14:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:10.024 14:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:10.024 14:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:10.024 14:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:10.024 14:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:10.024 14:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:10.024 14:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:10.024 14:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:10.024 14:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:10.590 14:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.590 14:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.590 14:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:10.590 14:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.591 14:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.591 14:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:10.591 14:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.591 14:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.591 14:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:10.591 14:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.591 14:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.591 14:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:10.591 14:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.591 14:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.591 14:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:10.591 14:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.591 14:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.591 14:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:10.591 14:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.591 14:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.591 14:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:10.591 14:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.591 14:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.591 14:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:10.849 14:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:10.849 14:24:02 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:10.849 14:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:10.849 14:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:10.849 14:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:10.849 14:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:10.849 14:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:10.849 14:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:11.111 14:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.111 14:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.111 14:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:11.111 14:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.111 14:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.111 14:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:11.111 14:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.111 14:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.111 14:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:11.111 14:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.111 14:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.112 14:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:11.112 14:24:02 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.112 14:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.112 14:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:11.112 14:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.112 14:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.112 14:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.112 14:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.112 14:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:11.112 14:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:11.112 14:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.112 14:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.112 14:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:11.371 14:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:11.371 14:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:11.371 14:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:11.371 14:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:11.371 14:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:11.371 14:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:11.371 14:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:11.371 14:24:03 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:11.629 14:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.629 14:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.629 14:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:11.629 14:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.629 14:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.629 14:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:11.629 14:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.629 14:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.629 14:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:11.629 14:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.629 14:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.629 14:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:11.629 14:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.629 14:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.629 14:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:11.629 14:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.629 14:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.629 14:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:11.629 14:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.629 14:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.629 14:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 
00:08:11.629 14:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.629 14:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.629 14:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:11.887 14:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:11.887 14:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:11.887 14:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:11.887 14:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:11.887 14:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:11.887 14:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:11.887 14:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:11.887 14:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:12.145 14:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.145 14:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.145 14:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:12.145 14:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.145 14:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.145 14:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:12.145 14:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.145 14:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.145 14:24:04 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:12.145 14:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.145 14:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.145 14:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:12.145 14:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.145 14:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.145 14:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:12.145 14:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.145 14:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.145 14:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:12.145 14:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.145 14:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.145 14:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:12.403 14:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.403 14:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.403 14:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:12.661 14:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:12.661 14:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:12.661 14:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.661 14:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:12.661 14:24:04 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:12.661 14:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:12.661 14:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:12.661 14:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:12.919 14:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.919 14:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.919 14:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:12.919 14:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.919 14:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.919 14:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:12.919 14:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.919 14:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.919 14:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:12.919 14:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.919 14:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.919 14:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.919 14:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:12.919 14:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.919 14:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:12.919 14:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.919 14:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.919 14:24:04 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:12.919 14:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.919 14:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.919 14:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:12.919 14:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.919 14:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.919 14:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:13.177 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:13.177 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:13.177 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:13.177 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:13.177 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:13.177 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:13.178 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:13.178 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:13.501 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.501 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.501 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:13.501 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.501 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.501 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:13.501 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.501 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.501 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:13.501 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.501 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.501 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:13.501 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.501 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.501 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:13.501 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.501 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.501 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:13.501 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.501 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.501 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:13.501 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.501 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.501 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:13.766 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:13.766 14:24:05 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:13.766 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:13.766 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:13.766 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:13.766 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:13.766 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:13.766 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:14.024 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.024 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.024 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:14.024 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.024 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.024 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:14.024 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.024 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.024 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:14.024 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.024 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.024 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:14.024 14:24:06 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.024 14:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.024 14:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:14.024 14:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.024 14:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.024 14:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.024 14:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.024 14:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:14.024 14:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:14.024 14:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.024 14:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.024 14:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:14.282 14:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:14.282 14:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:14.282 14:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:14.282 14:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:14.282 14:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:14.282 14:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:14.282 14:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:14.282 14:24:06 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:14.540 14:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.798 14:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.798 14:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:14.798 14:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.798 14:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.798 14:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:14.798 14:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.798 14:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.798 14:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:14.798 14:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.798 14:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.798 14:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:14.798 14:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.798 14:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.798 14:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.798 14:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.798 14:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:14.798 14:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:14.798 14:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.798 14:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.798 14:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 
00:08:14.798 14:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.798 14:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.798 14:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:15.056 14:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:15.056 14:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:15.056 14:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:15.056 14:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:15.056 14:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:15.056 14:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:15.056 14:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:15.056 14:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:15.314 14:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.314 14:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.314 14:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:15.314 14:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.314 14:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.314 14:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:15.314 14:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.314 14:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.314 14:24:07 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.314 14:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:15.314 14:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.314 14:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:15.314 14:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.314 14:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.314 14:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:15.314 14:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.314 14:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.314 14:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:15.314 14:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.314 14:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.314 14:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:15.314 14:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.314 14:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.314 14:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:15.572 14:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:15.572 14:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:15.572 14:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:15.572 14:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:15.572 14:24:07 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:15.572 14:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:15.572 14:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:15.572 14:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:15.830 14:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.830 14:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.830 14:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.830 14:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.830 14:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.830 14:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.830 14:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.830 14:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.830 14:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.830 14:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.830 14:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.830 14:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.830 14:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.830 14:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.830 14:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.830 14:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.830 14:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:15.830 14:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:15.830 14:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # nvmfcleanup 00:08:15.830 14:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:08:15.830 14:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:15.830 14:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:08:15.830 14:24:07 
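The ns_hotplug_stress trace above (script lines @16, @17 and @18) shows ten iterations in which all eight namespaces are attached to nqn.2016-06.io.spdk:cnode1 in shuffled order and then detached again, with nsid n always backed by bdev null(n-1), before the trap is cleared and nvmftestfini begins tearing the target down. A minimal sketch of that loop, reconstructed from the trace rather than copied from the SPDK source (the rpc.py path and bdev names are the ones visible in the log), looks roughly like this:

  #!/usr/bin/env bash
  # Sketch of the add/remove stress loop seen in the trace above.
  # Assumes the null0..null7 bdevs and the cnode1 subsystem were created
  # earlier in the test, as in the preceding part of this log.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  for (( i = 0; i < 10; ++i )); do
      # attach namespaces 1..8 in random order, nsid n backed by null$((n-1))
      for n in $(shuf -e {1..8}); do
          "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"
      done
      # detach them again, also in random order
      for n in $(shuf -e {1..8}); do
          "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n"
      done
  done

The point of the shuffled order is that every iteration exercises a different attach/detach sequence, which is why the nsid ordering differs between the rounds logged above.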
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:15.830 14:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:15.830 rmmod nvme_tcp 00:08:15.830 rmmod nvme_fabrics 00:08:15.830 rmmod nvme_keyring 00:08:16.088 14:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:16.088 14:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:08:16.088 14:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:08:16.088 14:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@513 -- # '[' -n 1249423 ']' 00:08:16.088 14:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # killprocess 1249423 00:08:16.088 14:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 1249423 ']' 00:08:16.088 14:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 1249423 00:08:16.088 14:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:08:16.088 14:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:16.088 14:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1249423 00:08:16.088 14:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:16.088 14:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:16.088 14:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1249423' 00:08:16.088 killing process with pid 1249423 00:08:16.088 14:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 1249423 00:08:16.088 14:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 1249423 00:08:16.347 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:08:16.347 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:08:16.347 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:08:16.347 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:08:16.347 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # iptables-save 00:08:16.347 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:08:16.347 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # iptables-restore 00:08:16.347 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:16.347 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:16.347 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:16.347 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 
15> /dev/null' 00:08:16.347 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:18.250 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:18.250 00:08:18.250 real 0m47.603s 00:08:18.250 user 3m42.353s 00:08:18.250 sys 0m15.550s 00:08:18.250 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:18.250 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:18.250 ************************************ 00:08:18.250 END TEST nvmf_ns_hotplug_stress 00:08:18.250 ************************************ 00:08:18.250 14:24:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:18.250 14:24:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:18.250 14:24:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:18.250 14:24:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:18.250 ************************************ 00:08:18.250 START TEST nvmf_delete_subsystem 00:08:18.250 ************************************ 00:08:18.250 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:18.508 * Looking for test storage... 00:08:18.508 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:18.508 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:18.508 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lcov --version 00:08:18.508 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:18.508 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:18.508 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:18.509 14:24:10 
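The entries just above close out nvmf_ns_hotplug_stress: nvmftestfini unloads nvme-tcp and nvme-fabrics (the rmmod lines), killprocess stops the target app with pid 1249423, the cvl_0_1 address is flushed, the 0m47.603s timing summary and END TEST banner are printed, and run_test immediately chains into nvmf_delete_subsystem with delete_subsystem.sh --transport=tcp; the trace of that next test (the lcov probe and nvmf/common.sh setup) continues below. A hedged sketch of the teardown-then-chain pattern, with helper names reconstructed from the trace and not taken verbatim from nvmf/common.sh or autotest_common.sh:

  # Sketch only; the real helpers (nvmftestfini, killprocess, run_test,
  # _remove_spdk_ns) do more than shown here.
  teardown_sketch() {
      local pid=$1
      modprobe -v -r nvme-tcp     || true   # logs rmmod nvme_tcp / nvme_fabrics / nvme_keyring
      modprobe -v -r nvme-fabrics || true
      if kill -0 "$pid" 2>/dev/null; then   # is the target app still running?
          kill "$pid"
          while kill -0 "$pid" 2>/dev/null; do sleep 0.5; done
      fi
      ip -4 addr flush cvl_0_1 2>/dev/null || true   # interface name taken from the log
      # network-namespace removal is handled by _remove_spdk_ns in the real scripts
  }

  run_test_sketch() {   # hypothetical stand-in for autotest_common's run_test
      local name=$1; shift
      echo "START TEST $name"
      time "$@"
      echo "END TEST $name"
  }
  # e.g. run_test_sketch nvmf_delete_subsystem ./delete_subsystem.sh --transport=tcp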
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:18.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.509 --rc genhtml_branch_coverage=1 00:08:18.509 --rc genhtml_function_coverage=1 00:08:18.509 --rc genhtml_legend=1 00:08:18.509 --rc geninfo_all_blocks=1 00:08:18.509 --rc geninfo_unexecuted_blocks=1 00:08:18.509 00:08:18.509 ' 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:18.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.509 --rc genhtml_branch_coverage=1 00:08:18.509 --rc genhtml_function_coverage=1 00:08:18.509 --rc genhtml_legend=1 00:08:18.509 --rc geninfo_all_blocks=1 00:08:18.509 --rc geninfo_unexecuted_blocks=1 00:08:18.509 00:08:18.509 ' 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:18.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.509 --rc genhtml_branch_coverage=1 00:08:18.509 --rc genhtml_function_coverage=1 00:08:18.509 --rc genhtml_legend=1 00:08:18.509 --rc geninfo_all_blocks=1 00:08:18.509 --rc geninfo_unexecuted_blocks=1 00:08:18.509 00:08:18.509 ' 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:18.509 --rc 
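The scripts/common.sh trace above is the coverage setup for the new test: autotest_common.sh captures lcov --version, takes the last field with awk, and runs lt 1.15 2, which cmp_versions answers by splitting both versions on '.', '-' and ':' and comparing the components numerically; based on the result it exports LCOV_OPTS and LCOV, whose multi-line values continue in the trace below. A simplified reconstruction of that comparison (not the exact SPDK code):

  # Sketch of the component-wise version comparison traced above.
  version_lt() {                      # returns 0 (true) if $1 < $2
      local -a v1 v2
      local i n
      IFS=.-: read -ra v1 <<< "$1"
      IFS=.-: read -ra v2 <<< "$2"
      n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for (( i = 0; i < n; i++ )); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1                        # equal, so not less-than
  }
  # the trace effectively evaluates: version_lt 1.15 "$lcov_version" style checks
  # before choosing the branch/function coverage flags exported below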
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.509 --rc genhtml_branch_coverage=1 00:08:18.509 --rc genhtml_function_coverage=1 00:08:18.509 --rc genhtml_legend=1 00:08:18.509 --rc geninfo_all_blocks=1 00:08:18.509 --rc geninfo_unexecuted_blocks=1 00:08:18.509 00:08:18.509 ' 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:18.509 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # prepare_net_devs 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@434 -- # local -g is_hw=no 00:08:18.509 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # remove_spdk_ns 00:08:18.510 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:18.510 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:18.510 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:18.510 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:08:18.510 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:08:18.510 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:08:18.510 14:24:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:21.041 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:08:21.041 14:24:12 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:21.041 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:21.041 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:21.041 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 
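The loop traced above is how nvmf/common.sh picks the NICs for this run: it matches supported PCI IDs (here two Intel e810 ports, 8086:159b) and resolves each PCI address to its kernel interface through sysfs, yielding cvl_0_0 and cvl_0_1. A minimal standalone sketch of that lookup follows; the 0x159b filter and the sysfs path are taken from the trace, and this is an illustration rather than the actual gather_supported_nvmf_pci_devs implementation.

#!/usr/bin/env bash
# Sketch: map supported test NICs (Intel e810, 8086:159b in this log) to their
# net interface names via sysfs, mirroring the lookup in the trace above.
shopt -s nullglob
intel=0x8086
for pci in /sys/bus/pci/devices/*; do
    [[ $(<"$pci/vendor") == "$intel" && $(<"$pci/device") == 0x159b ]] || continue
    for net in "$pci"/net/*; do
        echo "Found net devices under ${pci##*/}: ${net##*/}"
    done
done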
00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # is_hw=yes 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:21.041 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:21.042 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:21.042 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:21.042 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:21.042 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:21.042 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:21.042 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:21.042 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:21.042 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:21.042 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:21.042 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:08:21.042 00:08:21.042 --- 10.0.0.2 ping statistics --- 00:08:21.042 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:21.042 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:08:21.042 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:21.042 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:21.042 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:08:21.042 00:08:21.042 --- 10.0.0.1 ping statistics --- 00:08:21.042 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:21.042 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:08:21.042 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:21.042 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # return 0 00:08:21.042 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:08:21.042 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:21.042 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:08:21.042 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:08:21.042 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:21.042 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:08:21.042 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:08:21.042 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:21.042 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:08:21.042 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:21.042 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:21.042 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # nvmfpid=1257454 00:08:21.042 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:21.042 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # waitforlisten 1257454 00:08:21.042 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 1257454 ']' 00:08:21.042 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.042 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:21.042 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:08:21.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.042 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:21.042 14:24:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:21.042 [2024-11-02 14:24:12.784919] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:21.042 [2024-11-02 14:24:12.785021] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:21.042 [2024-11-02 14:24:12.850070] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:21.042 [2024-11-02 14:24:12.939280] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:21.042 [2024-11-02 14:24:12.939347] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:21.042 [2024-11-02 14:24:12.939375] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:21.042 [2024-11-02 14:24:12.939387] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:21.042 [2024-11-02 14:24:12.939397] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:21.042 [2024-11-02 14:24:12.939459] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:21.042 [2024-11-02 14:24:12.939464] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.042 14:24:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:21.042 14:24:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:08:21.042 14:24:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:08:21.042 14:24:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:21.042 14:24:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:21.042 14:24:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:21.042 14:24:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:21.042 14:24:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.042 14:24:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:21.042 [2024-11-02 14:24:13.084161] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:21.042 14:24:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.042 14:24:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:21.042 14:24:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.042 14:24:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@10 -- # set +x 00:08:21.300 14:24:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.300 14:24:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:21.300 14:24:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.300 14:24:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:21.300 [2024-11-02 14:24:13.100401] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:21.300 14:24:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.300 14:24:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:21.300 14:24:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.300 14:24:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:21.300 NULL1 00:08:21.300 14:24:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.300 14:24:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:21.300 14:24:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.300 14:24:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:21.300 Delay0 00:08:21.300 14:24:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.300 14:24:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:21.300 14:24:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.300 14:24:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:21.300 14:24:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.300 14:24:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1257479 00:08:21.300 14:24:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:21.300 14:24:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:21.300 [2024-11-02 14:24:13.175272] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
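The rpc_cmd calls traced above build the target that the delete below races against: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420, and a null bdev wrapped in a delay bdev so queued I/O is still outstanding when the subsystem is removed. A hedged sketch of the same setup issued directly with SPDK's scripts/rpc.py (arguments copied from the trace; the SPDK checkout path and an already-running nvmf_tgt are assumptions):

#!/usr/bin/env bash
# Sketch only: reproduce the target setup from the trace with plain rpc.py.
SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}  # assumed checkout path
rpc() { "$SPDK_DIR/scripts/rpc.py" "$@"; }

rpc nvmf_create_transport -t tcp -o -u 8192
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Null backend behind a delay bdev keeps I/O in flight long enough for the
# delete_subsystem below to hit active commands.
rpc bdev_null_create NULL1 1000 512
rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The load in the log is then generated with spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4, and nvmf_delete_subsystem is issued while that run is still active, which is what produces the aborted completions that follow.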
00:08:23.196 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:23.196 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.196 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Write completed with error (sct=0, sc=8) 00:08:23.455 starting I/O failed: -6 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Write completed with error (sct=0, sc=8) 00:08:23.455 starting I/O failed: -6 00:08:23.455 Write completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 starting I/O failed: -6 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 starting I/O failed: -6 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Write completed with error (sct=0, sc=8) 00:08:23.455 Write completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 starting I/O failed: -6 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Write completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 starting I/O failed: -6 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 starting I/O failed: -6 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Write completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 starting I/O failed: -6 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 starting I/O failed: -6 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Write completed with error (sct=0, sc=8) 00:08:23.455 Write completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 starting I/O failed: -6 00:08:23.455 Write completed with error (sct=0, sc=8) 00:08:23.455 Write completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Write completed with error (sct=0, sc=8) 00:08:23.455 starting I/O failed: -6 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 [2024-11-02 14:24:15.351995] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1672ed0 is same with the state(6) to be set 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Write completed with error (sct=0, sc=8) 00:08:23.455 Write completed with error (sct=0, sc=8) 00:08:23.455 Read 
completed with error (sct=0, sc=8) 00:08:23.455 Write completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Write completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Write completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Write completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Write completed with error (sct=0, sc=8) 00:08:23.455 Write completed with error (sct=0, sc=8) 00:08:23.455 Write completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Write completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Write completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Write completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Write completed with error (sct=0, sc=8) 00:08:23.455 Write completed with error (sct=0, sc=8) 00:08:23.455 Write completed with error (sct=0, sc=8) 00:08:23.455 Write completed with error (sct=0, sc=8) 00:08:23.455 Write completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Write completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Write completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Write completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Write completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Write completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Write completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 starting I/O failed: -6 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 starting I/O failed: -6 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Write completed with error (sct=0, sc=8) 00:08:23.455 Write completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 starting I/O failed: -6 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, 
sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 starting I/O failed: -6 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Write completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 starting I/O failed: -6 00:08:23.455 Write completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 starting I/O failed: -6 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Write completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 starting I/O failed: -6 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 starting I/O failed: -6 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Write completed with error (sct=0, sc=8) 00:08:23.455 Write completed with error (sct=0, sc=8) 00:08:23.455 starting I/O failed: -6 00:08:23.455 Write completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Write completed with error (sct=0, sc=8) 00:08:23.455 starting I/O failed: -6 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.455 Read completed with error (sct=0, sc=8) 00:08:23.456 Read completed with error (sct=0, sc=8) 00:08:23.456 starting I/O failed: -6 00:08:23.456 [2024-11-02 14:24:15.353018] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9178000c00 is same with the state(6) to be set 00:08:23.456 Read completed with error (sct=0, sc=8) 00:08:23.456 Read completed with error (sct=0, sc=8) 00:08:23.456 Write completed with error (sct=0, sc=8) 00:08:23.456 Read completed with error (sct=0, sc=8) 00:08:23.456 Read completed with error (sct=0, sc=8) 00:08:23.456 Write completed with error (sct=0, sc=8) 00:08:23.456 Write completed with error (sct=0, sc=8) 00:08:23.456 Read completed with error (sct=0, sc=8) 00:08:23.456 Read completed with error (sct=0, sc=8) 00:08:23.456 Read completed with error (sct=0, sc=8) 00:08:23.456 Write completed with error (sct=0, sc=8) 00:08:23.456 Read completed with error (sct=0, sc=8) 00:08:23.456 Write completed with error (sct=0, sc=8) 00:08:23.456 Read completed with error (sct=0, sc=8) 00:08:23.456 Read completed with error (sct=0, sc=8) 00:08:23.456 Read completed with error (sct=0, sc=8) 00:08:23.456 Read completed with error (sct=0, sc=8) 00:08:23.456 Write completed with error (sct=0, sc=8) 00:08:23.456 Read completed with error (sct=0, sc=8) 00:08:23.456 Write completed with error (sct=0, sc=8) 00:08:23.456 Read completed with error (sct=0, sc=8) 00:08:23.456 Write completed with error (sct=0, sc=8) 00:08:23.456 Write completed with error (sct=0, sc=8) 00:08:23.456 Read completed with error (sct=0, sc=8) 00:08:23.456 Write completed with error (sct=0, sc=8) 00:08:23.456 Write completed with error (sct=0, sc=8) 00:08:23.456 Write completed with error (sct=0, sc=8) 00:08:23.456 Read completed with error (sct=0, sc=8) 00:08:23.456 Read completed with error 
(sct=0, sc=8) 00:08:23.456 Read completed with error (sct=0, sc=8) 00:08:23.456 Read completed with error (sct=0, sc=8) 00:08:23.456 Write completed with error (sct=0, sc=8) 00:08:23.456 Read completed with error (sct=0, sc=8) 00:08:23.456 Read completed with error (sct=0, sc=8) 00:08:23.456 Read completed with error (sct=0, sc=8) 00:08:23.456 Read completed with error (sct=0, sc=8) 00:08:23.456 Write completed with error (sct=0, sc=8) 00:08:23.456 Read completed with error (sct=0, sc=8) 00:08:23.456 Read completed with error (sct=0, sc=8) 00:08:23.456 Read completed with error (sct=0, sc=8) 00:08:23.456 Read completed with error (sct=0, sc=8) 00:08:23.456 Read completed with error (sct=0, sc=8) 00:08:23.456 Read completed with error (sct=0, sc=8) 00:08:23.456 Read completed with error (sct=0, sc=8) 00:08:23.456 Read completed with error (sct=0, sc=8) 00:08:23.456 Read completed with error (sct=0, sc=8) 00:08:23.456 Read completed with error (sct=0, sc=8) 00:08:23.456 Write completed with error (sct=0, sc=8) 00:08:23.456 Write completed with error (sct=0, sc=8) 00:08:23.456 Read completed with error (sct=0, sc=8) 00:08:23.456 Write completed with error (sct=0, sc=8) 00:08:23.456 Read completed with error (sct=0, sc=8) 00:08:24.389 [2024-11-02 14:24:16.319508] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1670d00 is same with the state(6) to be set 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Write completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Write completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Write completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Write completed with error (sct=0, sc=8) 00:08:24.389 Write completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 [2024-11-02 14:24:16.353554] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16735c0 is same with the state(6) to be set 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Write completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Write completed with error (sct=0, sc=8) 00:08:24.389 Write completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Write completed with error (sct=0, sc=8) 
00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Write completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Write completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Write completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 [2024-11-02 14:24:16.354999] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f917800cfe0 is same with the state(6) to be set 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Write completed with error (sct=0, sc=8) 00:08:24.389 Write completed with error (sct=0, sc=8) 00:08:24.389 Write completed with error (sct=0, sc=8) 00:08:24.389 Write completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Write completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Write completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Write completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Write completed with error (sct=0, sc=8) 00:08:24.389 Write completed with error (sct=0, sc=8) 00:08:24.389 [2024-11-02 14:24:16.355575] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f917800d640 is same with the state(6) to be set 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Write completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Write completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Write completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Write completed with error (sct=0, sc=8) 
00:08:24.389 Write completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Write completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 Write completed with error (sct=0, sc=8) 00:08:24.389 Write completed with error (sct=0, sc=8) 00:08:24.389 Read completed with error (sct=0, sc=8) 00:08:24.389 [2024-11-02 14:24:16.356013] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16730b0 is same with the state(6) to be set 00:08:24.389 Initializing NVMe Controllers 00:08:24.389 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:24.389 Controller IO queue size 128, less than required. 00:08:24.389 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:24.389 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:24.389 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:24.389 Initialization complete. Launching workers. 00:08:24.389 ======================================================== 00:08:24.389 Latency(us) 00:08:24.389 Device Information : IOPS MiB/s Average min max 00:08:24.389 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 168.32 0.08 899207.28 522.83 1011387.41 00:08:24.389 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 161.87 0.08 990518.66 384.65 2003250.76 00:08:24.389 ======================================================== 00:08:24.389 Total : 330.19 0.16 943970.46 384.65 2003250.76 00:08:24.389 00:08:24.389 [2024-11-02 14:24:16.356884] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1670d00 (9): Bad file descriptor 00:08:24.390 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:08:24.390 14:24:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.390 14:24:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:08:24.390 14:24:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1257479 00:08:24.390 14:24:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:24.955 14:24:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:24.955 14:24:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1257479 00:08:24.955 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1257479) - No such process 00:08:24.955 14:24:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1257479 00:08:24.955 14:24:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:08:24.955 14:24:16 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1257479 00:08:24.955 14:24:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:08:24.955 14:24:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.955 14:24:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:08:24.955 14:24:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.955 14:24:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 1257479 00:08:24.955 14:24:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:08:24.955 14:24:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:24.955 14:24:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:24.955 14:24:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:24.955 14:24:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:24.955 14:24:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.955 14:24:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:24.955 14:24:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.955 14:24:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:24.955 14:24:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.955 14:24:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:24.955 [2024-11-02 14:24:16.883640] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:24.955 14:24:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.955 14:24:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:24.955 14:24:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.955 14:24:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:24.955 14:24:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.955 14:24:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1257888 00:08:24.955 14:24:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:24.955 14:24:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1257888 00:08:24.955 14:24:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:24.955 14:24:16 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:24.955 [2024-11-02 14:24:16.948627] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:08:25.521 14:24:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:25.521 14:24:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1257888 00:08:25.521 14:24:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:26.086 14:24:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:26.086 14:24:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1257888 00:08:26.086 14:24:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:26.651 14:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:26.651 14:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1257888 00:08:26.651 14:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:26.908 14:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:26.908 14:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1257888 00:08:26.908 14:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:27.473 14:24:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:27.473 14:24:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1257888 00:08:27.473 14:24:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:28.044 14:24:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:28.045 14:24:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1257888 00:08:28.045 14:24:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:28.045 Initializing NVMe Controllers 00:08:28.045 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:28.045 Controller IO queue size 128, less than required. 00:08:28.045 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:28.045 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:28.045 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:28.045 Initialization complete. Launching workers. 
00:08:28.045 ======================================================== 00:08:28.045 Latency(us) 00:08:28.045 Device Information : IOPS MiB/s Average min max 00:08:28.045 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003775.83 1000261.64 1011949.23 00:08:28.045 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005247.64 1000278.26 1041066.76 00:08:28.045 ======================================================== 00:08:28.045 Total : 256.00 0.12 1004511.73 1000261.64 1041066.76 00:08:28.045 00:08:28.611 14:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:28.611 14:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1257888 00:08:28.611 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1257888) - No such process 00:08:28.611 14:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1257888 00:08:28.611 14:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:28.611 14:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:08:28.611 14:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # nvmfcleanup 00:08:28.611 14:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:08:28.611 14:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:28.611 14:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:08:28.611 14:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:28.611 14:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:28.611 rmmod nvme_tcp 00:08:28.611 rmmod nvme_fabrics 00:08:28.611 rmmod nvme_keyring 00:08:28.611 14:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:28.611 14:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:08:28.611 14:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:08:28.611 14:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@513 -- # '[' -n 1257454 ']' 00:08:28.611 14:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # killprocess 1257454 00:08:28.611 14:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 1257454 ']' 00:08:28.611 14:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 1257454 00:08:28.611 14:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:08:28.611 14:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:28.611 14:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1257454 00:08:28.611 14:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:28.611 14:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:08:28.611 14:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1257454' 00:08:28.611 killing process with pid 1257454 00:08:28.611 14:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 1257454 00:08:28.611 14:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 1257454 00:08:28.870 14:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:08:28.870 14:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:08:28.870 14:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:08:28.870 14:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:08:28.870 14:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # iptables-save 00:08:28.870 14:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:08:28.870 14:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # iptables-restore 00:08:28.870 14:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:28.870 14:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:28.870 14:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:28.870 14:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:28.870 14:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:30.774 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:30.774 00:08:30.774 real 0m12.502s 00:08:30.774 user 0m28.041s 00:08:30.774 sys 0m2.973s 00:08:30.774 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:30.774 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:30.774 ************************************ 00:08:30.774 END TEST nvmf_delete_subsystem 00:08:30.774 ************************************ 00:08:30.774 14:24:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:30.774 14:24:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:30.774 14:24:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:30.774 14:24:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:31.032 ************************************ 00:08:31.032 START TEST nvmf_host_management 00:08:31.032 ************************************ 00:08:31.032 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:31.032 * Looking for test storage... 
00:08:31.032 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:31.032 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:31.032 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:08:31.033 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:31.033 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:31.033 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:31.033 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:31.033 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:31.033 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:31.033 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:31.033 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:31.033 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:31.033 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:31.033 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:31.033 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:31.033 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:31.033 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:31.033 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:31.033 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:31.033 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:31.033 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:31.033 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:31.033 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:31.033 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:31.033 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:31.033 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:31.033 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:31.033 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:31.033 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:31.033 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:31.033 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:31.033 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:31.033 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:31.033 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:31.033 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:31.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.033 --rc genhtml_branch_coverage=1 00:08:31.033 --rc genhtml_function_coverage=1 00:08:31.033 --rc genhtml_legend=1 00:08:31.033 --rc geninfo_all_blocks=1 00:08:31.033 --rc geninfo_unexecuted_blocks=1 00:08:31.033 00:08:31.033 ' 00:08:31.033 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:31.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.033 --rc genhtml_branch_coverage=1 00:08:31.033 --rc genhtml_function_coverage=1 00:08:31.033 --rc genhtml_legend=1 00:08:31.033 --rc geninfo_all_blocks=1 00:08:31.033 --rc geninfo_unexecuted_blocks=1 00:08:31.033 00:08:31.033 ' 00:08:31.033 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:31.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.033 --rc genhtml_branch_coverage=1 00:08:31.033 --rc genhtml_function_coverage=1 00:08:31.033 --rc genhtml_legend=1 00:08:31.033 --rc geninfo_all_blocks=1 00:08:31.033 --rc geninfo_unexecuted_blocks=1 00:08:31.033 00:08:31.033 ' 00:08:31.033 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:31.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.033 --rc genhtml_branch_coverage=1 00:08:31.033 --rc genhtml_function_coverage=1 00:08:31.033 --rc genhtml_legend=1 00:08:31.033 --rc geninfo_all_blocks=1 00:08:31.033 --rc geninfo_unexecuted_blocks=1 00:08:31.033 00:08:31.033 ' 00:08:31.033 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:31.033 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:31.033 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:31.033 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:31.033 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:31.033 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:31.033 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:31.033 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:31.033 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:31.033 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:31.033 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:31.033 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:31.033 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:31.033 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:31.033 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:31.033 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:31.033 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:31.033 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:31.033 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:31.033 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:31.033 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:31.033 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:31.033 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:31.033 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.033 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.033 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.033 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:31.033 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.033 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:31.033 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:31.033 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:31.033 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:31.033 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:31.033 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:31.033 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:08:31.033 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:31.033 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:31.033 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:31.033 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:31.033 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:31.034 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:31.034 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:31.034 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:08:31.034 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:31.034 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # prepare_net_devs 00:08:31.034 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@434 -- # local -g is_hw=no 00:08:31.034 14:24:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # remove_spdk_ns 00:08:31.034 14:24:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:31.034 14:24:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:31.034 14:24:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:31.034 14:24:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:08:31.034 14:24:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:08:31.034 14:24:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:08:31.034 14:24:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:33.564 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:33.564 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:08:33.564 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:33.564 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:33.564 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:33.564 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:33.564 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:33.564 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:08:33.564 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:33.564 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:08:33.564 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:08:33.564 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:08:33.564 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:08:33.564 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:08:33.564 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:08:33.564 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:33.564 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:33.564 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:33.564 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:33.564 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:33.564 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:33.564 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:33.564 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:33.564 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:33.564 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:33.564 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:33.564 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:33.565 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 
-- # [[ tcp == rdma ]] 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:33.565 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ up == up ]] 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:33.565 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ up == up ]] 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:33.565 Found net devices under 0000:0a:00.1: 
cvl_0_1 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # is_hw=yes 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@786 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:33.565 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:33.565 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:08:33.565 00:08:33.565 --- 10.0.0.2 ping statistics --- 00:08:33.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.565 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:33.565 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:33.565 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:08:33.565 00:08:33.565 --- 10.0.0.1 ping statistics --- 00:08:33.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.565 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # return 0 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # nvmfpid=1260363 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # waitforlisten 1260363 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1260363 ']' 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:33.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:33.565 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:33.566 [2024-11-02 14:24:25.310091] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:33.566 [2024-11-02 14:24:25.310183] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:33.566 [2024-11-02 14:24:25.375826] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:33.566 [2024-11-02 14:24:25.468494] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:33.566 [2024-11-02 14:24:25.468555] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:33.566 [2024-11-02 14:24:25.468584] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:33.566 [2024-11-02 14:24:25.468596] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:33.566 [2024-11-02 14:24:25.468605] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
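A side note on the core mask, since the next records show where the reactors land: nvmf_tgt is started above with -m 0x1E, and 0x1E is binary 11110, i.e. cores 1 through 4, which is exactly the set of cores the reactor notices below report. A minimal sketch for decoding such a mask (illustrative only, not part of the test scripts):

mask=0x1E                                 # same value as the -m argument above
for core in $(seq 0 31); do
  # print every core whose bit is set in the mask
  (( (mask >> core) & 1 )) && echo "reactor expected on core $core"
done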
00:08:33.566 [2024-11-02 14:24:25.468741] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:33.566 [2024-11-02 14:24:25.468804] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:08:33.566 [2024-11-02 14:24:25.469059] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:08:33.566 [2024-11-02 14:24:25.469063] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:33.566 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:33.566 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:33.566 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:08:33.566 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:33.566 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:33.823 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:33.823 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:33.823 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.823 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:33.823 [2024-11-02 14:24:25.630290] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:33.823 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.823 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:33.823 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:33.823 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:33.823 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:33.823 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:33.823 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:33.823 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.823 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:33.823 Malloc0 00:08:33.823 [2024-11-02 14:24:25.695284] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:33.823 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.823 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:33.823 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:33.823 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:33.824 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=1260408 00:08:33.824 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1260408 /var/tmp/bdevperf.sock 00:08:33.824 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1260408 ']' 00:08:33.824 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:33.824 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:33.824 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:33.824 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:33.824 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:08:33.824 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:33.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:33.824 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:08:33.824 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:33.824 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:08:33.824 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:33.824 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:08:33.824 { 00:08:33.824 "params": { 00:08:33.824 "name": "Nvme$subsystem", 00:08:33.824 "trtype": "$TEST_TRANSPORT", 00:08:33.824 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:33.824 "adrfam": "ipv4", 00:08:33.824 "trsvcid": "$NVMF_PORT", 00:08:33.824 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:33.824 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:33.824 "hdgst": ${hdgst:-false}, 00:08:33.824 "ddgst": ${ddgst:-false} 00:08:33.824 }, 00:08:33.824 "method": "bdev_nvme_attach_controller" 00:08:33.824 } 00:08:33.824 EOF 00:08:33.824 )") 00:08:33.824 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:08:33.824 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 00:08:33.824 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:08:33.824 14:24:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:08:33.824 "params": { 00:08:33.824 "name": "Nvme0", 00:08:33.824 "trtype": "tcp", 00:08:33.824 "traddr": "10.0.0.2", 00:08:33.824 "adrfam": "ipv4", 00:08:33.824 "trsvcid": "4420", 00:08:33.824 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:33.824 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:33.824 "hdgst": false, 00:08:33.824 "ddgst": false 00:08:33.824 }, 00:08:33.824 "method": "bdev_nvme_attach_controller" 00:08:33.824 }' 00:08:33.824 [2024-11-02 14:24:25.775575] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
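For reference, the JSON that gen_nvmf_target_json prints above can be fed to bdevperf standalone along the following lines. This is a sketch only: the file path is made up, and the outer "subsystems" wrapper follows SPDK's usual startup-config layout rather than anything shown verbatim in this log (the harness itself streams the config through process substitution as --json /dev/fd/63).

cat > /tmp/bdevperf_nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Same switches as the run above: queue depth 64, 64 KiB I/Os, verify workload,
# 10 seconds, private RPC socket at /var/tmp/bdevperf.sock. The bdevperf path
# is relative to an SPDK build tree.
./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/bdevperf_nvme0.json \
    -q 64 -o 65536 -w verify -t 10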
00:08:33.824 [2024-11-02 14:24:25.775654] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1260408 ] 00:08:33.824 [2024-11-02 14:24:25.837626] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.081 [2024-11-02 14:24:25.926698] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.081 Running I/O for 10 seconds... 00:08:34.339 14:24:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:34.339 14:24:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:34.339 14:24:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:34.339 14:24:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.339 14:24:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:34.339 14:24:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.339 14:24:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:34.339 14:24:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:34.339 14:24:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:34.339 14:24:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:34.339 14:24:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:34.339 14:24:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:34.340 14:24:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:34.340 14:24:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:34.340 14:24:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:34.340 14:24:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:34.340 14:24:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.340 14:24:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:34.340 14:24:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.340 14:24:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:08:34.340 14:24:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:08:34.340 14:24:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:08:34.600 14:24:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:08:34.600 
14:24:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:34.600 14:24:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:34.600 14:24:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:34.600 14:24:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.600 14:24:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:34.600 14:24:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.600 14:24:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:08:34.600 14:24:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:08:34.600 14:24:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:34.600 14:24:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:34.600 14:24:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:34.600 14:24:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:34.600 14:24:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.600 14:24:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:34.600 [2024-11-02 14:24:26.506554] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ee8b0 is same with the state(6) to be set 00:08:34.600 [2024-11-02 14:24:26.506626] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ee8b0 is same with the state(6) to be set 00:08:34.600 [2024-11-02 14:24:26.506642] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ee8b0 is same with the state(6) to be set 00:08:34.600 [2024-11-02 14:24:26.506655] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ee8b0 is same with the state(6) to be set 00:08:34.600 [2024-11-02 14:24:26.506667] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ee8b0 is same with the state(6) to be set 00:08:34.600 [2024-11-02 14:24:26.506680] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ee8b0 is same with the state(6) to be set 00:08:34.600 [2024-11-02 14:24:26.506693] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ee8b0 is same with the state(6) to be set 00:08:34.600 [2024-11-02 14:24:26.506705] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ee8b0 is same with the state(6) to be set 00:08:34.600 [2024-11-02 14:24:26.506717] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ee8b0 is same with the state(6) to be set 00:08:34.600 [2024-11-02 14:24:26.506729] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ee8b0 is same with the state(6) to be set 00:08:34.600 [2024-11-02 14:24:26.506742] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ee8b0 is 
same with the state(6) to be set 00:08:34.600 [2024-11-02 14:24:26.506754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ee8b0 is same with the state(6) to be set 00:08:34.600 [2024-11-02 14:24:26.506766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ee8b0 is same with the state(6) to be set 00:08:34.600 [2024-11-02 14:24:26.506778] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ee8b0 is same with the state(6) to be set 00:08:34.600 [2024-11-02 14:24:26.506790] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ee8b0 is same with the state(6) to be set 00:08:34.600 [2024-11-02 14:24:26.506802] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ee8b0 is same with the state(6) to be set 00:08:34.600 [2024-11-02 14:24:26.506814] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ee8b0 is same with the state(6) to be set 00:08:34.600 [2024-11-02 14:24:26.506827] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ee8b0 is same with the state(6) to be set 00:08:34.600 [2024-11-02 14:24:26.506839] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ee8b0 is same with the state(6) to be set 00:08:34.600 [2024-11-02 14:24:26.506851] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ee8b0 is same with the state(6) to be set 00:08:34.600 [2024-11-02 14:24:26.506863] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ee8b0 is same with the state(6) to be set 00:08:34.600 [2024-11-02 14:24:26.506875] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ee8b0 is same with the state(6) to be set 00:08:34.600 [2024-11-02 14:24:26.506887] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ee8b0 is same with the state(6) to be set 00:08:34.600 [2024-11-02 14:24:26.506899] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ee8b0 is same with the state(6) to be set 00:08:34.600 [2024-11-02 14:24:26.506922] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ee8b0 is same with the state(6) to be set 00:08:34.600 [2024-11-02 14:24:26.506934] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ee8b0 is same with the state(6) to be set 00:08:34.600 [2024-11-02 14:24:26.506947] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ee8b0 is same with the state(6) to be set 00:08:34.600 [2024-11-02 14:24:26.506958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ee8b0 is same with the state(6) to be set 00:08:34.600 [2024-11-02 14:24:26.506970] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ee8b0 is same with the state(6) to be set 00:08:34.600 [2024-11-02 14:24:26.506983] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ee8b0 is same with the state(6) to be set 00:08:34.600 [2024-11-02 14:24:26.506995] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ee8b0 is same with the state(6) to be set 00:08:34.600 [2024-11-02 14:24:26.507007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ee8b0 is same with the state(6) to be set 00:08:34.600 [2024-11-02 14:24:26.507018] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ee8b0 is same with the state(6) to be set 00:08:34.600 [2024-11-02 14:24:26.507030] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ee8b0 is same with the state(6) to be set 00:08:34.600 [2024-11-02 14:24:26.507044] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ee8b0 is same with the state(6) to be set 00:08:34.600 [2024-11-02 14:24:26.507056] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ee8b0 is same with the state(6) to be set 00:08:34.600 [2024-11-02 14:24:26.507068] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ee8b0 is same with the state(6) to be set 00:08:34.600 [2024-11-02 14:24:26.507080] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ee8b0 is same with the state(6) to be set 00:08:34.600 [2024-11-02 14:24:26.507091] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ee8b0 is same with the state(6) to be set 00:08:34.600 [2024-11-02 14:24:26.507104] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ee8b0 is same with the state(6) to be set 00:08:34.600 [2024-11-02 14:24:26.507116] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ee8b0 is same with the state(6) to be set 00:08:34.600 [2024-11-02 14:24:26.507128] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ee8b0 is same with the state(6) to be set 00:08:34.600 [2024-11-02 14:24:26.507140] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ee8b0 is same with the state(6) to be set 00:08:34.600 [2024-11-02 14:24:26.507153] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ee8b0 is same with the state(6) to be set 00:08:34.600 [2024-11-02 14:24:26.507165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ee8b0 is same with the state(6) to be set 00:08:34.601 [2024-11-02 14:24:26.507177] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ee8b0 is same with the state(6) to be set 00:08:34.601 [2024-11-02 14:24:26.507378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.601 [2024-11-02 14:24:26.507417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.601 [2024-11-02 14:24:26.507444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.601 [2024-11-02 14:24:26.507460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.601 [2024-11-02 14:24:26.507476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.601 [2024-11-02 14:24:26.507506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.601 [2024-11-02 14:24:26.507523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.601 [2024-11-02 14:24:26.507537] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.601 [2024-11-02 14:24:26.507557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.601 [2024-11-02 14:24:26.507571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.601 [2024-11-02 14:24:26.507586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.601 [2024-11-02 14:24:26.507599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.601 [2024-11-02 14:24:26.507614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.601 [2024-11-02 14:24:26.507628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.601 [2024-11-02 14:24:26.507643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.601 [2024-11-02 14:24:26.507657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.601 [2024-11-02 14:24:26.507672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.601 [2024-11-02 14:24:26.507685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.601 [2024-11-02 14:24:26.507700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.601 [2024-11-02 14:24:26.507714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.601 [2024-11-02 14:24:26.507729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.601 [2024-11-02 14:24:26.507742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.601 [2024-11-02 14:24:26.507757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.601 [2024-11-02 14:24:26.507770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.601 [2024-11-02 14:24:26.507785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.601 [2024-11-02 14:24:26.507799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.601 [2024-11-02 14:24:26.507814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.601 [2024-11-02 14:24:26.507827] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.601 [2024-11-02 14:24:26.507842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.601 [2024-11-02 14:24:26.507855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.601 [2024-11-02 14:24:26.507875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.601 [2024-11-02 14:24:26.507889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.601 [2024-11-02 14:24:26.507904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.601 [2024-11-02 14:24:26.507918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.601 [2024-11-02 14:24:26.507933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.601 [2024-11-02 14:24:26.507947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.601 [2024-11-02 14:24:26.507962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.601 [2024-11-02 14:24:26.507976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.601 [2024-11-02 14:24:26.507990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.601 [2024-11-02 14:24:26.508004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.601 [2024-11-02 14:24:26.508019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.601 [2024-11-02 14:24:26.508033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.601 [2024-11-02 14:24:26.508047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.601 [2024-11-02 14:24:26.508061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.601 [2024-11-02 14:24:26.508075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:76544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.601 [2024-11-02 14:24:26.508089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.601 [2024-11-02 14:24:26.508104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.601 [2024-11-02 14:24:26.508117] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.601 [2024-11-02 14:24:26.508131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.601 [2024-11-02 14:24:26.508144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.601 [2024-11-02 14:24:26.508159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.601 [2024-11-02 14:24:26.508173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.601 [2024-11-02 14:24:26.508188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.601 [2024-11-02 14:24:26.508208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.601 [2024-11-02 14:24:26.508222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.601 [2024-11-02 14:24:26.508240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.601 [2024-11-02 14:24:26.508263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.601 [2024-11-02 14:24:26.508279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.601 [2024-11-02 14:24:26.508294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.601 [2024-11-02 14:24:26.508308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.601 [2024-11-02 14:24:26.508322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.601 [2024-11-02 14:24:26.508336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.601 [2024-11-02 14:24:26.508351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.601 [2024-11-02 14:24:26.508364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.601 [2024-11-02 14:24:26.508379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.601 [2024-11-02 14:24:26.508392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.601 [2024-11-02 14:24:26.508408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.601 [2024-11-02 14:24:26.508421] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.601 [2024-11-02 14:24:26.508436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.601 [2024-11-02 14:24:26.508449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.601 [2024-11-02 14:24:26.508464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.601 [2024-11-02 14:24:26.508477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.601 [2024-11-02 14:24:26.508497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.601 [2024-11-02 14:24:26.508510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.601 [2024-11-02 14:24:26.508525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.601 [2024-11-02 14:24:26.508538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.601 [2024-11-02 14:24:26.508558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.602 [2024-11-02 14:24:26.508571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.602 [2024-11-02 14:24:26.508587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.602 [2024-11-02 14:24:26.508600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.602 [2024-11-02 14:24:26.508619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.602 [2024-11-02 14:24:26.508633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.602 [2024-11-02 14:24:26.508648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.602 [2024-11-02 14:24:26.508662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.602 [2024-11-02 14:24:26.508677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.602 [2024-11-02 14:24:26.508690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.602 [2024-11-02 14:24:26.508705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.602 [2024-11-02 14:24:26.508718] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.602 [2024-11-02 14:24:26.508734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.602 [2024-11-02 14:24:26.508747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.602 [2024-11-02 14:24:26.508762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.602 [2024-11-02 14:24:26.508775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.602 [2024-11-02 14:24:26.508791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.602 [2024-11-02 14:24:26.508804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.602 [2024-11-02 14:24:26.508819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.602 [2024-11-02 14:24:26.508832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.602 [2024-11-02 14:24:26.508847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.602 [2024-11-02 14:24:26.508860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.602 [2024-11-02 14:24:26.508875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.602 [2024-11-02 14:24:26.508889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.602 [2024-11-02 14:24:26.508904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.602 [2024-11-02 14:24:26.508917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.602 [2024-11-02 14:24:26.508932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.602 [2024-11-02 14:24:26.508945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.602 [2024-11-02 14:24:26.508960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.602 [2024-11-02 14:24:26.508977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.602 [2024-11-02 14:24:26.508993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.602 [2024-11-02 14:24:26.509006] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.602 [2024-11-02 14:24:26.509021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.602 [2024-11-02 14:24:26.509035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.602 [2024-11-02 14:24:26.509049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.602 [2024-11-02 14:24:26.509063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.602 [2024-11-02 14:24:26.509077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.602 [2024-11-02 14:24:26.509091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.602 [2024-11-02 14:24:26.509106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.602 [2024-11-02 14:24:26.509119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.602 [2024-11-02 14:24:26.509134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.602 [2024-11-02 14:24:26.509147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.602 [2024-11-02 14:24:26.509163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.602 [2024-11-02 14:24:26.509176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.602 [2024-11-02 14:24:26.509191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.602 [2024-11-02 14:24:26.509204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.602 [2024-11-02 14:24:26.509219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.602 [2024-11-02 14:24:26.509232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.602 [2024-11-02 14:24:26.509247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.602 [2024-11-02 14:24:26.509267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.602 [2024-11-02 14:24:26.509301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.602 [2024-11-02 14:24:26.509317] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:34.602 [2024-11-02 14:24:26.509400] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x8fce10 was disconnected and freed. reset controller.
00:08:34.602 14:24:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:34.602 [2024-11-02 14:24:26.510558] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:08:34.602 14:24:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:08:34.602 14:24:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:34.602 14:24:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:08:34.602 task offset: 73728 on job bdev=Nvme0n1 fails
00:08:34.602
00:08:34.602 Latency(us)
00:08:34.602 [2024-11-02T13:24:26.657Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:34.602 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:08:34.602 Job: Nvme0n1 ended in about 0.40 seconds with error
00:08:34.602 Verification LBA range: start 0x0 length 0x400
00:08:34.602 Nvme0n1 : 0.40 1449.42 90.59 161.05 0.00 38618.45 4660.34 35535.08
00:08:34.602 [2024-11-02T13:24:26.657Z] ===================================================================================================================
00:08:34.602 [2024-11-02T13:24:26.657Z] Total : 1449.42 90.59 161.05 0.00 38618.45 4660.34 35535.08
00:08:34.602 [2024-11-02 14:24:26.512511] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:08:34.602 [2024-11-02 14:24:26.512540] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6e4090 (9): Bad file descriptor
00:08:34.602 14:24:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:34.602 14:24:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:08:34.602 [2024-11-02 14:24:26.604449] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
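
The allow-host step traced above (host_management.sh@85) goes through the suite's rpc_cmd wrapper, which also handles the RPC socket and network-namespace selection for these runs. For reference, a minimal stand-alone sketch of the same call with SPDK's rpc.py, assuming the target's RPC socket is directly reachable from the calling shell:

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems   # verify host0 now appears in cnode0's allowed hosts
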
00:08:35.536 14:24:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1260408 00:08:35.536 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1260408) - No such process 00:08:35.536 14:24:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:35.536 14:24:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:35.536 14:24:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:35.536 14:24:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:35.536 14:24:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:08:35.536 14:24:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:08:35.536 14:24:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:08:35.536 14:24:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:08:35.536 { 00:08:35.536 "params": { 00:08:35.536 "name": "Nvme$subsystem", 00:08:35.536 "trtype": "$TEST_TRANSPORT", 00:08:35.536 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:35.536 "adrfam": "ipv4", 00:08:35.536 "trsvcid": "$NVMF_PORT", 00:08:35.536 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:35.536 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:35.536 "hdgst": ${hdgst:-false}, 00:08:35.536 "ddgst": ${ddgst:-false} 00:08:35.536 }, 00:08:35.536 "method": "bdev_nvme_attach_controller" 00:08:35.536 } 00:08:35.536 EOF 00:08:35.536 )") 00:08:35.536 14:24:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:08:35.536 14:24:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 00:08:35.536 14:24:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:08:35.536 14:24:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:08:35.536 "params": { 00:08:35.536 "name": "Nvme0", 00:08:35.536 "trtype": "tcp", 00:08:35.536 "traddr": "10.0.0.2", 00:08:35.536 "adrfam": "ipv4", 00:08:35.536 "trsvcid": "4420", 00:08:35.536 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:35.536 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:35.536 "hdgst": false, 00:08:35.536 "ddgst": false 00:08:35.536 }, 00:08:35.536 "method": "bdev_nvme_attach_controller" 00:08:35.536 }' 00:08:35.536 [2024-11-02 14:24:27.566554] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:35.536 [2024-11-02 14:24:27.566630] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1260685 ] 00:08:35.794 [2024-11-02 14:24:27.627907] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.794 [2024-11-02 14:24:27.715656] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.053 Running I/O for 1 seconds... 
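
The config streamed to bdevperf on /dev/fd/62 above is assembled by gen_nvmf_target_json from the single bdev_nvme_attach_controller entry shown in the printf output. As a rough stand-alone sketch of an equivalent run, the same parameters could be written to a file and passed with --json; the params block and the -q/-o/-w/-t flags are copied from the trace, while the outer "subsystems"/"bdev"/"config" wrapper is an assumption about how the final JSON is laid out:

cat > ./bdevperf_nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json ./bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 1
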
00:08:36.986 1536.00 IOPS, 96.00 MiB/s 00:08:36.986 Latency(us) 00:08:36.986 [2024-11-02T13:24:29.041Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:36.986 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:36.986 Verification LBA range: start 0x0 length 0x400 00:08:36.986 Nvme0n1 : 1.03 1558.59 97.41 0.00 0.00 40418.60 8883.77 33399.09 00:08:36.986 [2024-11-02T13:24:29.041Z] =================================================================================================================== 00:08:36.986 [2024-11-02T13:24:29.041Z] Total : 1558.59 97.41 0.00 0.00 40418.60 8883.77 33399.09 00:08:37.244 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:37.244 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:37.244 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:08:37.244 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:37.244 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:37.244 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # nvmfcleanup 00:08:37.244 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:08:37.244 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:37.244 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:08:37.244 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:37.244 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:37.244 rmmod nvme_tcp 00:08:37.244 rmmod nvme_fabrics 00:08:37.244 rmmod nvme_keyring 00:08:37.244 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:37.244 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:37.244 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:37.244 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@513 -- # '[' -n 1260363 ']' 00:08:37.245 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # killprocess 1260363 00:08:37.245 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 1260363 ']' 00:08:37.245 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 1260363 00:08:37.245 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:08:37.245 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:37.245 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1260363 00:08:37.503 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:37.503 14:24:29 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:37.503 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1260363' 00:08:37.503 killing process with pid 1260363 00:08:37.503 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 1260363 00:08:37.503 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 1260363 00:08:37.503 [2024-11-02 14:24:29.540381] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:37.760 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:08:37.760 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:08:37.760 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:08:37.760 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:08:37.760 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-save 00:08:37.760 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:08:37.760 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-restore 00:08:37.760 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:37.760 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:37.760 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:37.760 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:37.760 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:39.664 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:39.664 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:39.664 00:08:39.664 real 0m8.782s 00:08:39.664 user 0m19.257s 00:08:39.664 sys 0m2.843s 00:08:39.664 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:39.664 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:39.664 ************************************ 00:08:39.664 END TEST nvmf_host_management 00:08:39.664 ************************************ 00:08:39.664 14:24:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:39.664 14:24:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:39.664 14:24:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:39.664 14:24:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:39.664 ************************************ 00:08:39.664 START TEST nvmf_lvol 00:08:39.664 ************************************ 00:08:39.664 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:39.923 * Looking for test storage... 00:08:39.923 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:39.923 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:39.923 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:08:39.923 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:39.923 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:39.923 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:39.923 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:39.923 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:39.923 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:39.923 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:39.923 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:39.923 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:39.923 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:39.923 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:39.923 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:39.923 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:39.923 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:39.923 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:39.923 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:39.923 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:39.923 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:39.923 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:39.923 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:39.923 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:39.923 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:39.923 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:39.923 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:39.923 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:39.923 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:39.923 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:39.923 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:39.923 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:39.923 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:39.923 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:39.923 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:39.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.923 --rc genhtml_branch_coverage=1 00:08:39.923 --rc genhtml_function_coverage=1 00:08:39.923 --rc genhtml_legend=1 00:08:39.923 --rc geninfo_all_blocks=1 00:08:39.923 --rc geninfo_unexecuted_blocks=1 00:08:39.923 00:08:39.923 ' 00:08:39.923 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:39.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.923 --rc genhtml_branch_coverage=1 00:08:39.923 --rc genhtml_function_coverage=1 00:08:39.923 --rc genhtml_legend=1 00:08:39.923 --rc geninfo_all_blocks=1 00:08:39.923 --rc geninfo_unexecuted_blocks=1 00:08:39.923 00:08:39.923 ' 00:08:39.923 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:39.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.923 --rc genhtml_branch_coverage=1 00:08:39.923 --rc genhtml_function_coverage=1 00:08:39.923 --rc genhtml_legend=1 00:08:39.923 --rc geninfo_all_blocks=1 00:08:39.923 --rc geninfo_unexecuted_blocks=1 00:08:39.923 00:08:39.923 ' 00:08:39.923 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:39.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.923 --rc genhtml_branch_coverage=1 00:08:39.923 --rc genhtml_function_coverage=1 00:08:39.923 --rc genhtml_legend=1 00:08:39.923 --rc geninfo_all_blocks=1 00:08:39.923 --rc geninfo_unexecuted_blocks=1 00:08:39.923 00:08:39.923 ' 00:08:39.923 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:39.923 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:39.923 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:08:39.923 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:39.923 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:39.923 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:39.923 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:39.923 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:39.923 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:39.923 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:39.923 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:39.923 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:39.923 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:39.923 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:39.923 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:39.923 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:39.923 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:39.924 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:39.924 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:39.924 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:39.924 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:39.924 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:39.924 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:39.924 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.924 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.924 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.924 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:39.924 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.924 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:39.924 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:39.924 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:39.924 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:39.924 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:39.924 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:39.924 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:39.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:39.924 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:39.924 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:39.924 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:39.924 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:39.924 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:39.924 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:08:39.924 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:39.924 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:39.924 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:39.924 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:08:39.924 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:39.924 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # prepare_net_devs 00:08:39.924 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@434 -- # local -g is_hw=no 00:08:39.924 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # remove_spdk_ns 00:08:39.924 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:39.924 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:39.924 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:39.924 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:08:39.924 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:08:39.924 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:08:39.924 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:42.513 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:42.513 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:08:42.513 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:42.513 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:42.513 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:42.513 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:42.513 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:42.513 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:08:42.513 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:42.513 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:08:42.513 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:08:42.513 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:08:42.513 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:08:42.513 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:08:42.513 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:08:42.513 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:42.513 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:42.513 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:42.513 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:42.513 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:42.513 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:42.513 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:42.513 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:42.513 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:42.513 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:42.513 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:42.513 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:08:42.513 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:08:42.513 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:08:42.513 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:08:42.513 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:08:42.513 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:08:42.513 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:08:42.513 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:42.513 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:42.513 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:08:42.513 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:08:42.513 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:42.513 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:42.513 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:08:42.513 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:08:42.513 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:42.513 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:42.513 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:08:42.513 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:08:42.513 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:42.513 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:42.513 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:08:42.513 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:08:42.514 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:08:42.514 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:08:42.514 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:42.514 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:42.514 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:08:42.514 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:42.514 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ up == up ]] 00:08:42.514 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:42.514 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:42.514 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:42.514 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:42.514 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:08:42.514 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:42.514 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:42.514 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:08:42.514 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:42.514 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ up == up ]] 00:08:42.514 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:42.514 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:42.514 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:42.514 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:42.514 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:08:42.514 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:08:42.514 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # is_hw=yes 00:08:42.514 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:08:42.514 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:08:42.514 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:08:42.514 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:42.514 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:42.514 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:42.514 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:42.514 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:42.514 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:42.514 
14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:42.514 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:42.514 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:42.514 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:42.514 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:42.514 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:42.514 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:42.514 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:42.514 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:42.514 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:42.514 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:42.514 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:42.514 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:42.514 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:42.514 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:42.514 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:42.514 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:42.514 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:42.514 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:08:42.514 00:08:42.514 --- 10.0.0.2 ping statistics --- 00:08:42.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:42.514 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:08:42.514 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:42.514 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:42.514 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:08:42.514 00:08:42.514 --- 10.0.0.1 ping statistics --- 00:08:42.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:42.514 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:08:42.514 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:42.514 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # return 0 00:08:42.514 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:08:42.514 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:42.514 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:08:42.514 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:08:42.514 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:42.514 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:08:42.514 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:08:42.514 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:42.514 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:08:42.514 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:42.514 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:42.514 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # nvmfpid=1262809 00:08:42.514 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:42.514 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # waitforlisten 1262809 00:08:42.514 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 1262809 ']' 00:08:42.514 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:42.514 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:42.514 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:42.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:42.514 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:42.514 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:42.514 [2024-11-02 14:24:34.157186] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:42.514 [2024-11-02 14:24:34.157286] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:42.514 [2024-11-02 14:24:34.225603] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:42.514 [2024-11-02 14:24:34.315888] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:42.514 [2024-11-02 14:24:34.315955] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:42.514 [2024-11-02 14:24:34.315971] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:42.514 [2024-11-02 14:24:34.315986] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:42.514 [2024-11-02 14:24:34.315997] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:42.514 [2024-11-02 14:24:34.316086] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:42.514 [2024-11-02 14:24:34.316141] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:42.514 [2024-11-02 14:24:34.316159] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.514 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:42.514 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:08:42.514 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:08:42.514 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:42.514 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:42.514 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:42.514 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:42.772 [2024-11-02 14:24:34.708608] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:42.772 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:43.030 14:24:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:43.030 14:24:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:43.288 14:24:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:43.288 14:24:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:43.546 14:24:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:44.111 14:24:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=fbb4f657-8846-4d5d-b99e-427a77c770ba 00:08:44.111 14:24:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u fbb4f657-8846-4d5d-b99e-427a77c770ba lvol 20 00:08:44.369 14:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=0d6d6aeb-49ae-41c1-8c38-7d4699f6a0ae 00:08:44.369 14:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:44.626 14:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0d6d6aeb-49ae-41c1-8c38-7d4699f6a0ae 00:08:44.884 14:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:45.141 [2024-11-02 14:24:36.968672] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:45.141 14:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:45.398 14:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1263218 00:08:45.398 14:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:45.398 14:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:46.333 14:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 0d6d6aeb-49ae-41c1-8c38-7d4699f6a0ae MY_SNAPSHOT 00:08:46.591 14:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=d75e6241-2ae6-44b0-a286-c8b5e236dc10 00:08:46.591 14:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 0d6d6aeb-49ae-41c1-8c38-7d4699f6a0ae 30 00:08:47.159 14:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone d75e6241-2ae6-44b0-a286-c8b5e236dc10 MY_CLONE 00:08:47.417 14:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=b25ce5a0-7512-45e2-98ee-b807fb316b69 00:08:47.417 14:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate b25ce5a0-7512-45e2-98ee-b807fb316b69 00:08:47.984 14:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1263218 00:08:56.094 Initializing NVMe Controllers 00:08:56.094 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:56.094 Controller IO queue size 128, less than required. 00:08:56.094 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
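Stripped of the xtrace noise, the provisioning that nvmf_lvol.sh performed above is a short RPC sequence; a condensed sketch, with rpc.py standing for the full scripts/rpc.py path and the UUIDs printed by each call captured in shell variables:

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512                                  # Malloc0
  rpc.py bdev_malloc_create 64 512                                  # Malloc1
  rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'  # stripe the two malloc bdevs
  lvs=$(rpc.py bdev_lvol_create_lvstore raid0 lvs)                  # prints the lvstore UUID
  lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 20)                 # prints the lvol UUID
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420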
00:08:56.094 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:56.094 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:56.094 Initialization complete. Launching workers. 00:08:56.094 ======================================================== 00:08:56.094 Latency(us) 00:08:56.094 Device Information : IOPS MiB/s Average min max 00:08:56.094 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10422.10 40.71 12282.36 1819.62 88446.04 00:08:56.094 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10227.40 39.95 12524.03 2453.49 69162.14 00:08:56.094 ======================================================== 00:08:56.094 Total : 20649.50 80.66 12402.06 1819.62 88446.04 00:08:56.094 00:08:56.094 14:24:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:56.094 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0d6d6aeb-49ae-41c1-8c38-7d4699f6a0ae 00:08:56.352 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fbb4f657-8846-4d5d-b99e-427a77c770ba 00:08:56.610 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:56.610 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:56.610 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:56.610 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # nvmfcleanup 00:08:56.610 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:56.610 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:56.610 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:56.610 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:56.610 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:56.610 rmmod nvme_tcp 00:08:56.611 rmmod nvme_fabrics 00:08:56.611 rmmod nvme_keyring 00:08:56.611 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:56.611 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:56.611 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:56.611 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@513 -- # '[' -n 1262809 ']' 00:08:56.611 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # killprocess 1262809 00:08:56.611 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 1262809 ']' 00:08:56.611 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 1262809 00:08:56.611 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:08:56.611 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:56.611 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1262809 00:08:56.611 14:24:48 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:56.611 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:56.611 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1262809' 00:08:56.611 killing process with pid 1262809 00:08:56.611 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 1262809 00:08:56.611 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 1262809 00:08:56.869 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:08:56.869 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:08:56.869 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:08:56.869 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:56.869 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-save 00:08:56.869 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:08:56.869 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-restore 00:08:57.128 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:57.128 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:57.128 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:57.128 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:57.128 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:59.038 14:24:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:59.038 00:08:59.038 real 0m19.290s 00:08:59.038 user 1m5.486s 00:08:59.038 sys 0m5.555s 00:08:59.038 14:24:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:59.038 14:24:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:59.038 ************************************ 00:08:59.038 END TEST nvmf_lvol 00:08:59.038 ************************************ 00:08:59.038 14:24:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:59.038 14:24:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:59.038 14:24:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:59.038 14:24:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:59.038 ************************************ 00:08:59.038 START TEST nvmf_lvs_grow 00:08:59.038 ************************************ 00:08:59.038 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:59.038 * Looking for test storage... 
00:08:59.038 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:59.038 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:59.038 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:08:59.039 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:59.298 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:59.298 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:59.298 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:59.298 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:59.298 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:59.298 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:59.298 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:59.298 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:59.298 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:59.298 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:59.298 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:59.298 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:59.298 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:59.298 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:59.298 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:59.298 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:59.298 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:59.298 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:59.298 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:59.298 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:59.298 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:59.298 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:59.298 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:59.298 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:59.298 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:59.298 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:59.298 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:59.298 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:59.298 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:59.298 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:59.298 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:59.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.298 --rc genhtml_branch_coverage=1 00:08:59.298 --rc genhtml_function_coverage=1 00:08:59.298 --rc genhtml_legend=1 00:08:59.298 --rc geninfo_all_blocks=1 00:08:59.298 --rc geninfo_unexecuted_blocks=1 00:08:59.298 00:08:59.298 ' 00:08:59.298 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:59.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.298 --rc genhtml_branch_coverage=1 00:08:59.298 --rc genhtml_function_coverage=1 00:08:59.298 --rc genhtml_legend=1 00:08:59.298 --rc geninfo_all_blocks=1 00:08:59.298 --rc geninfo_unexecuted_blocks=1 00:08:59.298 00:08:59.298 ' 00:08:59.298 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:59.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.298 --rc genhtml_branch_coverage=1 00:08:59.298 --rc genhtml_function_coverage=1 00:08:59.298 --rc genhtml_legend=1 00:08:59.298 --rc geninfo_all_blocks=1 00:08:59.298 --rc geninfo_unexecuted_blocks=1 00:08:59.298 00:08:59.298 ' 00:08:59.298 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:59.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.298 --rc genhtml_branch_coverage=1 00:08:59.298 --rc genhtml_function_coverage=1 00:08:59.298 --rc genhtml_legend=1 00:08:59.298 --rc geninfo_all_blocks=1 00:08:59.298 --rc geninfo_unexecuted_blocks=1 00:08:59.298 00:08:59.298 ' 00:08:59.298 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:59.298 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:59.298 14:24:51 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:59.298 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:59.298 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:59.298 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:59.298 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:59.298 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:59.298 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:59.298 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:59.298 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:59.298 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:59.298 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:59.298 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:59.298 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:59.298 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:59.298 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:59.298 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:59.298 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:59.298 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:59.298 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:59.298 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:59.298 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:59.298 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.298 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.298 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.298 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:59.298 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.298 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:59.298 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:59.298 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:59.298 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:59.298 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:59.298 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:59.298 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:59.298 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:59.298 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:59.298 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:59.298 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:59.299 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:59.299 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:59.299 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:59.299 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:08:59.299 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:59.299 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # prepare_net_devs 00:08:59.299 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@434 -- # local -g is_hw=no 00:08:59.299 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # remove_spdk_ns 00:08:59.299 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:59.299 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:59.299 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:59.299 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:08:59.299 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:08:59.299 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:08:59.299 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:01.198 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:01.198 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:01.198 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:01.198 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # is_hw=yes 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:01.198 
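The interface discovery traced above reduces to mapping each supported PCI function to its kernel net device through sysfs; roughly, assuming the NICs are bound to a kernel driver so the net/ directory exists:

  for pci in 0000:0a:00.0 0000:0a:00.1; do
      # A bound network function exposes its netdev name(s) under .../net/.
      for dev in /sys/bus/pci/devices/"$pci"/net/*; do
          [ -e "$dev" ] && echo "Found net devices under $pci: ${dev##*/}"
      done
  done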
14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:01.198 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:01.199 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:01.457 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:01.457 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:01.457 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:09:01.457 00:09:01.457 --- 10.0.0.2 ping statistics --- 00:09:01.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.457 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:09:01.457 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:01.457 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:01.457 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.059 ms 00:09:01.457 00:09:01.457 --- 10.0.0.1 ping statistics --- 00:09:01.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.457 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:09:01.457 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:01.457 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # return 0 00:09:01.457 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:01.457 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:01.457 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:01.457 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:01.457 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:01.457 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:01.457 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:01.457 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:09:01.457 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:01.457 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:01.457 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:01.457 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # nvmfpid=1266502 00:09:01.457 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:01.457 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # waitforlisten 1266502 00:09:01.457 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 1266502 ']' 00:09:01.457 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:01.457 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:01.457 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:01.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:01.457 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:01.457 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:01.457 [2024-11-02 14:24:53.344541] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
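The ipts helper seen in both test setups tags the ACCEPT rule it inserts with an SPDK_NVMF comment, which is what lets the later iptr cleanup drop every SPDK-added rule without tracking rule numbers; in outline:

  # Setup: open the NVMe/TCP port on the initiator-side interface, tagging the rule.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  # Teardown: rewrite the ruleset with every SPDK_NVMF-tagged rule filtered out.
  iptables-save | grep -v SPDK_NVMF | iptables-restore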
00:09:01.457 [2024-11-02 14:24:53.344652] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:01.457 [2024-11-02 14:24:53.408817] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.457 [2024-11-02 14:24:53.496261] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:01.457 [2024-11-02 14:24:53.496320] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:01.457 [2024-11-02 14:24:53.496349] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:01.457 [2024-11-02 14:24:53.496360] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:01.457 [2024-11-02 14:24:53.496370] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:01.457 [2024-11-02 14:24:53.496398] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.715 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:01.715 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:09:01.715 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:01.715 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:01.715 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:01.715 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:01.715 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:01.973 [2024-11-02 14:24:53.879819] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:01.973 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:09:01.973 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:01.973 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:01.973 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:01.973 ************************************ 00:09:01.973 START TEST lvs_grow_clean 00:09:01.973 ************************************ 00:09:01.973 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:09:01.973 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:01.973 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:01.973 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:01.973 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:01.973 14:24:53 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:01.973 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:01.973 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:01.973 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:01.973 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:02.231 14:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:02.231 14:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:02.489 14:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=05074ea4-878b-4192-acdf-0c3ebc473bb9 00:09:02.489 14:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 05074ea4-878b-4192-acdf-0c3ebc473bb9 00:09:02.489 14:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:02.747 14:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:02.747 14:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:02.747 14:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 05074ea4-878b-4192-acdf-0c3ebc473bb9 lvol 150 00:09:03.005 14:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=9b5bdf16-85a9-43d5-b625-9efc45d4b47b 00:09:03.005 14:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:03.005 14:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:03.263 [2024-11-02 14:24:55.299740] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:03.263 [2024-11-02 14:24:55.299840] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:03.263 true 00:09:03.263 14:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
05074ea4-878b-4192-acdf-0c3ebc473bb9 00:09:03.263 14:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:03.829 14:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:03.829 14:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:04.087 14:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9b5bdf16-85a9-43d5-b625-9efc45d4b47b 00:09:04.345 14:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:04.603 [2024-11-02 14:24:56.403180] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:04.603 14:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:04.861 14:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1266940 00:09:04.861 14:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:04.861 14:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:04.861 14:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1266940 /var/tmp/bdevperf.sock 00:09:04.861 14:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 1266940 ']' 00:09:04.861 14:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:04.861 14:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:04.861 14:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:04.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:04.861 14:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:04.861 14:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:04.861 [2024-11-02 14:24:56.738864] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
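The grow path exercised by lvs_grow_clean is: enlarge the file backing the aio bdev, have the bdev rescan its size, then let the lvstore claim the new clusters. A sketch of those moves, with rpc.py abbreviated and aio_bdev_file standing in for the test's aio_bdev backing file:

  truncate -s 200M aio_bdev_file
  rpc.py bdev_aio_create aio_bdev_file aio_bdev 4096
  lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  rpc.py bdev_lvol_create -u "$lvs" lvol 150

  truncate -s 400M aio_bdev_file            # grow the backing file
  rpc.py bdev_aio_rescan aio_bdev           # bdev picks up the new block count (51200 -> 102400 here)
  rpc.py bdev_lvol_grow_lvstore -u "$lvs"   # lvstore data clusters grow (49 -> 99 in this run)
  rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'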
00:09:04.862 [2024-11-02 14:24:56.738944] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1266940 ] 00:09:04.862 [2024-11-02 14:24:56.808025] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.862 [2024-11-02 14:24:56.901792] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:05.120 14:24:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:05.120 14:24:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:09:05.120 14:24:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:05.685 Nvme0n1 00:09:05.685 14:24:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:05.685 [ 00:09:05.685 { 00:09:05.685 "name": "Nvme0n1", 00:09:05.685 "aliases": [ 00:09:05.685 "9b5bdf16-85a9-43d5-b625-9efc45d4b47b" 00:09:05.685 ], 00:09:05.685 "product_name": "NVMe disk", 00:09:05.685 "block_size": 4096, 00:09:05.685 "num_blocks": 38912, 00:09:05.685 "uuid": "9b5bdf16-85a9-43d5-b625-9efc45d4b47b", 00:09:05.685 "numa_id": 0, 00:09:05.685 "assigned_rate_limits": { 00:09:05.685 "rw_ios_per_sec": 0, 00:09:05.685 "rw_mbytes_per_sec": 0, 00:09:05.685 "r_mbytes_per_sec": 0, 00:09:05.685 "w_mbytes_per_sec": 0 00:09:05.685 }, 00:09:05.685 "claimed": false, 00:09:05.685 "zoned": false, 00:09:05.685 "supported_io_types": { 00:09:05.685 "read": true, 00:09:05.685 "write": true, 00:09:05.685 "unmap": true, 00:09:05.685 "flush": true, 00:09:05.685 "reset": true, 00:09:05.685 "nvme_admin": true, 00:09:05.685 "nvme_io": true, 00:09:05.685 "nvme_io_md": false, 00:09:05.685 "write_zeroes": true, 00:09:05.685 "zcopy": false, 00:09:05.685 "get_zone_info": false, 00:09:05.685 "zone_management": false, 00:09:05.685 "zone_append": false, 00:09:05.685 "compare": true, 00:09:05.685 "compare_and_write": true, 00:09:05.685 "abort": true, 00:09:05.685 "seek_hole": false, 00:09:05.685 "seek_data": false, 00:09:05.685 "copy": true, 00:09:05.685 "nvme_iov_md": false 00:09:05.685 }, 00:09:05.685 "memory_domains": [ 00:09:05.685 { 00:09:05.685 "dma_device_id": "system", 00:09:05.685 "dma_device_type": 1 00:09:05.685 } 00:09:05.685 ], 00:09:05.685 "driver_specific": { 00:09:05.685 "nvme": [ 00:09:05.685 { 00:09:05.685 "trid": { 00:09:05.685 "trtype": "TCP", 00:09:05.685 "adrfam": "IPv4", 00:09:05.685 "traddr": "10.0.0.2", 00:09:05.685 "trsvcid": "4420", 00:09:05.685 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:05.685 }, 00:09:05.685 "ctrlr_data": { 00:09:05.685 "cntlid": 1, 00:09:05.685 "vendor_id": "0x8086", 00:09:05.685 "model_number": "SPDK bdev Controller", 00:09:05.685 "serial_number": "SPDK0", 00:09:05.685 "firmware_revision": "24.09.1", 00:09:05.685 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:05.685 "oacs": { 00:09:05.685 "security": 0, 00:09:05.685 "format": 0, 00:09:05.685 "firmware": 0, 00:09:05.685 "ns_manage": 0 00:09:05.685 }, 00:09:05.685 "multi_ctrlr": true, 00:09:05.685 
"ana_reporting": false 00:09:05.685 }, 00:09:05.685 "vs": { 00:09:05.685 "nvme_version": "1.3" 00:09:05.685 }, 00:09:05.685 "ns_data": { 00:09:05.685 "id": 1, 00:09:05.685 "can_share": true 00:09:05.685 } 00:09:05.685 } 00:09:05.685 ], 00:09:05.685 "mp_policy": "active_passive" 00:09:05.685 } 00:09:05.685 } 00:09:05.685 ] 00:09:05.685 14:24:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1267071 00:09:05.685 14:24:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:05.685 14:24:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:05.944 Running I/O for 10 seconds... 00:09:06.878 Latency(us) 00:09:06.878 [2024-11-02T13:24:58.933Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:06.878 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:06.878 Nvme0n1 : 1.00 14101.00 55.08 0.00 0.00 0.00 0.00 0.00 00:09:06.878 [2024-11-02T13:24:58.933Z] =================================================================================================================== 00:09:06.878 [2024-11-02T13:24:58.933Z] Total : 14101.00 55.08 0.00 0.00 0.00 0.00 0.00 00:09:06.878 00:09:07.812 14:24:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 05074ea4-878b-4192-acdf-0c3ebc473bb9 00:09:07.812 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:07.812 Nvme0n1 : 2.00 14244.50 55.64 0.00 0.00 0.00 0.00 0.00 00:09:07.812 [2024-11-02T13:24:59.867Z] =================================================================================================================== 00:09:07.812 [2024-11-02T13:24:59.867Z] Total : 14244.50 55.64 0.00 0.00 0.00 0.00 0.00 00:09:07.812 00:09:08.069 true 00:09:08.070 14:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 05074ea4-878b-4192-acdf-0c3ebc473bb9 00:09:08.070 14:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:08.328 14:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:08.328 14:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:08.328 14:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1267071 00:09:08.894 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:08.894 Nvme0n1 : 3.00 14285.67 55.80 0.00 0.00 0.00 0.00 0.00 00:09:08.894 [2024-11-02T13:25:00.949Z] =================================================================================================================== 00:09:08.894 [2024-11-02T13:25:00.949Z] Total : 14285.67 55.80 0.00 0.00 0.00 0.00 0.00 00:09:08.894 00:09:09.828 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:09.828 Nvme0n1 : 4.00 14352.25 56.06 0.00 0.00 0.00 0.00 0.00 00:09:09.828 [2024-11-02T13:25:01.883Z] 
=================================================================================================================== 00:09:09.828 [2024-11-02T13:25:01.883Z] Total : 14352.25 56.06 0.00 0.00 0.00 0.00 0.00 00:09:09.828 00:09:11.203 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:11.203 Nvme0n1 : 5.00 14431.40 56.37 0.00 0.00 0.00 0.00 0.00 00:09:11.203 [2024-11-02T13:25:03.258Z] =================================================================================================================== 00:09:11.203 [2024-11-02T13:25:03.258Z] Total : 14431.40 56.37 0.00 0.00 0.00 0.00 0.00 00:09:11.203 00:09:12.135 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:12.135 Nvme0n1 : 6.00 14463.00 56.50 0.00 0.00 0.00 0.00 0.00 00:09:12.135 [2024-11-02T13:25:04.190Z] =================================================================================================================== 00:09:12.135 [2024-11-02T13:25:04.190Z] Total : 14463.00 56.50 0.00 0.00 0.00 0.00 0.00 00:09:12.135 00:09:13.070 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:13.070 Nvme0n1 : 7.00 14502.71 56.65 0.00 0.00 0.00 0.00 0.00 00:09:13.070 [2024-11-02T13:25:05.125Z] =================================================================================================================== 00:09:13.070 [2024-11-02T13:25:05.125Z] Total : 14502.71 56.65 0.00 0.00 0.00 0.00 0.00 00:09:13.070 00:09:14.059 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:14.059 Nvme0n1 : 8.00 14541.00 56.80 0.00 0.00 0.00 0.00 0.00 00:09:14.059 [2024-11-02T13:25:06.114Z] =================================================================================================================== 00:09:14.059 [2024-11-02T13:25:06.114Z] Total : 14541.00 56.80 0.00 0.00 0.00 0.00 0.00 00:09:14.059 00:09:14.993 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:14.993 Nvme0n1 : 9.00 14571.22 56.92 0.00 0.00 0.00 0.00 0.00 00:09:14.993 [2024-11-02T13:25:07.048Z] =================================================================================================================== 00:09:14.993 [2024-11-02T13:25:07.048Z] Total : 14571.22 56.92 0.00 0.00 0.00 0.00 0.00 00:09:14.993 00:09:15.928 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:15.928 Nvme0n1 : 10.00 14613.50 57.08 0.00 0.00 0.00 0.00 0.00 00:09:15.928 [2024-11-02T13:25:07.983Z] =================================================================================================================== 00:09:15.928 [2024-11-02T13:25:07.983Z] Total : 14613.50 57.08 0.00 0.00 0.00 0.00 0.00 00:09:15.928 00:09:15.928 00:09:15.928 Latency(us) 00:09:15.928 [2024-11-02T13:25:07.983Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:15.928 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:15.928 Nvme0n1 : 10.01 14614.62 57.09 0.00 0.00 8753.71 5097.24 18641.35 00:09:15.928 [2024-11-02T13:25:07.983Z] =================================================================================================================== 00:09:15.928 [2024-11-02T13:25:07.983Z] Total : 14614.62 57.09 0.00 0.00 8753.71 5097.24 18641.35 00:09:15.928 { 00:09:15.928 "results": [ 00:09:15.928 { 00:09:15.928 "job": "Nvme0n1", 00:09:15.928 "core_mask": "0x2", 00:09:15.928 "workload": "randwrite", 00:09:15.928 "status": "finished", 00:09:15.928 "queue_depth": 128, 00:09:15.928 "io_size": 4096, 00:09:15.928 
"runtime": 10.00799, 00:09:15.928 "iops": 14614.622916289884, 00:09:15.928 "mibps": 57.08837076675736, 00:09:15.928 "io_failed": 0, 00:09:15.928 "io_timeout": 0, 00:09:15.928 "avg_latency_us": 8753.708397653036, 00:09:15.928 "min_latency_us": 5097.2444444444445, 00:09:15.928 "max_latency_us": 18641.35111111111 00:09:15.928 } 00:09:15.928 ], 00:09:15.928 "core_count": 1 00:09:15.928 } 00:09:15.928 14:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1266940 00:09:15.928 14:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 1266940 ']' 00:09:15.928 14:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 1266940 00:09:15.928 14:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:09:15.928 14:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:15.928 14:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1266940 00:09:15.928 14:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:15.928 14:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:15.928 14:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1266940' 00:09:15.928 killing process with pid 1266940 00:09:15.928 14:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 1266940 00:09:15.928 Received shutdown signal, test time was about 10.000000 seconds 00:09:15.928 00:09:15.928 Latency(us) 00:09:15.928 [2024-11-02T13:25:07.983Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:15.928 [2024-11-02T13:25:07.983Z] =================================================================================================================== 00:09:15.928 [2024-11-02T13:25:07.983Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:15.928 14:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 1266940 00:09:16.186 14:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:16.444 14:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:16.701 14:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 05074ea4-878b-4192-acdf-0c3ebc473bb9 00:09:16.701 14:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:16.959 14:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:16.959 14:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:16.959 14:25:08 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:17.218 [2024-11-02 14:25:09.259226] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:17.476 14:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 05074ea4-878b-4192-acdf-0c3ebc473bb9 00:09:17.476 14:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:09:17.476 14:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 05074ea4-878b-4192-acdf-0c3ebc473bb9 00:09:17.476 14:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:17.476 14:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:17.476 14:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:17.476 14:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:17.476 14:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:17.476 14:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:17.476 14:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:17.476 14:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:17.476 14:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 05074ea4-878b-4192-acdf-0c3ebc473bb9 00:09:17.742 request: 00:09:17.742 { 00:09:17.742 "uuid": "05074ea4-878b-4192-acdf-0c3ebc473bb9", 00:09:17.742 "method": "bdev_lvol_get_lvstores", 00:09:17.742 "req_id": 1 00:09:17.742 } 00:09:17.742 Got JSON-RPC error response 00:09:17.742 response: 00:09:17.742 { 00:09:17.742 "code": -19, 00:09:17.742 "message": "No such device" 00:09:17.742 } 00:09:17.742 14:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:09:17.742 14:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:17.742 14:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:17.742 14:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:17.742 14:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:18.000 aio_bdev 00:09:18.000 14:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 9b5bdf16-85a9-43d5-b625-9efc45d4b47b 00:09:18.000 14:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=9b5bdf16-85a9-43d5-b625-9efc45d4b47b 00:09:18.000 14:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:18.000 14:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:09:18.000 14:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:18.000 14:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:18.000 14:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:18.258 14:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 9b5bdf16-85a9-43d5-b625-9efc45d4b47b -t 2000 00:09:18.516 [ 00:09:18.516 { 00:09:18.516 "name": "9b5bdf16-85a9-43d5-b625-9efc45d4b47b", 00:09:18.516 "aliases": [ 00:09:18.516 "lvs/lvol" 00:09:18.516 ], 00:09:18.516 "product_name": "Logical Volume", 00:09:18.516 "block_size": 4096, 00:09:18.516 "num_blocks": 38912, 00:09:18.516 "uuid": "9b5bdf16-85a9-43d5-b625-9efc45d4b47b", 00:09:18.516 "assigned_rate_limits": { 00:09:18.516 "rw_ios_per_sec": 0, 00:09:18.516 "rw_mbytes_per_sec": 0, 00:09:18.516 "r_mbytes_per_sec": 0, 00:09:18.516 "w_mbytes_per_sec": 0 00:09:18.516 }, 00:09:18.516 "claimed": false, 00:09:18.516 "zoned": false, 00:09:18.516 "supported_io_types": { 00:09:18.516 "read": true, 00:09:18.516 "write": true, 00:09:18.516 "unmap": true, 00:09:18.516 "flush": false, 00:09:18.516 "reset": true, 00:09:18.516 "nvme_admin": false, 00:09:18.516 "nvme_io": false, 00:09:18.516 "nvme_io_md": false, 00:09:18.516 "write_zeroes": true, 00:09:18.516 "zcopy": false, 00:09:18.516 "get_zone_info": false, 00:09:18.516 "zone_management": false, 00:09:18.516 "zone_append": false, 00:09:18.516 "compare": false, 00:09:18.516 "compare_and_write": false, 00:09:18.516 "abort": false, 00:09:18.516 "seek_hole": true, 00:09:18.516 "seek_data": true, 00:09:18.516 "copy": false, 00:09:18.516 "nvme_iov_md": false 00:09:18.516 }, 00:09:18.516 "driver_specific": { 00:09:18.516 "lvol": { 00:09:18.516 "lvol_store_uuid": "05074ea4-878b-4192-acdf-0c3ebc473bb9", 00:09:18.516 "base_bdev": "aio_bdev", 00:09:18.516 "thin_provision": false, 00:09:18.516 "num_allocated_clusters": 38, 00:09:18.516 "snapshot": false, 00:09:18.516 "clone": false, 00:09:18.516 "esnap_clone": false 00:09:18.516 } 00:09:18.516 } 00:09:18.516 } 00:09:18.516 ] 00:09:18.516 14:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:09:18.516 14:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 05074ea4-878b-4192-acdf-0c3ebc473bb9 00:09:18.516 
14:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:18.774 14:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:18.774 14:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 05074ea4-878b-4192-acdf-0c3ebc473bb9 00:09:18.774 14:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:19.031 14:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:19.031 14:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9b5bdf16-85a9-43d5-b625-9efc45d4b47b 00:09:19.289 14:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 05074ea4-878b-4192-acdf-0c3ebc473bb9 00:09:19.547 14:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:19.805 14:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:19.805 00:09:19.805 real 0m17.897s 00:09:19.805 user 0m17.447s 00:09:19.805 sys 0m1.827s 00:09:19.805 14:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:19.805 14:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:19.805 ************************************ 00:09:19.805 END TEST lvs_grow_clean 00:09:19.805 ************************************ 00:09:19.805 14:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:19.805 14:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:19.805 14:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:19.805 14:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:20.062 ************************************ 00:09:20.062 START TEST lvs_grow_dirty 00:09:20.062 ************************************ 00:09:20.062 14:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:09:20.062 14:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:20.062 14:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:20.062 14:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:20.062 14:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:20.062 14:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:20.062 14:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:20.062 14:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:20.062 14:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:20.062 14:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:20.320 14:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:20.320 14:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:20.578 14:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=2108993d-5b3b-43bf-b81b-59ba5a9641c6 00:09:20.578 14:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2108993d-5b3b-43bf-b81b-59ba5a9641c6 00:09:20.578 14:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:20.837 14:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:20.837 14:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:20.837 14:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2108993d-5b3b-43bf-b81b-59ba5a9641c6 lvol 150 00:09:21.095 14:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=f705e342-4205-465c-8c97-66bf24542126 00:09:21.095 14:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:21.095 14:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:21.353 [2024-11-02 14:25:13.252743] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:21.353 [2024-11-02 14:25:13.252845] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:21.353 true 00:09:21.353 14:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2108993d-5b3b-43bf-b81b-59ba5a9641c6 00:09:21.353 14:25:13 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:21.614 14:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:21.614 14:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:21.872 14:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f705e342-4205-465c-8c97-66bf24542126 00:09:22.130 14:25:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:22.388 [2024-11-02 14:25:14.412325] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:22.388 14:25:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:22.953 14:25:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:22.953 14:25:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1269131 00:09:22.953 14:25:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:22.953 14:25:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1269131 /var/tmp/bdevperf.sock 00:09:22.953 14:25:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1269131 ']' 00:09:22.953 14:25:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:22.953 14:25:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:22.953 14:25:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:22.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:22.953 14:25:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:22.953 14:25:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:22.953 [2024-11-02 14:25:14.750216] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
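The preceding entries show the dirty variant enlarging the backing AIO file and rescanning it before anything changes at the lvstore level; total_data_clusters stays at 49 until bdev_lvol_grow_lvstore runs later in the pass, and the NVMe/TCP export then repeats the sequence sketched earlier with the new lvol f705e342-4205-465c-8c97-66bf24542126. A sketch of the resize step, using the file, bdev name and lvstore UUID from the log:

aio=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
truncate -s 400M "$aio"           # grow the backing file from 200M to 400M
$rpc bdev_aio_rescan aio_bdev     # block count 51200 -> 102400 (4096 B blocks)
# the lvstore itself has not grown yet: still 49 data clusters of 4 MiB
$rpc bdev_lvol_get_lvstores -u 2108993d-5b3b-43bf-b81b-59ba5a9641c6 \
    | jq -r '.[0].total_data_clusters'

Only once bdevperf is already writing does the test issue bdev_lvol_grow_lvstore (nvmf_lvs_grow.sh@60 below), after which the same query reports 99 clusters.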
00:09:22.953 [2024-11-02 14:25:14.750312] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1269131 ] 00:09:22.954 [2024-11-02 14:25:14.806333] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.954 [2024-11-02 14:25:14.890896] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:22.954 14:25:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:22.954 14:25:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:22.954 14:25:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:23.519 Nvme0n1 00:09:23.519 14:25:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:23.777 [ 00:09:23.777 { 00:09:23.777 "name": "Nvme0n1", 00:09:23.777 "aliases": [ 00:09:23.777 "f705e342-4205-465c-8c97-66bf24542126" 00:09:23.777 ], 00:09:23.777 "product_name": "NVMe disk", 00:09:23.777 "block_size": 4096, 00:09:23.777 "num_blocks": 38912, 00:09:23.777 "uuid": "f705e342-4205-465c-8c97-66bf24542126", 00:09:23.777 "numa_id": 0, 00:09:23.778 "assigned_rate_limits": { 00:09:23.778 "rw_ios_per_sec": 0, 00:09:23.778 "rw_mbytes_per_sec": 0, 00:09:23.778 "r_mbytes_per_sec": 0, 00:09:23.778 "w_mbytes_per_sec": 0 00:09:23.778 }, 00:09:23.778 "claimed": false, 00:09:23.778 "zoned": false, 00:09:23.778 "supported_io_types": { 00:09:23.778 "read": true, 00:09:23.778 "write": true, 00:09:23.778 "unmap": true, 00:09:23.778 "flush": true, 00:09:23.778 "reset": true, 00:09:23.778 "nvme_admin": true, 00:09:23.778 "nvme_io": true, 00:09:23.778 "nvme_io_md": false, 00:09:23.778 "write_zeroes": true, 00:09:23.778 "zcopy": false, 00:09:23.778 "get_zone_info": false, 00:09:23.778 "zone_management": false, 00:09:23.778 "zone_append": false, 00:09:23.778 "compare": true, 00:09:23.778 "compare_and_write": true, 00:09:23.778 "abort": true, 00:09:23.778 "seek_hole": false, 00:09:23.778 "seek_data": false, 00:09:23.778 "copy": true, 00:09:23.778 "nvme_iov_md": false 00:09:23.778 }, 00:09:23.778 "memory_domains": [ 00:09:23.778 { 00:09:23.778 "dma_device_id": "system", 00:09:23.778 "dma_device_type": 1 00:09:23.778 } 00:09:23.778 ], 00:09:23.778 "driver_specific": { 00:09:23.778 "nvme": [ 00:09:23.778 { 00:09:23.778 "trid": { 00:09:23.778 "trtype": "TCP", 00:09:23.778 "adrfam": "IPv4", 00:09:23.778 "traddr": "10.0.0.2", 00:09:23.778 "trsvcid": "4420", 00:09:23.778 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:23.778 }, 00:09:23.778 "ctrlr_data": { 00:09:23.778 "cntlid": 1, 00:09:23.778 "vendor_id": "0x8086", 00:09:23.778 "model_number": "SPDK bdev Controller", 00:09:23.778 "serial_number": "SPDK0", 00:09:23.778 "firmware_revision": "24.09.1", 00:09:23.778 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:23.778 "oacs": { 00:09:23.778 "security": 0, 00:09:23.778 "format": 0, 00:09:23.778 "firmware": 0, 00:09:23.778 "ns_manage": 0 00:09:23.778 }, 00:09:23.778 "multi_ctrlr": true, 00:09:23.778 
"ana_reporting": false 00:09:23.778 }, 00:09:23.778 "vs": { 00:09:23.778 "nvme_version": "1.3" 00:09:23.778 }, 00:09:23.778 "ns_data": { 00:09:23.778 "id": 1, 00:09:23.778 "can_share": true 00:09:23.778 } 00:09:23.778 } 00:09:23.778 ], 00:09:23.778 "mp_policy": "active_passive" 00:09:23.778 } 00:09:23.778 } 00:09:23.778 ] 00:09:23.778 14:25:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1269266 00:09:23.778 14:25:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:23.778 14:25:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:23.778 Running I/O for 10 seconds... 00:09:24.712 Latency(us) 00:09:24.712 [2024-11-02T13:25:16.767Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:24.712 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:24.712 Nvme0n1 : 1.00 13861.00 54.14 0.00 0.00 0.00 0.00 0.00 00:09:24.712 [2024-11-02T13:25:16.767Z] =================================================================================================================== 00:09:24.712 [2024-11-02T13:25:16.767Z] Total : 13861.00 54.14 0.00 0.00 0.00 0.00 0.00 00:09:24.712 00:09:25.647 14:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 2108993d-5b3b-43bf-b81b-59ba5a9641c6 00:09:25.905 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:25.905 Nvme0n1 : 2.00 14079.50 55.00 0.00 0.00 0.00 0.00 0.00 00:09:25.905 [2024-11-02T13:25:17.960Z] =================================================================================================================== 00:09:25.905 [2024-11-02T13:25:17.960Z] Total : 14079.50 55.00 0.00 0.00 0.00 0.00 0.00 00:09:25.905 00:09:25.905 true 00:09:25.905 14:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2108993d-5b3b-43bf-b81b-59ba5a9641c6 00:09:25.905 14:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:26.163 14:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:26.163 14:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:26.163 14:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1269266 00:09:26.729 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:26.729 Nvme0n1 : 3.00 14237.00 55.61 0.00 0.00 0.00 0.00 0.00 00:09:26.729 [2024-11-02T13:25:18.784Z] =================================================================================================================== 00:09:26.729 [2024-11-02T13:25:18.784Z] Total : 14237.00 55.61 0.00 0.00 0.00 0.00 0.00 00:09:26.729 00:09:28.105 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:28.105 Nvme0n1 : 4.00 14345.25 56.04 0.00 0.00 0.00 0.00 0.00 00:09:28.105 [2024-11-02T13:25:20.160Z] 
=================================================================================================================== 00:09:28.105 [2024-11-02T13:25:20.160Z] Total : 14345.25 56.04 0.00 0.00 0.00 0.00 0.00 00:09:28.105 00:09:29.039 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:29.039 Nvme0n1 : 5.00 14424.40 56.35 0.00 0.00 0.00 0.00 0.00 00:09:29.039 [2024-11-02T13:25:21.094Z] =================================================================================================================== 00:09:29.039 [2024-11-02T13:25:21.094Z] Total : 14424.40 56.35 0.00 0.00 0.00 0.00 0.00 00:09:29.039 00:09:29.973 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:29.973 Nvme0n1 : 6.00 14466.33 56.51 0.00 0.00 0.00 0.00 0.00 00:09:29.973 [2024-11-02T13:25:22.028Z] =================================================================================================================== 00:09:29.973 [2024-11-02T13:25:22.028Z] Total : 14466.33 56.51 0.00 0.00 0.00 0.00 0.00 00:09:29.973 00:09:30.907 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:30.907 Nvme0n1 : 7.00 14487.86 56.59 0.00 0.00 0.00 0.00 0.00 00:09:30.907 [2024-11-02T13:25:22.962Z] =================================================================================================================== 00:09:30.907 [2024-11-02T13:25:22.962Z] Total : 14487.86 56.59 0.00 0.00 0.00 0.00 0.00 00:09:30.907 00:09:31.841 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:31.841 Nvme0n1 : 8.00 14519.38 56.72 0.00 0.00 0.00 0.00 0.00 00:09:31.841 [2024-11-02T13:25:23.896Z] =================================================================================================================== 00:09:31.841 [2024-11-02T13:25:23.896Z] Total : 14519.38 56.72 0.00 0.00 0.00 0.00 0.00 00:09:31.841 00:09:32.775 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:32.775 Nvme0n1 : 9.00 14544.00 56.81 0.00 0.00 0.00 0.00 0.00 00:09:32.775 [2024-11-02T13:25:24.830Z] =================================================================================================================== 00:09:32.775 [2024-11-02T13:25:24.830Z] Total : 14544.00 56.81 0.00 0.00 0.00 0.00 0.00 00:09:32.775 00:09:33.709 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:33.709 Nvme0n1 : 10.00 14583.40 56.97 0.00 0.00 0.00 0.00 0.00 00:09:33.709 [2024-11-02T13:25:25.764Z] =================================================================================================================== 00:09:33.709 [2024-11-02T13:25:25.764Z] Total : 14583.40 56.97 0.00 0.00 0.00 0.00 0.00 00:09:33.709 00:09:33.967 00:09:33.967 Latency(us) 00:09:33.967 [2024-11-02T13:25:26.022Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:33.967 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:33.967 Nvme0n1 : 10.01 14582.41 56.96 0.00 0.00 8772.84 5097.24 17476.27 00:09:33.967 [2024-11-02T13:25:26.022Z] =================================================================================================================== 00:09:33.967 [2024-11-02T13:25:26.022Z] Total : 14582.41 56.96 0.00 0.00 8772.84 5097.24 17476.27 00:09:33.967 { 00:09:33.967 "results": [ 00:09:33.967 { 00:09:33.967 "job": "Nvme0n1", 00:09:33.967 "core_mask": "0x2", 00:09:33.967 "workload": "randwrite", 00:09:33.967 "status": "finished", 00:09:33.967 "queue_depth": 128, 00:09:33.967 "io_size": 4096, 00:09:33.967 
"runtime": 10.009455, 00:09:33.967 "iops": 14582.412329142795, 00:09:33.967 "mibps": 56.96254816071404, 00:09:33.967 "io_failed": 0, 00:09:33.967 "io_timeout": 0, 00:09:33.967 "avg_latency_us": 8772.843446721547, 00:09:33.967 "min_latency_us": 5097.2444444444445, 00:09:33.967 "max_latency_us": 17476.266666666666 00:09:33.967 } 00:09:33.967 ], 00:09:33.967 "core_count": 1 00:09:33.967 } 00:09:33.967 14:25:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1269131 00:09:33.967 14:25:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 1269131 ']' 00:09:33.967 14:25:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 1269131 00:09:33.967 14:25:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:09:33.967 14:25:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:33.968 14:25:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1269131 00:09:33.968 14:25:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:33.968 14:25:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:33.968 14:25:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1269131' 00:09:33.968 killing process with pid 1269131 00:09:33.968 14:25:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 1269131 00:09:33.968 Received shutdown signal, test time was about 10.000000 seconds 00:09:33.968 00:09:33.968 Latency(us) 00:09:33.968 [2024-11-02T13:25:26.023Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:33.968 [2024-11-02T13:25:26.023Z] =================================================================================================================== 00:09:33.968 [2024-11-02T13:25:26.023Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:33.968 14:25:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 1269131 00:09:34.226 14:25:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:34.484 14:25:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:34.741 14:25:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2108993d-5b3b-43bf-b81b-59ba5a9641c6 00:09:34.741 14:25:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:35.000 14:25:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:35.000 14:25:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:35.000 14:25:26 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1266502 00:09:35.000 14:25:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1266502 00:09:35.000 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1266502 Killed "${NVMF_APP[@]}" "$@" 00:09:35.000 14:25:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:35.000 14:25:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:35.000 14:25:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:35.000 14:25:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:35.000 14:25:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:35.000 14:25:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # nvmfpid=1270619 00:09:35.000 14:25:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:35.000 14:25:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # waitforlisten 1270619 00:09:35.000 14:25:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1270619 ']' 00:09:35.000 14:25:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:35.000 14:25:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:35.000 14:25:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:35.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:35.000 14:25:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:35.000 14:25:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:35.000 [2024-11-02 14:25:26.960985] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:09:35.000 [2024-11-02 14:25:26.961077] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:35.000 [2024-11-02 14:25:27.028454] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.258 [2024-11-02 14:25:27.118471] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:35.258 [2024-11-02 14:25:27.118536] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:35.258 [2024-11-02 14:25:27.118565] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:35.258 [2024-11-02 14:25:27.118577] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
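This is the step that makes the pass dirty: the first nvmf_tgt is killed with SIGKILL so the grown lvstore is never unloaded cleanly, and a fresh target is started in its place before the backing file is re-attached. Roughly, with the pid and command line taken from the log above:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
kill -9 1266502                    # original nvmf_tgt; no clean lvstore unload
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
# re-creating the AIO bdev makes the new target replay the dirty metadata
$rpc bdev_aio_create \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
# expected on load: 'Performing recovery on blobstore', 'Recover: blob 0x0' / '0x1'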
00:09:35.258 [2024-11-02 14:25:27.118595] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:35.258 [2024-11-02 14:25:27.118624] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.258 14:25:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:35.258 14:25:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:35.258 14:25:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:35.258 14:25:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:35.258 14:25:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:35.258 14:25:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:35.258 14:25:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:35.516 [2024-11-02 14:25:27.516230] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:35.516 [2024-11-02 14:25:27.516381] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:35.516 [2024-11-02 14:25:27.516440] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:35.516 14:25:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:35.516 14:25:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev f705e342-4205-465c-8c97-66bf24542126 00:09:35.516 14:25:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=f705e342-4205-465c-8c97-66bf24542126 00:09:35.516 14:25:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:35.516 14:25:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:35.516 14:25:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:35.516 14:25:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:35.516 14:25:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:35.774 14:25:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f705e342-4205-465c-8c97-66bf24542126 -t 2000 00:09:36.032 [ 00:09:36.032 { 00:09:36.032 "name": "f705e342-4205-465c-8c97-66bf24542126", 00:09:36.032 "aliases": [ 00:09:36.032 "lvs/lvol" 00:09:36.032 ], 00:09:36.032 "product_name": "Logical Volume", 00:09:36.032 "block_size": 4096, 00:09:36.032 "num_blocks": 38912, 00:09:36.032 "uuid": "f705e342-4205-465c-8c97-66bf24542126", 00:09:36.032 "assigned_rate_limits": { 00:09:36.032 "rw_ios_per_sec": 0, 00:09:36.032 "rw_mbytes_per_sec": 0, 
00:09:36.032 "r_mbytes_per_sec": 0, 00:09:36.032 "w_mbytes_per_sec": 0 00:09:36.032 }, 00:09:36.032 "claimed": false, 00:09:36.032 "zoned": false, 00:09:36.032 "supported_io_types": { 00:09:36.032 "read": true, 00:09:36.032 "write": true, 00:09:36.032 "unmap": true, 00:09:36.032 "flush": false, 00:09:36.032 "reset": true, 00:09:36.032 "nvme_admin": false, 00:09:36.032 "nvme_io": false, 00:09:36.032 "nvme_io_md": false, 00:09:36.032 "write_zeroes": true, 00:09:36.032 "zcopy": false, 00:09:36.032 "get_zone_info": false, 00:09:36.032 "zone_management": false, 00:09:36.032 "zone_append": false, 00:09:36.032 "compare": false, 00:09:36.032 "compare_and_write": false, 00:09:36.032 "abort": false, 00:09:36.032 "seek_hole": true, 00:09:36.032 "seek_data": true, 00:09:36.032 "copy": false, 00:09:36.032 "nvme_iov_md": false 00:09:36.032 }, 00:09:36.032 "driver_specific": { 00:09:36.032 "lvol": { 00:09:36.032 "lvol_store_uuid": "2108993d-5b3b-43bf-b81b-59ba5a9641c6", 00:09:36.032 "base_bdev": "aio_bdev", 00:09:36.032 "thin_provision": false, 00:09:36.032 "num_allocated_clusters": 38, 00:09:36.032 "snapshot": false, 00:09:36.032 "clone": false, 00:09:36.032 "esnap_clone": false 00:09:36.032 } 00:09:36.032 } 00:09:36.032 } 00:09:36.032 ] 00:09:36.032 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:36.032 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2108993d-5b3b-43bf-b81b-59ba5a9641c6 00:09:36.032 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:36.598 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:36.598 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2108993d-5b3b-43bf-b81b-59ba5a9641c6 00:09:36.598 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:36.856 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:36.856 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:37.114 [2024-11-02 14:25:28.941874] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:37.114 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2108993d-5b3b-43bf-b81b-59ba5a9641c6 00:09:37.114 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:09:37.114 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2108993d-5b3b-43bf-b81b-59ba5a9641c6 00:09:37.114 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:37.114 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:37.114 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:37.114 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:37.114 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:37.114 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:37.114 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:37.114 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:37.114 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2108993d-5b3b-43bf-b81b-59ba5a9641c6 00:09:37.371 request: 00:09:37.371 { 00:09:37.371 "uuid": "2108993d-5b3b-43bf-b81b-59ba5a9641c6", 00:09:37.371 "method": "bdev_lvol_get_lvstores", 00:09:37.371 "req_id": 1 00:09:37.371 } 00:09:37.371 Got JSON-RPC error response 00:09:37.371 response: 00:09:37.371 { 00:09:37.371 "code": -19, 00:09:37.371 "message": "No such device" 00:09:37.371 } 00:09:37.372 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:09:37.372 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:37.372 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:37.372 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:37.372 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:37.629 aio_bdev 00:09:37.629 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f705e342-4205-465c-8c97-66bf24542126 00:09:37.629 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=f705e342-4205-465c-8c97-66bf24542126 00:09:37.629 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:37.629 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:37.629 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:37.629 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:37.629 14:25:29 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:37.885 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f705e342-4205-465c-8c97-66bf24542126 -t 2000 00:09:38.143 [ 00:09:38.143 { 00:09:38.143 "name": "f705e342-4205-465c-8c97-66bf24542126", 00:09:38.143 "aliases": [ 00:09:38.143 "lvs/lvol" 00:09:38.143 ], 00:09:38.143 "product_name": "Logical Volume", 00:09:38.143 "block_size": 4096, 00:09:38.143 "num_blocks": 38912, 00:09:38.143 "uuid": "f705e342-4205-465c-8c97-66bf24542126", 00:09:38.143 "assigned_rate_limits": { 00:09:38.143 "rw_ios_per_sec": 0, 00:09:38.143 "rw_mbytes_per_sec": 0, 00:09:38.143 "r_mbytes_per_sec": 0, 00:09:38.143 "w_mbytes_per_sec": 0 00:09:38.143 }, 00:09:38.143 "claimed": false, 00:09:38.143 "zoned": false, 00:09:38.143 "supported_io_types": { 00:09:38.143 "read": true, 00:09:38.143 "write": true, 00:09:38.143 "unmap": true, 00:09:38.143 "flush": false, 00:09:38.143 "reset": true, 00:09:38.143 "nvme_admin": false, 00:09:38.143 "nvme_io": false, 00:09:38.143 "nvme_io_md": false, 00:09:38.143 "write_zeroes": true, 00:09:38.143 "zcopy": false, 00:09:38.143 "get_zone_info": false, 00:09:38.143 "zone_management": false, 00:09:38.143 "zone_append": false, 00:09:38.143 "compare": false, 00:09:38.143 "compare_and_write": false, 00:09:38.143 "abort": false, 00:09:38.143 "seek_hole": true, 00:09:38.143 "seek_data": true, 00:09:38.143 "copy": false, 00:09:38.143 "nvme_iov_md": false 00:09:38.143 }, 00:09:38.143 "driver_specific": { 00:09:38.143 "lvol": { 00:09:38.143 "lvol_store_uuid": "2108993d-5b3b-43bf-b81b-59ba5a9641c6", 00:09:38.143 "base_bdev": "aio_bdev", 00:09:38.143 "thin_provision": false, 00:09:38.143 "num_allocated_clusters": 38, 00:09:38.143 "snapshot": false, 00:09:38.143 "clone": false, 00:09:38.143 "esnap_clone": false 00:09:38.143 } 00:09:38.143 } 00:09:38.143 } 00:09:38.143 ] 00:09:38.143 14:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:38.143 14:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:38.143 14:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2108993d-5b3b-43bf-b81b-59ba5a9641c6 00:09:38.435 14:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:38.435 14:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2108993d-5b3b-43bf-b81b-59ba5a9641c6 00:09:38.435 14:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:38.722 14:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:38.722 14:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f705e342-4205-465c-8c97-66bf24542126 00:09:38.981 14:25:30 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2108993d-5b3b-43bf-b81b-59ba5a9641c6 00:09:39.239 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:39.497 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:39.497 00:09:39.497 real 0m19.597s 00:09:39.497 user 0m49.625s 00:09:39.497 sys 0m4.546s 00:09:39.497 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:39.497 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:39.497 ************************************ 00:09:39.497 END TEST lvs_grow_dirty 00:09:39.497 ************************************ 00:09:39.497 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:39.497 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:09:39.497 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:09:39.497 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:09:39.497 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:39.497 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:09:39.497 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:09:39.497 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:09:39.497 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:39.497 nvmf_trace.0 00:09:39.497 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:09:39.497 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:39.497 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:39.497 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:39.498 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:39.498 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:39.498 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:39.498 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:39.498 rmmod nvme_tcp 00:09:39.756 rmmod nvme_fabrics 00:09:39.756 rmmod nvme_keyring 00:09:39.756 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:39.756 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:39.756 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:39.756 
14:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@513 -- # '[' -n 1270619 ']' 00:09:39.756 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # killprocess 1270619 00:09:39.756 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 1270619 ']' 00:09:39.756 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 1270619 00:09:39.756 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:09:39.756 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:39.756 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1270619 00:09:39.756 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:39.756 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:39.756 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1270619' 00:09:39.756 killing process with pid 1270619 00:09:39.756 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 1270619 00:09:39.756 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 1270619 00:09:40.016 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:40.016 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:40.016 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:40.016 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:40.016 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-save 00:09:40.016 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:40.016 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-restore 00:09:40.016 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:40.016 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:40.016 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:40.016 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:40.016 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:41.922 14:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:41.922 00:09:41.922 real 0m42.925s 00:09:41.922 user 1m13.169s 00:09:41.922 sys 0m8.324s 00:09:41.922 14:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:41.922 14:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:41.922 ************************************ 00:09:41.922 END TEST nvmf_lvs_grow 00:09:41.922 ************************************ 00:09:41.922 14:25:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:41.922 14:25:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:41.922 14:25:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:41.922 14:25:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:42.182 ************************************ 00:09:42.182 START TEST nvmf_bdev_io_wait 00:09:42.182 ************************************ 00:09:42.182 14:25:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:42.182 * Looking for test storage... 00:09:42.182 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:42.182 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:42.182 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:09:42.182 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:42.182 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:42.182 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:42.182 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:42.182 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:42.182 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:42.182 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:42.182 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:42.182 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:42.182 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:42.182 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:42.182 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:42.182 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:42.182 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:42.182 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:09:42.182 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:42.182 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:42.182 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:42.182 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:42.182 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:42.182 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:42.182 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:42.182 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:42.182 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:42.182 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:42.182 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:42.182 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:42.182 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:42.182 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:42.182 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:42.182 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:42.182 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:42.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.182 --rc genhtml_branch_coverage=1 00:09:42.182 --rc genhtml_function_coverage=1 00:09:42.182 --rc genhtml_legend=1 00:09:42.182 --rc geninfo_all_blocks=1 00:09:42.182 --rc geninfo_unexecuted_blocks=1 00:09:42.182 00:09:42.182 ' 00:09:42.182 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:42.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.182 --rc genhtml_branch_coverage=1 00:09:42.182 --rc genhtml_function_coverage=1 00:09:42.182 --rc genhtml_legend=1 00:09:42.182 --rc geninfo_all_blocks=1 00:09:42.182 --rc geninfo_unexecuted_blocks=1 00:09:42.182 00:09:42.182 ' 00:09:42.182 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:42.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.182 --rc genhtml_branch_coverage=1 00:09:42.182 --rc genhtml_function_coverage=1 00:09:42.182 --rc genhtml_legend=1 00:09:42.182 --rc geninfo_all_blocks=1 00:09:42.182 --rc geninfo_unexecuted_blocks=1 00:09:42.182 00:09:42.182 ' 00:09:42.182 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:42.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.182 --rc genhtml_branch_coverage=1 00:09:42.182 --rc genhtml_function_coverage=1 00:09:42.182 --rc genhtml_legend=1 00:09:42.182 --rc geninfo_all_blocks=1 00:09:42.182 --rc geninfo_unexecuted_blocks=1 00:09:42.182 00:09:42.182 ' 00:09:42.182 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:42.182 14:25:34 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:42.182 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:42.182 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:42.182 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:42.182 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:42.182 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:42.182 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:42.182 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:42.182 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:42.182 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:42.182 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:42.182 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:42.182 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:42.182 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:42.182 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:42.182 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:42.182 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:42.182 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:42.182 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:42.182 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:42.182 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:42.182 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:42.182 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.182 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.182 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.182 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:42.182 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.182 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:42.182 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:42.182 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:42.183 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:42.183 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:42.183 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:42.183 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:42.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:42.183 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:42.183 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:42.183 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:42.183 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:42.183 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:09:42.183 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:42.183 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:42.183 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:42.183 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:42.183 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:42.183 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:42.183 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:42.183 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:42.183 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:42.183 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:09:42.183 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:09:42.183 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:09:42.183 14:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:44.715 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:44.715 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:44.715 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:44.715 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # is_hw=yes 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:44.715 14:25:36 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:44.715 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:44.716 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:44.716 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:44.716 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:44.716 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:44.716 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.380 ms 00:09:44.716 00:09:44.716 --- 10.0.0.2 ping statistics --- 00:09:44.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:44.716 rtt min/avg/max/mdev = 0.380/0.380/0.380/0.000 ms 00:09:44.716 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:44.716 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:44.716 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:09:44.716 00:09:44.716 --- 10.0.0.1 ping statistics --- 00:09:44.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:44.716 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:09:44.716 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:44.716 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # return 0 00:09:44.716 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:44.716 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:44.716 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:44.716 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:44.716 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:44.716 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:44.716 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:44.716 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:44.716 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:44.716 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:44.716 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:44.716 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # nvmfpid=1273162 00:09:44.716 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:44.716 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # waitforlisten 1273162 00:09:44.716 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 1273162 ']' 00:09:44.716 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:44.716 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:44.716 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:44.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:44.716 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:44.716 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:44.716 [2024-11-02 14:25:36.386300] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
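The nvmfappstart/waitforlisten pair above amounts to launching nvmf_tgt inside the target namespace and polling its RPC socket before any rpc_cmd is issued. A minimal by-hand sketch, assuming the spdk checkout as the working directory, the cvl_0_0_ns_spdk namespace created earlier, and the default /var/tmp/spdk.sock socket (the polling loop is a simplified stand-in for the waitforlisten helper, not the helper itself):

    # start the target on 4 cores, holding off subsystem init until RPC says go
    sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &

    # wait until the RPC server answers on the default socket
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done

    # settings that must precede framework init, then bring the framework and TCP transport up
    ./scripts/rpc.py bdev_set_options -p 5 -c 1
    ./scripts/rpc.py framework_start_init
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192

These three RPCs mirror the rpc_cmd calls that appear a little further down in the log.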
00:09:44.716 [2024-11-02 14:25:36.386398] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:44.716 [2024-11-02 14:25:36.451615] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:44.716 [2024-11-02 14:25:36.541753] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:44.716 [2024-11-02 14:25:36.541828] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:44.716 [2024-11-02 14:25:36.541856] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:44.716 [2024-11-02 14:25:36.541867] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:44.716 [2024-11-02 14:25:36.541876] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:44.716 [2024-11-02 14:25:36.541968] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:44.716 [2024-11-02 14:25:36.542044] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:09:44.716 [2024-11-02 14:25:36.542113] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:09:44.716 [2024-11-02 14:25:36.542116] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.716 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:44.716 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:09:44.716 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:44.716 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:44.716 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:44.716 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:44.716 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:44.716 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.716 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:44.716 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.716 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:44.716 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.716 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:44.716 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.716 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:44.716 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.716 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:09:44.716 [2024-11-02 14:25:36.726640] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:44.716 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.716 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:44.716 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.716 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:44.716 Malloc0 00:09:44.716 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.716 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:44.716 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.716 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:44.975 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.975 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:44.975 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.975 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:44.975 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.975 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:44.975 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.975 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:44.975 [2024-11-02 14:25:36.786211] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:44.975 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.975 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1273303 00:09:44.975 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1273305 00:09:44.975 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:44.975 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:44.975 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:09:44.975 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:09:44.975 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:44.975 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1273307 00:09:44.975 14:25:36 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:44.975 { 00:09:44.975 "params": { 00:09:44.975 "name": "Nvme$subsystem", 00:09:44.975 "trtype": "$TEST_TRANSPORT", 00:09:44.975 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:44.975 "adrfam": "ipv4", 00:09:44.975 "trsvcid": "$NVMF_PORT", 00:09:44.975 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:44.975 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:44.975 "hdgst": ${hdgst:-false}, 00:09:44.975 "ddgst": ${ddgst:-false} 00:09:44.975 }, 00:09:44.975 "method": "bdev_nvme_attach_controller" 00:09:44.975 } 00:09:44.975 EOF 00:09:44.975 )") 00:09:44.975 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:44.975 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:44.975 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:09:44.975 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:09:44.975 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:44.975 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:44.975 { 00:09:44.975 "params": { 00:09:44.975 "name": "Nvme$subsystem", 00:09:44.975 "trtype": "$TEST_TRANSPORT", 00:09:44.975 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:44.975 "adrfam": "ipv4", 00:09:44.975 "trsvcid": "$NVMF_PORT", 00:09:44.975 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:44.975 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:44.975 "hdgst": ${hdgst:-false}, 00:09:44.975 "ddgst": ${ddgst:-false} 00:09:44.975 }, 00:09:44.975 "method": "bdev_nvme_attach_controller" 00:09:44.975 } 00:09:44.975 EOF 00:09:44.975 )") 00:09:44.975 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1273309 00:09:44.975 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:44.975 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:44.975 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:44.975 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:09:44.975 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:09:44.975 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:09:44.975 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:44.975 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:44.975 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:44.975 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:44.975 { 00:09:44.975 
"params": { 00:09:44.975 "name": "Nvme$subsystem", 00:09:44.975 "trtype": "$TEST_TRANSPORT", 00:09:44.975 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:44.975 "adrfam": "ipv4", 00:09:44.975 "trsvcid": "$NVMF_PORT", 00:09:44.975 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:44.975 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:44.975 "hdgst": ${hdgst:-false}, 00:09:44.975 "ddgst": ${ddgst:-false} 00:09:44.975 }, 00:09:44.975 "method": "bdev_nvme_attach_controller" 00:09:44.976 } 00:09:44.976 EOF 00:09:44.976 )") 00:09:44.976 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:09:44.976 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:09:44.976 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:44.976 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:44.976 { 00:09:44.976 "params": { 00:09:44.976 "name": "Nvme$subsystem", 00:09:44.976 "trtype": "$TEST_TRANSPORT", 00:09:44.976 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:44.976 "adrfam": "ipv4", 00:09:44.976 "trsvcid": "$NVMF_PORT", 00:09:44.976 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:44.976 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:44.976 "hdgst": ${hdgst:-false}, 00:09:44.976 "ddgst": ${ddgst:-false} 00:09:44.976 }, 00:09:44.976 "method": "bdev_nvme_attach_controller" 00:09:44.976 } 00:09:44.976 EOF 00:09:44.976 )") 00:09:44.976 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:09:44.976 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:09:44.976 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:09:44.976 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1273303 00:09:44.976 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:09:44.976 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:09:44.976 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:09:44.976 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 
00:09:44.976 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:09:44.976 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:09:44.976 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:44.976 "params": { 00:09:44.976 "name": "Nvme1", 00:09:44.976 "trtype": "tcp", 00:09:44.976 "traddr": "10.0.0.2", 00:09:44.976 "adrfam": "ipv4", 00:09:44.976 "trsvcid": "4420", 00:09:44.976 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:44.976 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:44.976 "hdgst": false, 00:09:44.976 "ddgst": false 00:09:44.976 }, 00:09:44.976 "method": "bdev_nvme_attach_controller" 00:09:44.976 }' 00:09:44.976 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:44.976 "params": { 00:09:44.976 "name": "Nvme1", 00:09:44.976 "trtype": "tcp", 00:09:44.976 "traddr": "10.0.0.2", 00:09:44.976 "adrfam": "ipv4", 00:09:44.976 "trsvcid": "4420", 00:09:44.976 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:44.976 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:44.976 "hdgst": false, 00:09:44.976 "ddgst": false 00:09:44.976 }, 00:09:44.976 "method": "bdev_nvme_attach_controller" 00:09:44.976 }' 00:09:44.976 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:09:44.976 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:44.976 "params": { 00:09:44.976 "name": "Nvme1", 00:09:44.976 "trtype": "tcp", 00:09:44.976 "traddr": "10.0.0.2", 00:09:44.976 "adrfam": "ipv4", 00:09:44.976 "trsvcid": "4420", 00:09:44.976 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:44.976 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:44.976 "hdgst": false, 00:09:44.976 "ddgst": false 00:09:44.976 }, 00:09:44.976 "method": "bdev_nvme_attach_controller" 00:09:44.976 }' 00:09:44.976 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:09:44.976 14:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:44.976 "params": { 00:09:44.976 "name": "Nvme1", 00:09:44.976 "trtype": "tcp", 00:09:44.976 "traddr": "10.0.0.2", 00:09:44.976 "adrfam": "ipv4", 00:09:44.976 "trsvcid": "4420", 00:09:44.976 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:44.976 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:44.976 "hdgst": false, 00:09:44.976 "ddgst": false 00:09:44.976 }, 00:09:44.976 "method": "bdev_nvme_attach_controller" 00:09:44.976 }' 00:09:44.976 [2024-11-02 14:25:36.836434] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:09:44.976 [2024-11-02 14:25:36.836435] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:09:44.976 [2024-11-02 14:25:36.836435] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:44.976 [2024-11-02 14:25:36.836522] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:44.976 [2024-11-02 14:25:36.836522] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:44.976 [2024-11-02 14:25:36.836523] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:44.976 [2024-11-02 14:25:36.838112] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:09:44.976 [2024-11-02 14:25:36.838194] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:44.976 [2024-11-02 14:25:37.011712] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.234 [2024-11-02 14:25:37.086555] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:09:45.234 [2024-11-02 14:25:37.109971] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.234 [2024-11-02 14:25:37.184452] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:09:45.234 [2024-11-02 14:25:37.206682] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.234 [2024-11-02 14:25:37.282032] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:09:45.492 [2024-11-02 14:25:37.308687] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.492 [2024-11-02 14:25:37.378133] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:09:45.492 Running I/O for 1 seconds... 00:09:45.751 Running I/O for 1 seconds... 00:09:45.751 Running I/O for 1 seconds... 00:09:46.009 Running I/O for 1 seconds...
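For reference, any one of the four jobs can be rerun by hand against the same subsystem while the target is listening; a sketch, assuming the target from earlier is still up at 10.0.0.2:4420 and using an explicit config file in place of the process substitution. Only the controller parameters appear verbatim in the log above; the surrounding "subsystems"/"bdev" wrapper is the conventional SPDK JSON-config layout, and the /tmp/nvme1.json path is made up for the example:

    /tmp/nvme1.json:
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }

    # queue depth 128, 4096-byte writes for 1 second on core mask 0x10
    ./build/examples/bdevperf -m 0x10 -i 1 --json /tmp/nvme1.json -q 128 -o 4096 -w write -t 1 -s 256

The per-workload latency tables that follow report the measured IOPS and throughput for the write, read, flush and unmap jobs in turn.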
00:09:46.574 6357.00 IOPS, 24.83 MiB/s
00:09:46.574 Latency(us)
00:09:46.574 [2024-11-02T13:25:38.629Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:46.574 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:09:46.574 Nvme1n1 : 1.02 6336.01 24.75 0.00 0.00 19868.15 6456.51 34175.81
00:09:46.574 [2024-11-02T13:25:38.629Z] ===================================================================================================================
00:09:46.574 [2024-11-02T13:25:38.629Z] Total : 6336.01 24.75 0.00 0.00 19868.15 6456.51 34175.81
00:09:46.574 9485.00 IOPS, 37.05 MiB/s
00:09:46.574 Latency(us)
00:09:46.574 [2024-11-02T13:25:38.629Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:46.574 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:09:46.574 Nvme1n1 : 1.01 9524.86 37.21 0.00 0.00 13370.90 7961.41 23204.60
00:09:46.574 [2024-11-02T13:25:38.629Z] ===================================================================================================================
00:09:46.574 [2024-11-02T13:25:38.629Z] Total : 9524.86 37.21 0.00 0.00 13370.90 7961.41 23204.60
00:09:46.832 14:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1273305
00:09:46.832 14:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1273307
00:09:46.832 163424.00 IOPS, 638.38 MiB/s
00:09:46.832 Latency(us)
00:09:46.832 [2024-11-02T13:25:38.887Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:46.832 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:09:46.832 Nvme1n1 : 1.00 162962.18 636.57 0.00 0.00 780.62 446.01 2779.21
00:09:46.832 [2024-11-02T13:25:38.887Z] ===================================================================================================================
00:09:46.832 [2024-11-02T13:25:38.887Z] Total : 162962.18 636.57 0.00 0.00 780.62 446.01 2779.21
00:09:47.090 8162.00 IOPS, 31.88 MiB/s
00:09:47.090 Latency(us)
00:09:47.090 [2024-11-02T13:25:39.145Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:47.090 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:09:47.090 Nvme1n1 : 1.01 8282.88 32.35 0.00 0.00 15407.88 4538.97 52817.16
00:09:47.090 [2024-11-02T13:25:39.145Z] ===================================================================================================================
00:09:47.090 [2024-11-02T13:25:39.145Z] Total : 8282.88 32.35 0.00 0.00 15407.88 4538.97 52817.16
00:09:47.090 14:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1273309
00:09:47.348 14:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:09:47.348 14:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:47.348 14:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:09:47.348 14:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:47.348 14:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:09:47.348 14:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini
00:09:47.348 14:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- #
nvmfcleanup 00:09:47.348 14:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:47.348 14:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:47.348 14:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:47.348 14:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:47.348 14:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:47.348 rmmod nvme_tcp 00:09:47.348 rmmod nvme_fabrics 00:09:47.348 rmmod nvme_keyring 00:09:47.348 14:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:47.348 14:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:47.348 14:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:47.348 14:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@513 -- # '[' -n 1273162 ']' 00:09:47.348 14:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # killprocess 1273162 00:09:47.348 14:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 1273162 ']' 00:09:47.348 14:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 1273162 00:09:47.348 14:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:09:47.348 14:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:47.348 14:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1273162 00:09:47.348 14:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:47.348 14:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:47.348 14:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1273162' 00:09:47.348 killing process with pid 1273162 00:09:47.349 14:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 1273162 00:09:47.349 14:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 1273162 00:09:47.607 14:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:47.607 14:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:47.607 14:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:47.607 14:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:47.607 14:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-save 00:09:47.607 14:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:47.607 14:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-restore 00:09:47.607 14:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:47.607 14:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:47.607 14:25:39 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:47.607 14:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:47.607 14:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:50.145 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:50.145 00:09:50.145 real 0m7.590s 00:09:50.145 user 0m17.736s 00:09:50.145 sys 0m3.776s 00:09:50.145 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:50.145 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:50.145 ************************************ 00:09:50.145 END TEST nvmf_bdev_io_wait 00:09:50.145 ************************************ 00:09:50.145 14:25:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:50.145 14:25:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:50.145 14:25:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:50.145 14:25:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:50.145 ************************************ 00:09:50.145 START TEST nvmf_queue_depth 00:09:50.145 ************************************ 00:09:50.145 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:50.145 * Looking for test storage... 
00:09:50.145 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:50.145 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:50.145 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:09:50.145 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:50.145 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:50.145 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:50.145 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:50.145 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:50.145 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:50.145 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:50.145 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:50.145 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:50.145 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:50.145 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:50.145 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:50.145 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:50.145 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:50.145 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:50.145 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:50.145 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:50.145 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:50.145 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:50.145 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:50.145 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:50.145 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:50.145 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:50.145 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:50.145 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:50.145 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:50.145 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:50.145 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:50.145 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:50.145 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:50.145 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:50.145 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:50.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.145 --rc genhtml_branch_coverage=1 00:09:50.145 --rc genhtml_function_coverage=1 00:09:50.145 --rc genhtml_legend=1 00:09:50.145 --rc geninfo_all_blocks=1 00:09:50.145 --rc geninfo_unexecuted_blocks=1 00:09:50.145 00:09:50.145 ' 00:09:50.145 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:50.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.145 --rc genhtml_branch_coverage=1 00:09:50.145 --rc genhtml_function_coverage=1 00:09:50.145 --rc genhtml_legend=1 00:09:50.145 --rc geninfo_all_blocks=1 00:09:50.145 --rc geninfo_unexecuted_blocks=1 00:09:50.145 00:09:50.145 ' 00:09:50.145 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:50.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.145 --rc genhtml_branch_coverage=1 00:09:50.145 --rc genhtml_function_coverage=1 00:09:50.145 --rc genhtml_legend=1 00:09:50.145 --rc geninfo_all_blocks=1 00:09:50.145 --rc geninfo_unexecuted_blocks=1 00:09:50.145 00:09:50.145 ' 00:09:50.145 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:50.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.145 --rc genhtml_branch_coverage=1 00:09:50.145 --rc genhtml_function_coverage=1 00:09:50.145 --rc genhtml_legend=1 00:09:50.145 --rc geninfo_all_blocks=1 00:09:50.145 --rc geninfo_unexecuted_blocks=1 00:09:50.145 00:09:50.145 ' 00:09:50.145 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:50.145 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:09:50.145 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:50.145 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:50.146 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:50.146 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:50.146 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:50.146 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:50.146 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:50.146 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:50.146 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:50.146 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:50.146 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:50.146 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:50.146 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:50.146 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:50.146 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:50.146 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:50.146 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:50.146 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:50.146 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:50.146 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:50.146 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:50.146 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.146 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.146 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.146 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:50.146 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.146 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:50.146 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:50.146 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:50.146 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:50.146 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:50.146 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:50.146 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:50.146 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:50.146 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:50.146 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:50.146 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:50.146 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:50.146 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:09:50.146 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:50.146 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:50.146 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:50.146 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:50.146 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:50.146 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:50.146 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:50.146 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:50.146 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:50.146 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:50.146 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:09:50.146 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:09:50.146 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:09:50.146 14:25:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:52.048 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:52.048 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:52.048 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:52.048 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # is_hw=yes 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:52.048 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:52.049 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:52.049 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:52.049 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:52.049 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:52.049 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:52.049 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.352 ms 00:09:52.049 00:09:52.049 --- 10.0.0.2 ping statistics --- 00:09:52.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:52.049 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms 00:09:52.049 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:52.049 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:52.049 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:09:52.049 00:09:52.049 --- 10.0.0.1 ping statistics --- 00:09:52.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:52.049 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:09:52.049 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:52.049 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # return 0 00:09:52.049 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:52.049 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:52.049 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:52.049 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:52.049 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:52.049 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:52.049 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:52.049 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:52.049 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:52.049 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:52.049 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:52.049 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # nvmfpid=1275545 00:09:52.049 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:52.049 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # waitforlisten 1275545 00:09:52.049 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1275545 ']' 00:09:52.049 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:52.049 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:52.049 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:52.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:52.049 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:52.049 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:52.049 [2024-11-02 14:25:44.029029] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
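Stripped of the xtrace noise, the target/initiator split used above is two ports of the same NIC separated by a network namespace. A condensed sketch of the bring-up, using the interface names (cvl_0_0/cvl_0_1), the 10.0.0.0/24 addressing, and the nvmf_tgt flags captured in this run (requires root; nvmf_tgt path relative to the SPDK repo):
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator port stays in the default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic reach the initiator side
  ping -c 1 10.0.0.2                                                 # initiator -> target reachability check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator reachability check
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2   # target app runs inside the namespace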
00:09:52.049 [2024-11-02 14:25:44.029105] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:52.308 [2024-11-02 14:25:44.102689] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.308 [2024-11-02 14:25:44.192437] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:52.308 [2024-11-02 14:25:44.192500] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:52.308 [2024-11-02 14:25:44.192516] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:52.308 [2024-11-02 14:25:44.192529] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:52.308 [2024-11-02 14:25:44.192541] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:52.308 [2024-11-02 14:25:44.192580] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:52.308 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:52.308 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:52.309 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:52.309 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:52.309 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:52.309 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:52.309 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:52.309 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.309 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:52.309 [2024-11-02 14:25:44.338239] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:52.309 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.309 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:52.309 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.309 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:52.568 Malloc0 00:09:52.568 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.568 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:52.568 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.568 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:52.568 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.568 14:25:44 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:52.568 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.568 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:52.568 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.568 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:52.568 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.568 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:52.568 [2024-11-02 14:25:44.406509] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:52.568 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.568 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1275664 00:09:52.568 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:52.568 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:52.568 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1275664 /var/tmp/bdevperf.sock 00:09:52.568 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1275664 ']' 00:09:52.568 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:52.568 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:52.568 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:52.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:52.568 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:52.568 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:52.568 [2024-11-02 14:25:44.454986] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
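The queue-depth setup traced above reduces to a handful of RPCs plus one bdevperf invocation; rpc_cmd in the trace is the test framework's wrapper around scripts/rpc.py. A sketch, assuming the SPDK repo root as working directory and the default /var/tmp/spdk.sock target socket; the last two steps correspond to the controller attach and perform_tests calls that follow in the log:
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                # 64 MiB malloc bdev, 512-byte blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # initiator side: -z makes bdevperf wait on its own RPC socket until perform_tests is issued
  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests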
00:09:52.568 [2024-11-02 14:25:44.455063] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1275664 ] 00:09:52.568 [2024-11-02 14:25:44.517309] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.568 [2024-11-02 14:25:44.608755] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.827 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:52.827 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:52.827 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:52.827 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.827 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:52.827 NVMe0n1 00:09:52.827 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.827 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:53.085 Running I/O for 10 seconds... 00:09:54.957 7735.00 IOPS, 30.21 MiB/s [2024-11-02T13:25:48.390Z] 7682.00 IOPS, 30.01 MiB/s [2024-11-02T13:25:49.327Z] 7816.67 IOPS, 30.53 MiB/s [2024-11-02T13:25:50.264Z] 7753.75 IOPS, 30.29 MiB/s [2024-11-02T13:25:51.199Z] 7779.40 IOPS, 30.39 MiB/s [2024-11-02T13:25:52.133Z] 7830.00 IOPS, 30.59 MiB/s [2024-11-02T13:25:53.186Z] 7794.86 IOPS, 30.45 MiB/s [2024-11-02T13:25:54.122Z] 7799.25 IOPS, 30.47 MiB/s [2024-11-02T13:25:55.057Z] 7830.44 IOPS, 30.59 MiB/s [2024-11-02T13:25:55.315Z] 7827.20 IOPS, 30.57 MiB/s 00:10:03.260 Latency(us) 00:10:03.260 [2024-11-02T13:25:55.316Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:03.261 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:03.261 Verification LBA range: start 0x0 length 0x4000 00:10:03.261 NVMe0n1 : 10.09 7855.10 30.68 0.00 0.00 129671.42 20777.34 76507.21 00:10:03.261 [2024-11-02T13:25:55.316Z] =================================================================================================================== 00:10:03.261 [2024-11-02T13:25:55.316Z] Total : 7855.10 30.68 0.00 0.00 129671.42 20777.34 76507.21 00:10:03.261 { 00:10:03.261 "results": [ 00:10:03.261 { 00:10:03.261 "job": "NVMe0n1", 00:10:03.261 "core_mask": "0x1", 00:10:03.261 "workload": "verify", 00:10:03.261 "status": "finished", 00:10:03.261 "verify_range": { 00:10:03.261 "start": 0, 00:10:03.261 "length": 16384 00:10:03.261 }, 00:10:03.261 "queue_depth": 1024, 00:10:03.261 "io_size": 4096, 00:10:03.261 "runtime": 10.089753, 00:10:03.261 "iops": 7855.098137684838, 00:10:03.261 "mibps": 30.683977100331397, 00:10:03.261 "io_failed": 0, 00:10:03.261 "io_timeout": 0, 00:10:03.261 "avg_latency_us": 129671.4227740206, 00:10:03.261 "min_latency_us": 20777.33925925926, 00:10:03.261 "max_latency_us": 76507.21185185185 00:10:03.261 } 00:10:03.261 ], 00:10:03.261 "core_count": 1 00:10:03.261 } 00:10:03.261 14:25:55 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1275664 00:10:03.261 14:25:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1275664 ']' 00:10:03.261 14:25:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1275664 00:10:03.261 14:25:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:10:03.261 14:25:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:03.261 14:25:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1275664 00:10:03.261 14:25:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:03.261 14:25:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:03.261 14:25:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1275664' 00:10:03.261 killing process with pid 1275664 00:10:03.261 14:25:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1275664 00:10:03.261 Received shutdown signal, test time was about 10.000000 seconds 00:10:03.261 00:10:03.261 Latency(us) 00:10:03.261 [2024-11-02T13:25:55.316Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:03.261 [2024-11-02T13:25:55.316Z] =================================================================================================================== 00:10:03.261 [2024-11-02T13:25:55.316Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:03.261 14:25:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1275664 00:10:03.519 14:25:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:03.519 14:25:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:10:03.519 14:25:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:03.519 14:25:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:10:03.519 14:25:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:03.519 14:25:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:10:03.519 14:25:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:03.519 14:25:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:03.519 rmmod nvme_tcp 00:10:03.519 rmmod nvme_fabrics 00:10:03.519 rmmod nvme_keyring 00:10:03.519 14:25:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:03.519 14:25:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:10:03.519 14:25:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:10:03.519 14:25:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@513 -- # '[' -n 1275545 ']' 00:10:03.519 14:25:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # killprocess 1275545 00:10:03.519 14:25:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1275545 ']' 00:10:03.519 14:25:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@954 -- # kill -0 1275545 00:10:03.519 14:25:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:10:03.519 14:25:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:03.519 14:25:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1275545 00:10:03.519 14:25:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:03.519 14:25:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:03.519 14:25:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1275545' 00:10:03.519 killing process with pid 1275545 00:10:03.519 14:25:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1275545 00:10:03.519 14:25:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1275545 00:10:03.777 14:25:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:03.777 14:25:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:03.777 14:25:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:03.777 14:25:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:10:03.777 14:25:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-save 00:10:03.777 14:25:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:03.777 14:25:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-restore 00:10:03.777 14:25:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:03.777 14:25:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:03.777 14:25:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:03.777 14:25:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:03.777 14:25:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:06.330 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:06.330 00:10:06.330 real 0m16.131s 00:10:06.330 user 0m21.956s 00:10:06.330 sys 0m3.331s 00:10:06.330 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:06.330 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:06.330 ************************************ 00:10:06.330 END TEST nvmf_queue_depth 00:10:06.330 ************************************ 00:10:06.330 14:25:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:06.330 14:25:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:06.330 14:25:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:06.330 14:25:57 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:10:06.330 ************************************ 00:10:06.330 START TEST nvmf_target_multipath 00:10:06.330 ************************************ 00:10:06.330 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:06.330 * Looking for test storage... 00:10:06.330 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:06.330 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:06.330 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:10:06.330 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:06.330 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:06.330 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:06.330 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:06.330 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:06.330 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:10:06.330 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:10:06.330 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:10:06.330 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:10:06.330 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:10:06.330 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:10:06.330 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:10:06.330 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:06.330 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:10:06.330 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:10:06.330 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:06.330 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:06.330 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:10:06.330 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:10:06.330 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:06.330 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:10:06.330 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:10:06.330 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:10:06.330 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:10:06.331 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:06.331 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:10:06.331 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:10:06.331 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:06.331 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:06.331 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:10:06.331 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:06.331 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:06.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.331 --rc genhtml_branch_coverage=1 00:10:06.331 --rc genhtml_function_coverage=1 00:10:06.331 --rc genhtml_legend=1 00:10:06.331 --rc geninfo_all_blocks=1 00:10:06.331 --rc geninfo_unexecuted_blocks=1 00:10:06.331 00:10:06.331 ' 00:10:06.331 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:06.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.331 --rc genhtml_branch_coverage=1 00:10:06.331 --rc genhtml_function_coverage=1 00:10:06.331 --rc genhtml_legend=1 00:10:06.331 --rc geninfo_all_blocks=1 00:10:06.331 --rc geninfo_unexecuted_blocks=1 00:10:06.331 00:10:06.331 ' 00:10:06.331 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:06.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.331 --rc genhtml_branch_coverage=1 00:10:06.331 --rc genhtml_function_coverage=1 00:10:06.331 --rc genhtml_legend=1 00:10:06.331 --rc geninfo_all_blocks=1 00:10:06.331 --rc geninfo_unexecuted_blocks=1 00:10:06.331 00:10:06.331 ' 00:10:06.331 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:06.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.331 --rc genhtml_branch_coverage=1 00:10:06.331 --rc genhtml_function_coverage=1 00:10:06.331 --rc genhtml_legend=1 00:10:06.331 --rc geninfo_all_blocks=1 00:10:06.331 --rc geninfo_unexecuted_blocks=1 00:10:06.331 00:10:06.331 ' 00:10:06.331 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:06.331 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:10:06.331 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:06.331 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:06.331 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:06.331 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:06.331 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:06.331 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:06.331 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:06.331 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:06.331 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:06.331 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:06.331 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:06.331 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:06.331 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:06.331 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:06.331 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:06.331 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:06.331 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:06.331 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:10:06.331 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:06.331 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:06.331 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:06.331 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.331 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.331 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.331 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:06.331 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.331 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:10:06.331 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:06.331 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:06.331 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:06.331 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:06.331 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:06.331 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:06.331 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:06.331 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:06.331 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:06.331 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:06.331 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:06.331 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:06.331 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:06.331 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:06.331 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:06.331 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:06.331 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:06.331 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:06.331 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:06.331 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:06.331 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:06.331 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:06.331 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:06.331 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:10:06.332 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:10:06.332 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:10:06.332 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:08.237 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:08.237 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:10:08.237 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:08.237 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:08.237 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:08.237 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:08.237 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:08.237 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:10:08.237 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:08.237 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:10:08.237 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:10:08.237 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:10:08.237 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:10:08.237 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:10:08.237 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:10:08.237 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:08.237 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:08.237 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:08.237 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:08.237 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:08.237 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:08.237 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:08.237 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:08.237 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:08.237 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:08.237 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:08.237 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:10:08.237 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:10:08.237 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:10:08.237 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:10:08.237 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:10:08.237 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:10:08.237 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:08.237 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:08.237 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:08.237 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:08.237 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@370 -- # 
[[ ice == unbound ]] 00:10:08.237 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:08.237 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:08.237 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:08.237 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:08.237 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:08.237 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:08.237 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:08.237 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:08.237 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:08.237 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:08.237 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:08.237 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:10:08.237 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:10:08.237 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:10:08.237 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:08.237 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:08.237 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:08.237 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:08.237 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:08.237 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:08.237 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:08.237 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:08.237 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:08.237 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:08.237 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:08.237 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:08.237 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:08.237 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:08.237 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:08.237 14:26:00 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:08.237 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:08.237 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:08.237 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:08.237 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:08.237 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:10:08.237 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # is_hw=yes 00:10:08.237 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:10:08.237 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:10:08.237 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:10:08.237 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:08.238 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:08.238 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:08.238 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:08.238 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:08.238 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:08.238 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:08.238 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:08.238 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:08.238 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:08.238 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:08.238 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:08.238 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:08.238 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:08.238 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:08.238 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:08.238 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:08.238 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:08.238 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 
-- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:08.238 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:08.238 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:08.238 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:08.238 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:08.238 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:08.238 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:10:08.238 00:10:08.238 --- 10.0.0.2 ping statistics --- 00:10:08.238 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.238 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:10:08.238 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:08.238 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:08.238 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:10:08.238 00:10:08.238 --- 10.0.0.1 ping statistics --- 00:10:08.238 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.238 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:10:08.238 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:08.238 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # return 0 00:10:08.238 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:08.238 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:08.238 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:08.238 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:08.238 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:08.238 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:08.238 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:08.238 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:10:08.238 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:10:08.238 only one NIC for nvmf test 00:10:08.238 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:10:08.238 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:08.238 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:08.238 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:08.238 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:08.238 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
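[annotation] For readability, the nvmftestinit sequence traced above reduces to the standalone sketch below. The interface names (cvl_0_0/cvl_0_1), the cvl_0_0_ns_spdk namespace, the 10.0.0.0/24 addresses, the iptables rule and the ping checks are taken verbatim from the trace; the assumption that the two E810 ports are physically looped to each other, and the flattened ordering, are a simplification rather than the exact common.sh implementation.

  # Sketch only: reproduces the target/initiator split traced above (assumes the
  # two E810 ports cvl_0_0 and cvl_0_1 are cabled back-to-back, as in this lab).
  ip netns add cvl_0_0_ns_spdk                      # target side lives in its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move one port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator address (default namespace)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port and tag the rule so teardown can find it again
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                # initiator -> target reachability
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator reachability
  modprobe nvme-tcp                                 # kernel initiator for later connect tests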
00:10:08.238 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:08.238 rmmod nvme_tcp 00:10:08.238 rmmod nvme_fabrics 00:10:08.238 rmmod nvme_keyring 00:10:08.497 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:08.497 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:08.497 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:08.497 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:10:08.497 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:08.497 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:08.497 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:08.497 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:08.497 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-save 00:10:08.497 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:08.497 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:10:08.497 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:08.497 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:08.497 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:08.497 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:08.497 14:26:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:10.405 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:10.405 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:10:10.405 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:10:10.405 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:10.405 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:10.405 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:10.405 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:10.405 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:10.405 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:10.405 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:10.405 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:10.405 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:10.405 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@513 -- # 
'[' -n '' ']' 00:10:10.405 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:10.405 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:10.405 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:10.405 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:10.405 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-save 00:10:10.405 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:10.405 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:10:10.405 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:10.405 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:10.405 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:10.405 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:10.405 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:10.405 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:10.405 00:10:10.405 real 0m4.583s 00:10:10.405 user 0m0.925s 00:10:10.405 sys 0m1.602s 00:10:10.405 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:10.405 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:10.405 ************************************ 00:10:10.405 END TEST nvmf_target_multipath 00:10:10.405 ************************************ 00:10:10.405 14:26:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:10.405 14:26:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:10.405 14:26:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:10.405 14:26:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:10.405 ************************************ 00:10:10.405 START TEST nvmf_zcopy 00:10:10.405 ************************************ 00:10:10.405 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:10.664 * Looking for test storage... 
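[annotation] The nvmftestfini teardown that follows the "only one NIC for nvmf test" early exit above condenses to the sketch below. The module removals, the SPDK_NVMF-tagged iptables cleanup and the address flush are copied from the trace; _remove_spdk_ns runs with xtrace disabled, so the ip netns del line is an assumption about its effect, not code visible in the log.

  # Sketch of the nvmftestfini teardown seen above (not the exact common.sh code).
  modprobe -v -r nvme-tcp          # also drops nvme_fabrics/nvme_keyring, per the rmmod lines
  modprobe -v -r nvme-fabrics
  # drop only the rules tagged SPDK_NVMF during setup, keep everything else intact
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  # _remove_spdk_ns is traced with xtrace disabled; deleting the namespace is the assumed effect
  ip netns del cvl_0_0_ns_spdk 2>/dev/null || true
  ip -4 addr flush cvl_0_1         # return the initiator port to an unconfigured state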
00:10:10.664 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:10.664 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:10.664 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:10:10.664 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:10.664 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:10.664 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:10.664 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:10.664 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:10.664 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:10:10.664 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:10:10.664 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:10:10.664 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:10:10.664 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:10:10.664 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:10:10.664 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:10:10.664 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:10.664 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:10:10.664 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:10:10.664 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:10.664 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:10.664 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:10:10.664 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:10:10.664 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:10.664 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:10:10.664 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:10:10.664 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:10:10.664 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:10:10.664 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:10.664 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:10:10.664 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:10:10.664 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:10.664 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:10.664 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:10:10.664 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:10.664 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:10.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.664 --rc genhtml_branch_coverage=1 00:10:10.664 --rc genhtml_function_coverage=1 00:10:10.664 --rc genhtml_legend=1 00:10:10.664 --rc geninfo_all_blocks=1 00:10:10.664 --rc geninfo_unexecuted_blocks=1 00:10:10.664 00:10:10.664 ' 00:10:10.664 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:10.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.665 --rc genhtml_branch_coverage=1 00:10:10.665 --rc genhtml_function_coverage=1 00:10:10.665 --rc genhtml_legend=1 00:10:10.665 --rc geninfo_all_blocks=1 00:10:10.665 --rc geninfo_unexecuted_blocks=1 00:10:10.665 00:10:10.665 ' 00:10:10.665 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:10.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.665 --rc genhtml_branch_coverage=1 00:10:10.665 --rc genhtml_function_coverage=1 00:10:10.665 --rc genhtml_legend=1 00:10:10.665 --rc geninfo_all_blocks=1 00:10:10.665 --rc geninfo_unexecuted_blocks=1 00:10:10.665 00:10:10.665 ' 00:10:10.665 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:10.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.665 --rc genhtml_branch_coverage=1 00:10:10.665 --rc genhtml_function_coverage=1 00:10:10.665 --rc genhtml_legend=1 00:10:10.665 --rc geninfo_all_blocks=1 00:10:10.665 --rc geninfo_unexecuted_blocks=1 00:10:10.665 00:10:10.665 ' 00:10:10.665 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:10.665 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:10.665 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:10:10.665 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:10.665 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:10.665 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:10.665 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:10.665 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:10.665 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:10.665 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:10.665 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:10.665 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:10.665 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:10.665 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:10.665 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:10.665 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:10.665 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:10.665 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:10.665 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:10.665 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:10:10.665 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:10.665 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:10.665 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:10.665 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.665 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.665 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.665 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:10.665 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.665 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:10:10.665 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:10.665 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:10.665 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:10.665 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:10.665 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:10.665 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:10.665 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:10.665 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:10.665 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:10.665 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:10.665 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:10.665 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:10.665 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:10:10.665 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:10.665 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:10.665 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:10.665 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:10.665 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:10.665 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:10.665 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:10:10.665 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:10:10.665 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:10:10.665 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:13.199 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:13.199 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:13.199 
14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:13.199 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:13.199 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # is_hw=yes 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:13.199 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:13.200 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:13.200 14:26:04 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:13.200 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:13.200 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:13.200 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:13.200 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:13.200 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:13.200 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:13.200 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:13.200 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:13.200 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:13.200 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:13.200 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:13.200 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:13.200 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:13.200 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:10:13.200 00:10:13.200 --- 10.0.0.2 ping statistics --- 00:10:13.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:13.200 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:10:13.200 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:13.200 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:13.200 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:10:13.200 00:10:13.200 --- 10.0.0.1 ping statistics --- 00:10:13.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:13.200 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:10:13.200 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:13.200 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # return 0 00:10:13.200 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:13.200 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:13.200 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:13.200 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:13.200 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:13.200 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:13.200 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:13.200 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:13.200 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:13.200 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:13.200 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:13.200 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # nvmfpid=1280813 00:10:13.200 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:13.200 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # waitforlisten 1280813 00:10:13.200 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 1280813 ']' 00:10:13.200 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:13.200 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:13.200 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:13.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:13.200 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:13.200 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:13.200 [2024-11-02 14:26:04.851185] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:13.200 [2024-11-02 14:26:04.851288] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:13.200 [2024-11-02 14:26:04.917011] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.200 [2024-11-02 14:26:05.008928] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:13.200 [2024-11-02 14:26:05.008987] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:13.200 [2024-11-02 14:26:05.009016] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:13.200 [2024-11-02 14:26:05.009034] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:13.200 [2024-11-02 14:26:05.009044] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:13.200 [2024-11-02 14:26:05.009073] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:13.200 14:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:13.200 14:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:10:13.200 14:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:13.200 14:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:13.200 14:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:13.200 14:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:13.200 14:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:13.200 14:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:13.200 14:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.200 14:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:13.200 [2024-11-02 14:26:05.155861] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:13.200 14:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.200 14:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:13.200 14:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.200 14:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:13.200 14:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.200 14:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:13.200 14:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.200 14:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:13.200 [2024-11-02 14:26:05.172092] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:10:13.200 14:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.200 14:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:13.200 14:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.200 14:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:13.200 14:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.200 14:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:13.200 14:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.200 14:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:13.200 malloc0 00:10:13.200 14:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.200 14:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:13.200 14:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.200 14:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:13.200 14:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.200 14:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:13.200 14:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:13.200 14:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:10:13.200 14:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:10:13.200 14:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:10:13.200 14:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:10:13.200 { 00:10:13.200 "params": { 00:10:13.200 "name": "Nvme$subsystem", 00:10:13.200 "trtype": "$TEST_TRANSPORT", 00:10:13.200 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:13.200 "adrfam": "ipv4", 00:10:13.201 "trsvcid": "$NVMF_PORT", 00:10:13.201 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:13.201 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:13.201 "hdgst": ${hdgst:-false}, 00:10:13.201 "ddgst": ${ddgst:-false} 00:10:13.201 }, 00:10:13.201 "method": "bdev_nvme_attach_controller" 00:10:13.201 } 00:10:13.201 EOF 00:10:13.201 )") 00:10:13.201 14:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:10:13.201 14:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 
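With the target running, target/zcopy.sh builds the storage stack purely over JSON-RPC: a TCP transport created with zero-copy enabled (-o -c 0 --zcopy), subsystem nqn.2016-06.io.spdk:cnode1 with any-host access, serial SPDK00000000000001 and at most 10 namespaces, a data listener and a discovery listener on 10.0.0.2:4420, a 32 MB malloc bdev with a 4 KiB block size, and that bdev attached as namespace 1. rpc_cmd is effectively a wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock, so the same sequence could be replayed by hand roughly as follows (a sketch only; the rpc.py path and socket are assumed from this run's defaults, and the flags are copied verbatim from the trace):

  rpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
  rpc nvmf_create_transport -t tcp -o -c 0 --zcopy                     # TCP transport, zero-copy enabled
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  rpc bdev_malloc_create 32 4096 -b malloc0                            # 32 MB RAM bdev, 4 KiB blocks
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1    # attach as NSID 1

The bdevperf client that follows is pointed at this subsystem through a JSON config generated by gen_nvmf_target_json and passed over an anonymous pipe (--json /dev/fd/62); the fully resolved document, with traddr 10.0.0.2, trsvcid 4420 and subnqn nqn.2016-06.io.spdk:cnode1, is printed just below.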
00:10:13.201 14:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:10:13.201 14:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:10:13.201 "params": { 00:10:13.201 "name": "Nvme1", 00:10:13.201 "trtype": "tcp", 00:10:13.201 "traddr": "10.0.0.2", 00:10:13.201 "adrfam": "ipv4", 00:10:13.201 "trsvcid": "4420", 00:10:13.201 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:13.201 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:13.201 "hdgst": false, 00:10:13.201 "ddgst": false 00:10:13.201 }, 00:10:13.201 "method": "bdev_nvme_attach_controller" 00:10:13.201 }' 00:10:13.459 [2024-11-02 14:26:05.271936] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:10:13.459 [2024-11-02 14:26:05.272008] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1280920 ] 00:10:13.459 [2024-11-02 14:26:05.335913] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.459 [2024-11-02 14:26:05.428915] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.718 Running I/O for 10 seconds... 00:10:15.589 5364.00 IOPS, 41.91 MiB/s [2024-11-02T13:26:09.020Z] 5427.00 IOPS, 42.40 MiB/s [2024-11-02T13:26:09.955Z] 5445.67 IOPS, 42.54 MiB/s [2024-11-02T13:26:10.890Z] 5459.00 IOPS, 42.65 MiB/s [2024-11-02T13:26:11.826Z] 5458.80 IOPS, 42.65 MiB/s [2024-11-02T13:26:12.761Z] 5474.83 IOPS, 42.77 MiB/s [2024-11-02T13:26:13.696Z] 5475.00 IOPS, 42.77 MiB/s [2024-11-02T13:26:15.072Z] 5481.38 IOPS, 42.82 MiB/s [2024-11-02T13:26:15.654Z] 5483.89 IOPS, 42.84 MiB/s [2024-11-02T13:26:15.913Z] 5483.50 IOPS, 42.84 MiB/s 00:10:23.858 Latency(us) 00:10:23.858 [2024-11-02T13:26:15.913Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:23.858 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:23.858 Verification LBA range: start 0x0 length 0x1000 00:10:23.858 Nvme1n1 : 10.02 5485.80 42.86 0.00 0.00 23269.74 3155.44 32428.18 00:10:23.858 [2024-11-02T13:26:15.913Z] =================================================================================================================== 00:10:23.858 [2024-11-02T13:26:15.913Z] Total : 5485.80 42.86 0.00 0.00 23269.74 3155.44 32428.18 00:10:23.858 14:26:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1282128 00:10:23.858 14:26:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:23.858 14:26:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:23.858 14:26:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:23.858 14:26:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:23.858 14:26:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:10:23.858 14:26:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:10:23.858 14:26:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:10:23.858 14:26:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:10:23.858 { 00:10:23.858 "params": { 00:10:23.858 "name": 
"Nvme$subsystem", 00:10:23.858 "trtype": "$TEST_TRANSPORT", 00:10:23.858 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:23.858 "adrfam": "ipv4", 00:10:23.858 "trsvcid": "$NVMF_PORT", 00:10:23.858 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:23.858 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:23.858 "hdgst": ${hdgst:-false}, 00:10:23.858 "ddgst": ${ddgst:-false} 00:10:23.858 }, 00:10:23.858 "method": "bdev_nvme_attach_controller" 00:10:23.858 } 00:10:23.858 EOF 00:10:23.858 )") 00:10:23.858 14:26:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:10:23.858 [2024-11-02 14:26:15.906856] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.858 [2024-11-02 14:26:15.906904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.858 14:26:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 00:10:23.858 14:26:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:10:23.858 14:26:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:10:23.858 "params": { 00:10:23.858 "name": "Nvme1", 00:10:23.858 "trtype": "tcp", 00:10:23.858 "traddr": "10.0.0.2", 00:10:23.858 "adrfam": "ipv4", 00:10:23.858 "trsvcid": "4420", 00:10:23.858 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:23.858 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:23.858 "hdgst": false, 00:10:23.858 "ddgst": false 00:10:23.858 }, 00:10:23.858 "method": "bdev_nvme_attach_controller" 00:10:23.858 }' 00:10:24.117 [2024-11-02 14:26:15.914832] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.117 [2024-11-02 14:26:15.914859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.117 [2024-11-02 14:26:15.922840] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.117 [2024-11-02 14:26:15.922865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.117 [2024-11-02 14:26:15.930857] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.117 [2024-11-02 14:26:15.930879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.117 [2024-11-02 14:26:15.938875] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.117 [2024-11-02 14:26:15.938898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.117 [2024-11-02 14:26:15.946898] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.117 [2024-11-02 14:26:15.946920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.117 [2024-11-02 14:26:15.948749] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:24.117 [2024-11-02 14:26:15.948821] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1282128 ] 00:10:24.117 [2024-11-02 14:26:15.954915] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.117 [2024-11-02 14:26:15.954935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.117 [2024-11-02 14:26:15.962939] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.117 [2024-11-02 14:26:15.962959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.117 [2024-11-02 14:26:15.970956] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.117 [2024-11-02 14:26:15.970976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.117 [2024-11-02 14:26:15.978978] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.117 [2024-11-02 14:26:15.978999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.117 [2024-11-02 14:26:15.987019] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.117 [2024-11-02 14:26:15.987045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.117 [2024-11-02 14:26:15.995042] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.117 [2024-11-02 14:26:15.995066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.117 [2024-11-02 14:26:16.003065] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.117 [2024-11-02 14:26:16.003098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.117 [2024-11-02 14:26:16.011087] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.117 [2024-11-02 14:26:16.011111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.117 [2024-11-02 14:26:16.013874] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:24.117 [2024-11-02 14:26:16.019127] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.117 [2024-11-02 14:26:16.019157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.117 [2024-11-02 14:26:16.027168] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.117 [2024-11-02 14:26:16.027209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.117 [2024-11-02 14:26:16.035156] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.117 [2024-11-02 14:26:16.035180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.117 [2024-11-02 14:26:16.043176] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.117 [2024-11-02 14:26:16.043201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.117 [2024-11-02 14:26:16.051199] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.117 [2024-11-02 14:26:16.051224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:10:24.117 [2024-11-02 14:26:16.059222] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.117 [2024-11-02 14:26:16.059247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.117 [2024-11-02 14:26:16.067273] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.117 [2024-11-02 14:26:16.067334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.117 [2024-11-02 14:26:16.075329] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.117 [2024-11-02 14:26:16.075360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.117 [2024-11-02 14:26:16.083292] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.117 [2024-11-02 14:26:16.083328] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.117 [2024-11-02 14:26:16.091329] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.117 [2024-11-02 14:26:16.091350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.117 [2024-11-02 14:26:16.099348] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.117 [2024-11-02 14:26:16.099369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.117 [2024-11-02 14:26:16.107362] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.117 [2024-11-02 14:26:16.107383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.117 [2024-11-02 14:26:16.109947] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.117 [2024-11-02 14:26:16.115381] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.117 [2024-11-02 14:26:16.115401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.117 [2024-11-02 14:26:16.123402] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.117 [2024-11-02 14:26:16.123424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.117 [2024-11-02 14:26:16.131453] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.117 [2024-11-02 14:26:16.131486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.117 [2024-11-02 14:26:16.139472] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.117 [2024-11-02 14:26:16.139508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.117 [2024-11-02 14:26:16.147496] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.117 [2024-11-02 14:26:16.147558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.117 [2024-11-02 14:26:16.155526] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.117 [2024-11-02 14:26:16.155580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.117 [2024-11-02 14:26:16.163571] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.117 [2024-11-02 14:26:16.163610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.376 [2024-11-02 
14:26:16.171601] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.376 [2024-11-02 14:26:16.171640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.376 [2024-11-02 14:26:16.179582] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.376 [2024-11-02 14:26:16.179610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.376 [2024-11-02 14:26:16.187649] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.376 [2024-11-02 14:26:16.187695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.376 [2024-11-02 14:26:16.195668] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.376 [2024-11-02 14:26:16.195706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.376 [2024-11-02 14:26:16.203694] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.376 [2024-11-02 14:26:16.203743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.376 [2024-11-02 14:26:16.211668] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.376 [2024-11-02 14:26:16.211694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.376 [2024-11-02 14:26:16.219679] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.376 [2024-11-02 14:26:16.219703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.376 [2024-11-02 14:26:16.227719] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.377 [2024-11-02 14:26:16.227746] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.377 [2024-11-02 14:26:16.235737] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.377 [2024-11-02 14:26:16.235770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.377 [2024-11-02 14:26:16.243767] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.377 [2024-11-02 14:26:16.243796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.377 [2024-11-02 14:26:16.251780] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.377 [2024-11-02 14:26:16.251808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.377 [2024-11-02 14:26:16.259801] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.377 [2024-11-02 14:26:16.259830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.377 [2024-11-02 14:26:16.267822] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.377 [2024-11-02 14:26:16.267849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.377 [2024-11-02 14:26:16.275845] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.377 [2024-11-02 14:26:16.275872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.377 [2024-11-02 14:26:16.283868] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.377 [2024-11-02 14:26:16.283894] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.377 [2024-11-02 14:26:16.291893] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.377 [2024-11-02 14:26:16.291918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.377 [2024-11-02 14:26:16.299922] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.377 [2024-11-02 14:26:16.299962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.377 [2024-11-02 14:26:16.307948] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.377 [2024-11-02 14:26:16.307977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.377 [2024-11-02 14:26:16.315973] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.377 [2024-11-02 14:26:16.316002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.377 [2024-11-02 14:26:16.323990] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.377 [2024-11-02 14:26:16.324016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.377 [2024-11-02 14:26:16.332020] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.377 [2024-11-02 14:26:16.332051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.377 [2024-11-02 14:26:16.340034] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.377 [2024-11-02 14:26:16.340060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.377 Running I/O for 5 seconds... 
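At this point two streams interleave in the console: the 5-second bdevperf pass (queue depth 128, 8 KiB randrw with a 50/50 read/write mix, perfpid 1282128) and a long series of paired target-side errors, one pair per attempt, subsystem.c's "Requested NSID 1 already in use" followed by nvmf_rpc.c's "Unable to add namespace". These look like the intended outcome of the portion of zcopy.sh that keeps re-issuing nvmf_subsystem_add_ns for NSID 1 while malloc0 is still attached and zero-copy I/O is in flight; the nvmf_rpc_ns_paused frame in the message suggests each attempt briefly pauses the subsystem before the duplicate NSID is rejected and the subsystem resumes. A single pair can be provoked by hand with the rpc helper from the earlier sketch:

  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # rejected: NSID 1 already in use

For scale, both bdevperf passes use 8 KiB I/O (-o 8192), so the MiB/s columns are simply IOPS x 8192 / 2^20: the 10-second verify pass above averaged 5485.80 IOPS ≈ 42.86 MiB/s, and the 10743.00 IOPS sample reported further down in this randrw pass works out to ≈ 83.93 MiB/s. A one-liner to check the conversion:

  awk 'BEGIN { printf "%.2f MiB/s  %.2f MiB/s\n", 5485.80*8192/1048576, 10743.00*8192/1048576 }'   # prints 42.86  83.93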
00:10:24.377 [2024-11-02 14:26:16.348057] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.377 [2024-11-02 14:26:16.348083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.377 [2024-11-02 14:26:16.362199] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.377 [2024-11-02 14:26:16.362230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.377 [2024-11-02 14:26:16.373958] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.377 [2024-11-02 14:26:16.373990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.377 [2024-11-02 14:26:16.387384] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.377 [2024-11-02 14:26:16.387413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.377 [2024-11-02 14:26:16.397680] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.377 [2024-11-02 14:26:16.397712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.377 [2024-11-02 14:26:16.410121] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.377 [2024-11-02 14:26:16.410153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.377 [2024-11-02 14:26:16.421874] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.377 [2024-11-02 14:26:16.421905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.636 [2024-11-02 14:26:16.433153] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.636 [2024-11-02 14:26:16.433184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.636 [2024-11-02 14:26:16.444512] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.636 [2024-11-02 14:26:16.444540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.636 [2024-11-02 14:26:16.456101] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.636 [2024-11-02 14:26:16.456131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.636 [2024-11-02 14:26:16.467961] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.636 [2024-11-02 14:26:16.467993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.636 [2024-11-02 14:26:16.479748] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.636 [2024-11-02 14:26:16.479779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.636 [2024-11-02 14:26:16.491337] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.636 [2024-11-02 14:26:16.491366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.636 [2024-11-02 14:26:16.503508] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.636 [2024-11-02 14:26:16.503536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.636 [2024-11-02 14:26:16.515373] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.636 
[2024-11-02 14:26:16.515401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.636 [2024-11-02 14:26:16.527306] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.636 [2024-11-02 14:26:16.527335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.636 [2024-11-02 14:26:16.539190] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.636 [2024-11-02 14:26:16.539221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.636 [2024-11-02 14:26:16.551203] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.636 [2024-11-02 14:26:16.551234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.636 [2024-11-02 14:26:16.563413] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.636 [2024-11-02 14:26:16.563440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.636 [2024-11-02 14:26:16.575005] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.636 [2024-11-02 14:26:16.575036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.636 [2024-11-02 14:26:16.586985] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.636 [2024-11-02 14:26:16.587016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.636 [2024-11-02 14:26:16.598786] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.636 [2024-11-02 14:26:16.598817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.636 [2024-11-02 14:26:16.610470] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.636 [2024-11-02 14:26:16.610497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.636 [2024-11-02 14:26:16.622329] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.636 [2024-11-02 14:26:16.622357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.636 [2024-11-02 14:26:16.634074] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.636 [2024-11-02 14:26:16.634105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.636 [2024-11-02 14:26:16.645772] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.636 [2024-11-02 14:26:16.645803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.636 [2024-11-02 14:26:16.657244] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.636 [2024-11-02 14:26:16.657283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.636 [2024-11-02 14:26:16.669624] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.636 [2024-11-02 14:26:16.669655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.636 [2024-11-02 14:26:16.681157] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.636 [2024-11-02 14:26:16.681187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.894 [2024-11-02 14:26:16.692943] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.894 [2024-11-02 14:26:16.692974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.895 [2024-11-02 14:26:16.704763] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.895 [2024-11-02 14:26:16.704795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.895 [2024-11-02 14:26:16.716758] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.895 [2024-11-02 14:26:16.716789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.895 [2024-11-02 14:26:16.728756] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.895 [2024-11-02 14:26:16.728787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.895 [2024-11-02 14:26:16.740070] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.895 [2024-11-02 14:26:16.740100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.895 [2024-11-02 14:26:16.751910] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.895 [2024-11-02 14:26:16.751940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.895 [2024-11-02 14:26:16.763455] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.895 [2024-11-02 14:26:16.763484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.895 [2024-11-02 14:26:16.775171] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.895 [2024-11-02 14:26:16.775201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.895 [2024-11-02 14:26:16.786951] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.895 [2024-11-02 14:26:16.786981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.895 [2024-11-02 14:26:16.798349] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.895 [2024-11-02 14:26:16.798376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.895 [2024-11-02 14:26:16.809991] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.895 [2024-11-02 14:26:16.810022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.895 [2024-11-02 14:26:16.822068] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.895 [2024-11-02 14:26:16.822098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.895 [2024-11-02 14:26:16.833717] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.895 [2024-11-02 14:26:16.833748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.895 [2024-11-02 14:26:16.845431] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.895 [2024-11-02 14:26:16.845459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.895 [2024-11-02 14:26:16.858606] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.895 [2024-11-02 14:26:16.858638] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.895 [2024-11-02 14:26:16.868701] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.895 [2024-11-02 14:26:16.868744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.895 [2024-11-02 14:26:16.881135] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.895 [2024-11-02 14:26:16.881167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.895 [2024-11-02 14:26:16.893009] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.895 [2024-11-02 14:26:16.893041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.895 [2024-11-02 14:26:16.904844] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.895 [2024-11-02 14:26:16.904874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.895 [2024-11-02 14:26:16.916592] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.895 [2024-11-02 14:26:16.916623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.895 [2024-11-02 14:26:16.928396] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.895 [2024-11-02 14:26:16.928424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.895 [2024-11-02 14:26:16.940180] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.895 [2024-11-02 14:26:16.940210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.153 [2024-11-02 14:26:16.953635] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.153 [2024-11-02 14:26:16.953666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.153 [2024-11-02 14:26:16.964711] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.153 [2024-11-02 14:26:16.964742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.153 [2024-11-02 14:26:16.977142] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.153 [2024-11-02 14:26:16.977173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.153 [2024-11-02 14:26:16.988862] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.153 [2024-11-02 14:26:16.988892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.153 [2024-11-02 14:26:17.000337] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.153 [2024-11-02 14:26:17.000366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.153 [2024-11-02 14:26:17.012283] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.153 [2024-11-02 14:26:17.012327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.153 [2024-11-02 14:26:17.023961] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.153 [2024-11-02 14:26:17.023991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.153 [2024-11-02 14:26:17.035578] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.153 [2024-11-02 14:26:17.035609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.153 [2024-11-02 14:26:17.047669] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.153 [2024-11-02 14:26:17.047701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.153 [2024-11-02 14:26:17.059829] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.153 [2024-11-02 14:26:17.059860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.153 [2024-11-02 14:26:17.072123] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.153 [2024-11-02 14:26:17.072155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.153 [2024-11-02 14:26:17.085506] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.153 [2024-11-02 14:26:17.085535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.153 [2024-11-02 14:26:17.096648] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.153 [2024-11-02 14:26:17.096679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.153 [2024-11-02 14:26:17.108817] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.153 [2024-11-02 14:26:17.108849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.153 [2024-11-02 14:26:17.120527] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.153 [2024-11-02 14:26:17.120554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.153 [2024-11-02 14:26:17.132143] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.153 [2024-11-02 14:26:17.132173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.153 [2024-11-02 14:26:17.143583] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.153 [2024-11-02 14:26:17.143614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.153 [2024-11-02 14:26:17.154990] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.153 [2024-11-02 14:26:17.155020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.153 [2024-11-02 14:26:17.166270] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.153 [2024-11-02 14:26:17.166321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.153 [2024-11-02 14:26:17.178005] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.153 [2024-11-02 14:26:17.178035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.153 [2024-11-02 14:26:17.189360] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.153 [2024-11-02 14:26:17.189388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.153 [2024-11-02 14:26:17.201345] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.153 [2024-11-02 14:26:17.201372] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.411 [2024-11-02 14:26:17.212561] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.411 [2024-11-02 14:26:17.212588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.412 [2024-11-02 14:26:17.226492] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.412 [2024-11-02 14:26:17.226519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.412 [2024-11-02 14:26:17.237769] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.412 [2024-11-02 14:26:17.237800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.412 [2024-11-02 14:26:17.249496] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.412 [2024-11-02 14:26:17.249524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.412 [2024-11-02 14:26:17.261447] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.412 [2024-11-02 14:26:17.261475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.412 [2024-11-02 14:26:17.273465] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.412 [2024-11-02 14:26:17.273493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.412 [2024-11-02 14:26:17.285773] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.412 [2024-11-02 14:26:17.285805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.412 [2024-11-02 14:26:17.299712] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.412 [2024-11-02 14:26:17.299744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.412 [2024-11-02 14:26:17.309997] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.412 [2024-11-02 14:26:17.310028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.412 [2024-11-02 14:26:17.322639] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.412 [2024-11-02 14:26:17.322670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.412 [2024-11-02 14:26:17.334591] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.412 [2024-11-02 14:26:17.334622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.412 [2024-11-02 14:26:17.346446] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.412 [2024-11-02 14:26:17.346475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.412 10743.00 IOPS, 83.93 MiB/s [2024-11-02T13:26:17.467Z] [2024-11-02 14:26:17.358456] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.412 [2024-11-02 14:26:17.358484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.412 [2024-11-02 14:26:17.370304] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.412 [2024-11-02 14:26:17.370332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.412 [2024-11-02 
14:26:17.382340] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.412 [2024-11-02 14:26:17.382368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.412 [2024-11-02 14:26:17.394383] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.412 [2024-11-02 14:26:17.394418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.412 [2024-11-02 14:26:17.406536] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.412 [2024-11-02 14:26:17.406579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.412 [2024-11-02 14:26:17.418614] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.412 [2024-11-02 14:26:17.418646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.412 [2024-11-02 14:26:17.432430] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.412 [2024-11-02 14:26:17.432458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.412 [2024-11-02 14:26:17.443116] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.412 [2024-11-02 14:26:17.443146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.412 [2024-11-02 14:26:17.454763] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.412 [2024-11-02 14:26:17.454793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.412 [2024-11-02 14:26:17.466276] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.412 [2024-11-02 14:26:17.466320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.670 [2024-11-02 14:26:17.477921] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.670 [2024-11-02 14:26:17.477951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.670 [2024-11-02 14:26:17.489723] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.670 [2024-11-02 14:26:17.489754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.670 [2024-11-02 14:26:17.501660] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.670 [2024-11-02 14:26:17.501690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.670 [2024-11-02 14:26:17.513053] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.670 [2024-11-02 14:26:17.513084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.670 [2024-11-02 14:26:17.524870] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.670 [2024-11-02 14:26:17.524900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.670 [2024-11-02 14:26:17.536460] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.670 [2024-11-02 14:26:17.536489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.670 [2024-11-02 14:26:17.548161] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.670 [2024-11-02 14:26:17.548192] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:25.671 [2024-11-02 14:26:17.560209] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:25.671 [2024-11-02 14:26:17.560240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair recurs for every further add-namespace attempt from 14:26:17.572 through 14:26:19.576 (elapsed 00:10:25.671 - 00:10:27.743), differing only in timestamps; the periodic throughput samples printed in that window follow ...]
00:10:26.447 10869.50 IOPS, 84.92 MiB/s [2024-11-02T13:26:18.502Z]
00:10:27.484 10920.00 IOPS, 85.31 MiB/s [2024-11-02T13:26:19.539Z]
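The pair above is the add-namespace failure path this test exercises deliberately: spdk_nvmf_subsystem_add_ns_ext() rejects an NSID the subsystem already owns, and the RPC handler in nvmf_rpc.c then reports that the namespace could not be added. As a rough illustration only (not the script this job runs), the same failure can be reproduced against a running target by issuing the nvmf_subsystem_add_ns JSON-RPC twice with the same NSID; the socket path, NQN, and bdev name below are placeholders, and the parameter layout should be double-checked against SPDK's scripts/rpc.py before relying on it.

#!/usr/bin/env python3
# Rough sketch (not part of this CI job): send the same nvmf_subsystem_add_ns
# request twice over SPDK's JSON-RPC Unix socket; the second call should fail
# the same way as the log above ("Requested NSID 1 already in use",
# "Unable to add namespace"). Socket path, NQN and bdev name are placeholders.
import json
import socket

SOCK_PATH = "/var/tmp/spdk.sock"        # default SPDK RPC socket (assumed)
NQN = "nqn.2016-06.io.spdk:cnode1"      # placeholder subsystem NQN
BDEV = "Malloc0"                        # placeholder bdev backing the namespace

def rpc_call(method, params, req_id):
    # One JSON-RPC 2.0 request/response exchange over the Unix socket.
    req = {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(SOCK_PATH)
        sock.sendall(json.dumps(req).encode())
        buf = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break
            buf += chunk
            try:
                return json.loads(buf.decode())   # full response received
            except json.JSONDecodeError:
                continue                          # keep reading
    return None

# Assumed parameter shape for nvmf_subsystem_add_ns: the namespace object
# carries bdev_name and the NSID being claimed.
params = {"nqn": NQN, "namespace": {"bdev_name": BDEV, "nsid": 1}}
print("first add :", rpc_call("nvmf_subsystem_add_ns", params, 1))
print("second add:", rpc_call("nvmf_subsystem_add_ns", params, 2))  # duplicate NSID -> error response

SPDK's scripts/rpc.py exposes the same method as a CLI subcommand, which is the more usual way shell-based test scripts drive it; the raw-socket form above just makes the request/response pair explicit.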
00:10:27.743 [2024-11-02 14:26:19.588191] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:27.743 [2024-11-02 14:26:19.588222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same error pair recurs for every attempt from 14:26:19.599 through 14:26:21.082 (elapsed 00:10:27.743 - 00:10:29.297), differing only in timestamps; the throughput sample printed in that window follows ...]
00:10:28.520 10890.75 IOPS, 85.08 MiB/s [2024-11-02T13:26:20.575Z]
00:10:29.297 [2024-11-02 14:26:21.094532] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:29.297 [2024-11-02 14:26:21.094577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:29.297 [2024-11-02 14:26:21.106351] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.297 [2024-11-02 14:26:21.106378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.297 [2024-11-02 14:26:21.118143] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.297 [2024-11-02 14:26:21.118174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.297 [2024-11-02 14:26:21.130113] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.297 [2024-11-02 14:26:21.130143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.297 [2024-11-02 14:26:21.141985] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.297 [2024-11-02 14:26:21.142016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.298 [2024-11-02 14:26:21.153387] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.298 [2024-11-02 14:26:21.153415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.298 [2024-11-02 14:26:21.165032] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.298 [2024-11-02 14:26:21.165062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.298 [2024-11-02 14:26:21.176766] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.298 [2024-11-02 14:26:21.176796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.298 [2024-11-02 14:26:21.188279] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.298 [2024-11-02 14:26:21.188323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.298 [2024-11-02 14:26:21.200163] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.298 [2024-11-02 14:26:21.200193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.298 [2024-11-02 14:26:21.212035] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.298 [2024-11-02 14:26:21.212065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.298 [2024-11-02 14:26:21.223960] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.298 [2024-11-02 14:26:21.223990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.298 [2024-11-02 14:26:21.235805] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.298 [2024-11-02 14:26:21.235836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.298 [2024-11-02 14:26:21.247775] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.298 [2024-11-02 14:26:21.247806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.298 [2024-11-02 14:26:21.259806] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.298 [2024-11-02 14:26:21.259837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.298 [2024-11-02 14:26:21.272091] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.298 [2024-11-02 14:26:21.272121] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.298 [2024-11-02 14:26:21.283854] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.298 [2024-11-02 14:26:21.283884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.298 [2024-11-02 14:26:21.295945] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.298 [2024-11-02 14:26:21.295975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.298 [2024-11-02 14:26:21.307199] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.298 [2024-11-02 14:26:21.307229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.298 [2024-11-02 14:26:21.318683] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.298 [2024-11-02 14:26:21.318714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.298 [2024-11-02 14:26:21.330131] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.298 [2024-11-02 14:26:21.330161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.298 [2024-11-02 14:26:21.342059] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.298 [2024-11-02 14:26:21.342090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.557 [2024-11-02 14:26:21.355556] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.557 [2024-11-02 14:26:21.355585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.557 10926.00 IOPS, 85.36 MiB/s [2024-11-02T13:26:21.612Z] [2024-11-02 14:26:21.365405] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.557 [2024-11-02 14:26:21.365432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.557 00:10:29.557 Latency(us) 00:10:29.557 [2024-11-02T13:26:21.612Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:29.557 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:10:29.557 Nvme1n1 : 5.01 10928.54 85.38 0.00 0.00 11697.09 5097.24 22330.79 00:10:29.557 [2024-11-02T13:26:21.612Z] =================================================================================================================== 00:10:29.557 [2024-11-02T13:26:21.612Z] Total : 10928.54 85.38 0.00 0.00 11697.09 5097.24 22330.79 00:10:29.557 [2024-11-02 14:26:21.371121] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.557 [2024-11-02 14:26:21.371150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.557 [2024-11-02 14:26:21.379142] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.557 [2024-11-02 14:26:21.379171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.557 [2024-11-02 14:26:21.387184] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.557 [2024-11-02 14:26:21.387222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.557 [2024-11-02 14:26:21.395244] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.557 [2024-11-02 
14:26:21.395322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.557 [2024-11-02 14:26:21.403251] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.557 [2024-11-02 14:26:21.403306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.557 [2024-11-02 14:26:21.411286] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.557 [2024-11-02 14:26:21.411334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.557 [2024-11-02 14:26:21.423331] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.557 [2024-11-02 14:26:21.423392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.557 [2024-11-02 14:26:21.431341] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.557 [2024-11-02 14:26:21.431391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.557 [2024-11-02 14:26:21.439362] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.557 [2024-11-02 14:26:21.439413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.557 [2024-11-02 14:26:21.447377] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.557 [2024-11-02 14:26:21.447426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.557 [2024-11-02 14:26:21.455398] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.557 [2024-11-02 14:26:21.455446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.557 [2024-11-02 14:26:21.463464] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.557 [2024-11-02 14:26:21.463517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.557 [2024-11-02 14:26:21.471449] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.557 [2024-11-02 14:26:21.471499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.557 [2024-11-02 14:26:21.479465] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.557 [2024-11-02 14:26:21.479514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.557 [2024-11-02 14:26:21.487483] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.557 [2024-11-02 14:26:21.487545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.557 [2024-11-02 14:26:21.495505] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.557 [2024-11-02 14:26:21.495552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.557 [2024-11-02 14:26:21.503530] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.557 [2024-11-02 14:26:21.503579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.557 [2024-11-02 14:26:21.511518] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.557 [2024-11-02 14:26:21.511571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.557 [2024-11-02 14:26:21.519515] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.557 [2024-11-02 14:26:21.519551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.557 [2024-11-02 14:26:21.527585] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.557 [2024-11-02 14:26:21.527628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.557 [2024-11-02 14:26:21.535622] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.557 [2024-11-02 14:26:21.535669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.557 [2024-11-02 14:26:21.543654] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.557 [2024-11-02 14:26:21.543701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.557 [2024-11-02 14:26:21.551642] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.557 [2024-11-02 14:26:21.551669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.557 [2024-11-02 14:26:21.559647] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.557 [2024-11-02 14:26:21.559675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.557 [2024-11-02 14:26:21.567728] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.557 [2024-11-02 14:26:21.567779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.557 [2024-11-02 14:26:21.575739] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.557 [2024-11-02 14:26:21.575783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.557 [2024-11-02 14:26:21.583716] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.557 [2024-11-02 14:26:21.583741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.557 [2024-11-02 14:26:21.591734] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.557 [2024-11-02 14:26:21.591758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.557 [2024-11-02 14:26:21.599756] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.557 [2024-11-02 14:26:21.599781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.557 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1282128) - No such process 00:10:29.558 14:26:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1282128 00:10:29.558 14:26:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:29.558 14:26:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.558 14:26:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:29.815 14:26:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.816 14:26:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:29.816 14:26:21 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.816 14:26:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:29.816 delay0 00:10:29.816 14:26:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.816 14:26:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:29.816 14:26:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.816 14:26:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:29.816 14:26:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.816 14:26:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:29.816 [2024-11-02 14:26:21.761405] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:36.372 Initializing NVMe Controllers 00:10:36.372 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:36.372 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:36.372 Initialization complete. Launching workers. 00:10:36.372 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 220 00:10:36.372 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 507, failed to submit 33 00:10:36.372 success 335, unsuccessful 172, failed 0 00:10:36.372 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:36.372 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:36.372 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:36.372 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:36.372 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:36.372 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:36.372 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:36.372 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:36.372 rmmod nvme_tcp 00:10:36.372 rmmod nvme_fabrics 00:10:36.372 rmmod nvme_keyring 00:10:36.372 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:36.372 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:10:36.372 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:36.372 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@513 -- # '[' -n 1280813 ']' 00:10:36.372 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # killprocess 1280813 00:10:36.372 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 1280813 ']' 00:10:36.372 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 1280813 00:10:36.373 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@955 -- # uname 00:10:36.373 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:36.373 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1280813 00:10:36.373 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:36.373 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:36.373 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1280813' 00:10:36.373 killing process with pid 1280813 00:10:36.373 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 1280813 00:10:36.373 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 1280813 00:10:36.373 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:36.373 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:36.373 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:36.373 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:10:36.373 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-save 00:10:36.373 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:36.373 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-restore 00:10:36.373 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:36.373 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:36.373 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:36.373 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:36.373 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:38.919 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:38.919 00:10:38.919 real 0m27.965s 00:10:38.919 user 0m41.067s 00:10:38.919 sys 0m8.394s 00:10:38.919 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:38.919 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:38.919 ************************************ 00:10:38.919 END TEST nvmf_zcopy 00:10:38.919 ************************************ 00:10:38.919 14:26:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:38.919 14:26:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:38.919 14:26:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:38.919 14:26:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:38.919 ************************************ 00:10:38.919 START TEST nvmf_nmic 00:10:38.919 ************************************ 00:10:38.919 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:38.919 * Looking for test storage... 00:10:38.919 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:38.919 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:38.919 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version 00:10:38.919 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:38.919 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:38.919 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:38.919 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:38.919 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:38.919 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:38.919 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:38.919 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:38.919 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:38.919 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:38.919 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:38.919 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:38.919 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:38.919 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:38.919 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:38.919 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:38.919 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:38.919 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:38.919 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:38.919 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:38.919 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:38.919 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:38.919 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:38.920 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:38.920 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:38.920 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:38.920 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:38.920 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:38.920 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:38.920 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:38.920 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:38.920 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:38.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.920 --rc genhtml_branch_coverage=1 00:10:38.920 --rc genhtml_function_coverage=1 00:10:38.920 --rc genhtml_legend=1 00:10:38.920 --rc geninfo_all_blocks=1 00:10:38.920 --rc geninfo_unexecuted_blocks=1 00:10:38.920 00:10:38.920 ' 00:10:38.920 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:38.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.920 --rc genhtml_branch_coverage=1 00:10:38.920 --rc genhtml_function_coverage=1 00:10:38.920 --rc genhtml_legend=1 00:10:38.920 --rc geninfo_all_blocks=1 00:10:38.920 --rc geninfo_unexecuted_blocks=1 00:10:38.920 00:10:38.920 ' 00:10:38.920 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:38.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.920 --rc genhtml_branch_coverage=1 00:10:38.920 --rc genhtml_function_coverage=1 00:10:38.920 --rc genhtml_legend=1 00:10:38.920 --rc geninfo_all_blocks=1 00:10:38.920 --rc geninfo_unexecuted_blocks=1 00:10:38.920 00:10:38.920 ' 00:10:38.920 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:38.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.920 --rc genhtml_branch_coverage=1 00:10:38.920 --rc genhtml_function_coverage=1 00:10:38.920 --rc genhtml_legend=1 00:10:38.920 --rc geninfo_all_blocks=1 00:10:38.920 --rc geninfo_unexecuted_blocks=1 00:10:38.920 00:10:38.920 ' 00:10:38.920 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:38.920 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:38.920 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
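The cmp_versions/lt calls traced above are scripts/common.sh comparing the installed lcov version (1.15 here) against 2 to decide which --rc lcov_*_coverage flags to export. A minimal stand-alone sketch of that style of dotted-version comparison, in the same spirit but not the SPDK helper itself (the name version_lt is invented for illustration):

    # Compare two dotted versions field by field; return 0 (true) if $1 < $2.
    version_lt() {
        local -a a b
        IFS=. read -r -a a <<< "$1"
        IFS=. read -r -a b <<< "$2"
        local i x y
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            x=${a[i]:-0} y=${b[i]:-0}      # missing fields count as 0
            ((x < y)) && return 0          # an earlier field already decides it
            ((x > y)) && return 1
        done
        return 1                           # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "old lcov: keep --rc lcov_branch_coverage=1"   # fires, as in the trace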
00:10:38.920 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:38.920 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:38.920 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:38.920 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:38.920 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:38.920 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:38.920 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:38.920 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:38.920 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:38.920 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:38.920 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:38.920 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:38.920 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:38.920 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:38.920 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:38.920 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:38.920 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:38.920 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:38.920 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:38.920 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:38.920 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.920 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.920 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.920 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:38.920 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.920 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:38.920 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:38.920 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:38.920 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:38.920 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:38.920 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:38.920 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:38.920 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:38.920 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:38.920 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:38.920 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:38.920 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:38.920 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:38.920 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:38.920 
14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:38.920 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:38.920 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:38.920 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:38.920 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:38.920 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:38.920 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:38.920 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:38.920 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:10:38.920 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:10:38.920 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:10:38.920 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:40.826 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:40.826 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:40.826 
14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:40.826 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:40.826 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # is_hw=yes 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- 
# NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:40.826 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:40.827 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:40.827 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:40.827 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:40.827 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:40.827 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:10:40.827 00:10:40.827 --- 10.0.0.2 ping statistics --- 00:10:40.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.827 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:10:40.827 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:40.827 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:40.827 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:10:40.827 00:10:40.827 --- 10.0.0.1 ping statistics --- 00:10:40.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.827 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:10:40.827 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:40.827 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # return 0 00:10:40.827 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:40.827 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:40.827 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:40.827 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:40.827 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:40.827 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:40.827 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:40.827 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:40.827 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:40.827 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:40.827 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:40.827 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # nvmfpid=1285524 00:10:40.827 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:40.827 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # waitforlisten 1285524 00:10:40.827 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 1285524 ']' 00:10:40.827 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:40.827 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:40.827 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:40.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:40.827 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:40.827 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:41.086 [2024-11-02 14:26:32.894204] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
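Everything from nvmftestinit above is test/nvmf/common.sh building the per-test topology: the target-side port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace as 10.0.0.2, the initiator port (cvl_0_1) stays in the root namespace as 10.0.0.1, an iptables rule opens TCP/4420, both directions are pinged, nvme-tcp is loaded, and nvmf_tgt is then launched inside the namespace. A condensed hand-written sketch of the same wiring (interface, namespace, and address names copied from the trace; the nvmf_tgt path is assumed relative to an SPDK checkout):

    NS=cvl_0_0_ns_spdk        # namespace that hosts the SPDK target side
    TGT_IF=cvl_0_0            # target port    -> 10.0.0.2 inside $NS
    INI_IF=cvl_0_1            # initiator port -> 10.0.0.1 in the root namespace

    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INI_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

    ping -c 1 10.0.0.2                         # root ns reaches the target address
    ip netns exec "$NS" ping -c 1 10.0.0.1     # target ns reaches the initiator address
    modprobe nvme-tcp                          # host-side initiator driver

    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

Once the target is up and listening on its RPC socket, the rest of nmic.sh drives it through rpc_cmd calls (nvmf_create_transport, bdev_malloc_create, nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener), as the trace continues below.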
00:10:41.086 [2024-11-02 14:26:32.894300] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:41.086 [2024-11-02 14:26:32.967717] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:41.086 [2024-11-02 14:26:33.059155] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:41.086 [2024-11-02 14:26:33.059205] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:41.086 [2024-11-02 14:26:33.059234] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:41.086 [2024-11-02 14:26:33.059245] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:41.086 [2024-11-02 14:26:33.059263] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:41.086 [2024-11-02 14:26:33.059347] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:41.086 [2024-11-02 14:26:33.059408] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:41.086 [2024-11-02 14:26:33.059473] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:10:41.086 [2024-11-02 14:26:33.059476] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.345 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:41.345 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:10:41.345 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:41.345 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:41.345 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:41.345 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:41.345 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:41.345 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.345 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:41.345 [2024-11-02 14:26:33.220143] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:41.345 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.345 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:41.345 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.345 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:41.345 Malloc0 00:10:41.345 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.345 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:41.345 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.345 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:10:41.345 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.345 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:41.345 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.345 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:41.345 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.345 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:41.345 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.345 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:41.345 [2024-11-02 14:26:33.271755] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:41.345 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.345 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:41.345 test case1: single bdev can't be used in multiple subsystems 00:10:41.345 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:41.345 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.345 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:41.345 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.345 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:41.345 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.345 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:41.345 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.345 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:41.345 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:41.345 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.345 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:41.345 [2024-11-02 14:26:33.295582] bdev.c:8193:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:41.345 [2024-11-02 14:26:33.295613] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:41.345 [2024-11-02 14:26:33.295644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.345 request: 00:10:41.345 { 00:10:41.346 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:41.346 "namespace": { 00:10:41.346 "bdev_name": "Malloc0", 00:10:41.346 "no_auto_visible": false 
00:10:41.346 }, 00:10:41.346 "method": "nvmf_subsystem_add_ns", 00:10:41.346 "req_id": 1 00:10:41.346 } 00:10:41.346 Got JSON-RPC error response 00:10:41.346 response: 00:10:41.346 { 00:10:41.346 "code": -32602, 00:10:41.346 "message": "Invalid parameters" 00:10:41.346 } 00:10:41.346 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:41.346 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:41.346 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:41.346 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:41.346 Adding namespace failed - expected result. 00:10:41.346 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:41.346 test case2: host connect to nvmf target in multiple paths 00:10:41.346 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:41.346 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.346 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:41.346 [2024-11-02 14:26:33.303684] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:41.346 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.346 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:41.912 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:42.480 14:26:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:42.480 14:26:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:10:42.480 14:26:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:42.480 14:26:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:42.480 14:26:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:10:45.009 14:26:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:45.009 14:26:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:45.009 14:26:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:45.009 14:26:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:45.009 14:26:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:45.009 14:26:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:10:45.009 14:26:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:45.009 [global] 00:10:45.009 thread=1 00:10:45.009 invalidate=1 00:10:45.009 rw=write 00:10:45.009 time_based=1 00:10:45.009 runtime=1 00:10:45.009 ioengine=libaio 00:10:45.009 direct=1 00:10:45.009 bs=4096 00:10:45.009 iodepth=1 00:10:45.009 norandommap=0 00:10:45.009 numjobs=1 00:10:45.009 00:10:45.009 verify_dump=1 00:10:45.009 verify_backlog=512 00:10:45.009 verify_state_save=0 00:10:45.009 do_verify=1 00:10:45.009 verify=crc32c-intel 00:10:45.009 [job0] 00:10:45.009 filename=/dev/nvme0n1 00:10:45.009 Could not set queue depth (nvme0n1) 00:10:45.009 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:45.009 fio-3.35 00:10:45.009 Starting 1 thread 00:10:45.945 00:10:45.945 job0: (groupid=0, jobs=1): err= 0: pid=1286161: Sat Nov 2 14:26:37 2024 00:10:45.945 read: IOPS=21, BW=86.9KiB/s (89.0kB/s)(88.0KiB/1013msec) 00:10:45.945 slat (nsec): min=6319, max=35046, avg=30317.91, stdev=8480.52 00:10:45.945 clat (usec): min=40880, max=42001, avg=41496.28, stdev=508.53 00:10:45.945 lat (usec): min=40915, max=42035, avg=41526.59, stdev=510.91 00:10:45.945 clat percentiles (usec): 00:10:45.945 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:45.945 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:10:45.945 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:45.945 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:45.945 | 99.99th=[42206] 00:10:45.945 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:10:45.945 slat (nsec): min=6316, max=32004, avg=7543.92, stdev=2392.77 00:10:45.945 clat (usec): min=159, max=330, avg=181.35, stdev=12.09 00:10:45.945 lat (usec): min=166, max=362, avg=188.90, stdev=12.75 00:10:45.945 clat percentiles (usec): 00:10:45.945 | 1.00th=[ 161], 5.00th=[ 165], 10.00th=[ 167], 20.00th=[ 172], 00:10:45.945 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 182], 60.00th=[ 184], 00:10:45.945 | 70.00th=[ 186], 80.00th=[ 190], 90.00th=[ 194], 95.00th=[ 200], 00:10:45.945 | 99.00th=[ 206], 99.50th=[ 215], 99.90th=[ 330], 99.95th=[ 330], 00:10:45.945 | 99.99th=[ 330] 00:10:45.945 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:45.945 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:45.945 lat (usec) : 250=95.69%, 500=0.19% 00:10:45.945 lat (msec) : 50=4.12% 00:10:45.945 cpu : usr=0.30%, sys=0.30%, ctx=536, majf=0, minf=1 00:10:45.945 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:45.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.945 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.945 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:45.945 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:45.945 00:10:45.945 Run status group 0 (all jobs): 00:10:45.945 READ: bw=86.9KiB/s (89.0kB/s), 86.9KiB/s-86.9KiB/s (89.0kB/s-89.0kB/s), io=88.0KiB (90.1kB), run=1013-1013msec 00:10:45.945 WRITE: bw=2022KiB/s (2070kB/s), 2022KiB/s-2022KiB/s (2070kB/s-2070kB/s), io=2048KiB (2097kB), run=1013-1013msec 00:10:45.945 00:10:45.945 Disk stats (read/write): 00:10:45.945 nvme0n1: ios=46/512, merge=0/0, ticks=1752/89, in_queue=1841, util=98.50% 00:10:45.945 14:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:46.204 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:46.204 14:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:46.204 14:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:10:46.204 14:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:46.204 14:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:46.204 14:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:46.204 14:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:46.204 14:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:10:46.204 14:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:46.204 14:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:46.204 14:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:46.204 14:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:46.204 14:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:46.204 14:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:46.204 14:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:46.204 14:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:46.204 rmmod nvme_tcp 00:10:46.204 rmmod nvme_fabrics 00:10:46.204 rmmod nvme_keyring 00:10:46.204 14:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:46.204 14:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:46.205 14:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:46.205 14:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@513 -- # '[' -n 1285524 ']' 00:10:46.205 14:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # killprocess 1285524 00:10:46.205 14:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 1285524 ']' 00:10:46.205 14:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 1285524 00:10:46.205 14:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:10:46.205 14:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:46.205 14:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1285524 00:10:46.205 14:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:46.205 14:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:46.205 14:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1285524' 00:10:46.205 killing process with pid 1285524 00:10:46.205 14:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 1285524 00:10:46.205 14:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@974 -- # wait 1285524 00:10:46.465 14:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:46.465 14:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:46.465 14:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:46.465 14:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:46.465 14:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-save 00:10:46.465 14:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:46.465 14:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-restore 00:10:46.465 14:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:46.465 14:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:46.465 14:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:46.465 14:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:46.465 14:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:49.007 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:49.007 00:10:49.007 real 0m9.993s 00:10:49.007 user 0m22.279s 00:10:49.007 sys 0m2.350s 00:10:49.007 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:49.007 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:49.007 ************************************ 00:10:49.007 END TEST nvmf_nmic 00:10:49.007 ************************************ 00:10:49.007 14:26:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:49.007 14:26:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:49.007 14:26:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:49.007 14:26:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:49.007 ************************************ 00:10:49.007 START TEST nvmf_fio_target 00:10:49.007 ************************************ 00:10:49.007 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:49.007 * Looking for test storage... 
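For reference, the nvmftestfini teardown traced just above for the nmic test reduces to roughly the sequence below. The kill/wait handling of the target pid is an assumption about what killprocess does internally; the remaining commands mirror the trace:

# Sketch only: undo what the nmic test set up.
nvme disconnect -n nqn.2016-06.io.spdk:cnode1        # drop the host-side controllers
kill "$nvmfpid" && wait "$nvmfpid" 2>/dev/null       # stop the nvmf_tgt started earlier
modprobe -r nvme-tcp nvme-fabrics                    # unload initiator modules once unused
# Strip only the firewall rules the harness added (they carry an SPDK_NVMF comment).
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip netns del cvl_0_0_ns_spdk                         # removes the target-side namespace
ip -4 addr flush cvl_0_1                             # clear the initiator-side interface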
00:10:49.007 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:49.007 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:49.007 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:10:49.007 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:49.007 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:49.007 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:49.007 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:49.007 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:49.007 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:49.007 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:49.007 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:49.007 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:49.007 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:49.007 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:49.007 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:49.007 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:49.007 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:49.007 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:49.007 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:49.007 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:49.007 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:49.007 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:49.007 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:49.007 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:49.007 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:49.007 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:49.007 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:49.007 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:49.007 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:49.007 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:49.007 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:49.007 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:49.007 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:49.007 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:49.007 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:49.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.007 --rc genhtml_branch_coverage=1 00:10:49.007 --rc genhtml_function_coverage=1 00:10:49.007 --rc genhtml_legend=1 00:10:49.007 --rc geninfo_all_blocks=1 00:10:49.007 --rc geninfo_unexecuted_blocks=1 00:10:49.007 00:10:49.007 ' 00:10:49.007 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:49.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.007 --rc genhtml_branch_coverage=1 00:10:49.007 --rc genhtml_function_coverage=1 00:10:49.007 --rc genhtml_legend=1 00:10:49.007 --rc geninfo_all_blocks=1 00:10:49.007 --rc geninfo_unexecuted_blocks=1 00:10:49.007 00:10:49.007 ' 00:10:49.007 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:49.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.007 --rc genhtml_branch_coverage=1 00:10:49.007 --rc genhtml_function_coverage=1 00:10:49.007 --rc genhtml_legend=1 00:10:49.007 --rc geninfo_all_blocks=1 00:10:49.007 --rc geninfo_unexecuted_blocks=1 00:10:49.007 00:10:49.007 ' 00:10:49.007 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:49.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.007 --rc genhtml_branch_coverage=1 00:10:49.007 --rc genhtml_function_coverage=1 00:10:49.007 --rc genhtml_legend=1 00:10:49.007 --rc geninfo_all_blocks=1 00:10:49.007 --rc geninfo_unexecuted_blocks=1 00:10:49.007 00:10:49.007 ' 00:10:49.007 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:49.007 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:10:49.007 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:49.007 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:49.007 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:49.007 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:49.007 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:49.007 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:49.007 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:49.007 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:49.007 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:49.007 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:49.008 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:49.008 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:49.008 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:49.008 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:49.008 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:49.008 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:49.008 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:49.008 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:49.008 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:49.008 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:49.008 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:49.008 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.008 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.008 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.008 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:49.008 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.008 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:49.008 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:49.008 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:49.008 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:49.008 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:49.008 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:49.008 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:49.008 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:49.008 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:49.008 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:49.008 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:49.008 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:49.008 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:49.008 14:26:40 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:49.008 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:49.008 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:49.008 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:49.008 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:49.008 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:49.008 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:49.008 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:49.008 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:49.008 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:49.008 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:10:49.008 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:10:49.008 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:49.008 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.913 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:50.913 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:50.913 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:50.913 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:50.913 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:50.913 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:50.913 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:50.913 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:50.913 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:50.913 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:50.913 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:50.913 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:50.913 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:50.913 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:50.913 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:50.913 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:50.913 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:50.913 14:26:42 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:50.913 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:50.913 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:50.913 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:50.914 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:50.914 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:50.914 14:26:42 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:50.914 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:50.914 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # is_hw=yes 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:50.914 14:26:42 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:50.914 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:50.914 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.377 ms 00:10:50.914 00:10:50.914 --- 10.0.0.2 ping statistics --- 00:10:50.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.914 rtt min/avg/max/mdev = 0.377/0.377/0.377/0.000 ms 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:50.914 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:50.914 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:10:50.914 00:10:50.914 --- 10.0.0.1 ping statistics --- 00:10:50.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.914 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # return 0 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.914 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # nvmfpid=1288244 00:10:50.915 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:50.915 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # waitforlisten 1288244 00:10:50.915 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 1288244 ']' 00:10:50.915 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:50.915 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:50.915 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:50.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:50.915 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:50.915 14:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.915 [2024-11-02 14:26:42.950100] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
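The nvmf_tcp_init sequence traced above (namespace creation, address assignment, the SPDK_NVMF-tagged iptables rule, and the two ping checks) condenses to the sketch below; the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addressing are simply the ones this run happens to use:

# Sketch only: bring up the TCP test network between target namespace and initiator.
TGT_IF=cvl_0_0; INI_IF=cvl_0_1; NETNS=cvl_0_0_ns_spdk

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NETNS"
ip link set "$TGT_IF" netns "$NETNS"                          # target port lives inside the namespace
ip addr add 10.0.0.1/24 dev "$INI_IF"                         # initiator side, host namespace
ip netns exec "$NETNS" ip addr add 10.0.0.2/24 dev "$TGT_IF"  # target side, inside the namespace
ip link set "$INI_IF" up
ip netns exec "$NETNS" ip link set "$TGT_IF" up
ip netns exec "$NETNS" ip link set lo up
# Allow NVMe/TCP (port 4420) in from the initiator interface, tagged so teardown can find it.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                            # host can reach the target address
ip netns exec "$NETNS" ping -c 1 10.0.0.1                     # and the namespace can reach the initiator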
00:10:50.915 [2024-11-02 14:26:42.950201] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:51.173 [2024-11-02 14:26:43.032292] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:51.173 [2024-11-02 14:26:43.140890] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:51.173 [2024-11-02 14:26:43.140955] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:51.173 [2024-11-02 14:26:43.140997] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:51.173 [2024-11-02 14:26:43.141020] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:51.173 [2024-11-02 14:26:43.141040] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:51.173 [2024-11-02 14:26:43.141174] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:51.173 [2024-11-02 14:26:43.141286] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:51.173 [2024-11-02 14:26:43.141318] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:10:51.173 [2024-11-02 14:26:43.141328] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.431 14:26:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:51.431 14:26:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:10:51.431 14:26:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:51.432 14:26:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:51.432 14:26:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.432 14:26:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:51.432 14:26:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:51.690 [2024-11-02 14:26:43.634598] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:51.690 14:26:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:51.993 14:26:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:51.993 14:26:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:52.301 14:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:52.301 14:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:52.608 14:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:52.608 14:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:52.866 14:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:52.866 14:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:53.124 14:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:53.383 14:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:53.383 14:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:53.641 14:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:53.641 14:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:54.207 14:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:54.207 14:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:54.207 14:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:54.465 14:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:54.465 14:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:54.723 14:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:54.723 14:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:55.288 14:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:55.288 [2024-11-02 14:26:47.279658] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:55.288 14:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:55.546 14:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:55.804 14:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:56.738 14:26:48 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:56.738 14:26:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:10:56.738 14:26:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:56.738 14:26:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:10:56.738 14:26:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:10:56.738 14:26:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:10:58.651 14:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:58.651 14:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:58.651 14:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:58.651 14:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:10:58.651 14:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:58.651 14:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:10:58.651 14:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:58.651 [global] 00:10:58.651 thread=1 00:10:58.651 invalidate=1 00:10:58.651 rw=write 00:10:58.651 time_based=1 00:10:58.651 runtime=1 00:10:58.651 ioengine=libaio 00:10:58.651 direct=1 00:10:58.651 bs=4096 00:10:58.651 iodepth=1 00:10:58.651 norandommap=0 00:10:58.651 numjobs=1 00:10:58.651 00:10:58.651 verify_dump=1 00:10:58.651 verify_backlog=512 00:10:58.651 verify_state_save=0 00:10:58.651 do_verify=1 00:10:58.651 verify=crc32c-intel 00:10:58.651 [job0] 00:10:58.651 filename=/dev/nvme0n1 00:10:58.651 [job1] 00:10:58.651 filename=/dev/nvme0n2 00:10:58.651 [job2] 00:10:58.651 filename=/dev/nvme0n3 00:10:58.651 [job3] 00:10:58.651 filename=/dev/nvme0n4 00:10:58.651 Could not set queue depth (nvme0n1) 00:10:58.651 Could not set queue depth (nvme0n2) 00:10:58.651 Could not set queue depth (nvme0n3) 00:10:58.651 Could not set queue depth (nvme0n4) 00:10:58.908 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:58.908 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:58.908 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:58.908 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:58.908 fio-3.35 00:10:58.908 Starting 4 threads 00:11:00.284 00:11:00.284 job0: (groupid=0, jobs=1): err= 0: pid=1289330: Sat Nov 2 14:26:52 2024 00:11:00.284 read: IOPS=20, BW=83.1KiB/s (85.1kB/s)(84.0KiB/1011msec) 00:11:00.284 slat (nsec): min=9222, max=35503, avg=26868.71, stdev=9432.84 00:11:00.284 clat (usec): min=40898, max=42106, avg=41396.40, stdev=512.99 00:11:00.284 lat (usec): min=40932, max=42115, avg=41423.27, stdev=506.71 00:11:00.284 clat percentiles (usec): 00:11:00.284 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 
20.00th=[41157], 00:11:00.284 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:11:00.284 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:11:00.284 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:00.284 | 99.99th=[42206] 00:11:00.284 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:11:00.284 slat (nsec): min=6634, max=64781, avg=17778.49, stdev=9337.16 00:11:00.284 clat (usec): min=183, max=484, avg=252.39, stdev=44.70 00:11:00.284 lat (usec): min=204, max=501, avg=270.17, stdev=46.18 00:11:00.284 clat percentiles (usec): 00:11:00.284 | 1.00th=[ 192], 5.00th=[ 204], 10.00th=[ 210], 20.00th=[ 221], 00:11:00.284 | 30.00th=[ 229], 40.00th=[ 235], 50.00th=[ 241], 60.00th=[ 249], 00:11:00.284 | 70.00th=[ 260], 80.00th=[ 273], 90.00th=[ 314], 95.00th=[ 359], 00:11:00.284 | 99.00th=[ 404], 99.50th=[ 424], 99.90th=[ 486], 99.95th=[ 486], 00:11:00.284 | 99.99th=[ 486] 00:11:00.284 bw ( KiB/s): min= 4087, max= 4087, per=29.36%, avg=4087.00, stdev= 0.00, samples=1 00:11:00.284 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:11:00.284 lat (usec) : 250=58.16%, 500=37.90% 00:11:00.284 lat (msec) : 50=3.94% 00:11:00.284 cpu : usr=0.10%, sys=1.29%, ctx=534, majf=0, minf=1 00:11:00.284 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:00.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.284 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.284 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:00.284 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:00.284 job1: (groupid=0, jobs=1): err= 0: pid=1289332: Sat Nov 2 14:26:52 2024 00:11:00.284 read: IOPS=1693, BW=6773KiB/s (6935kB/s)(6976KiB/1030msec) 00:11:00.284 slat (nsec): min=5871, max=54750, avg=19017.26, stdev=8393.70 00:11:00.284 clat (usec): min=236, max=40793, avg=320.71, stdev=970.92 00:11:00.284 lat (usec): min=244, max=40810, avg=339.73, stdev=970.98 00:11:00.284 clat percentiles (usec): 00:11:00.284 | 1.00th=[ 249], 5.00th=[ 255], 10.00th=[ 260], 20.00th=[ 265], 00:11:00.284 | 30.00th=[ 269], 40.00th=[ 281], 50.00th=[ 285], 60.00th=[ 289], 00:11:00.284 | 70.00th=[ 302], 80.00th=[ 326], 90.00th=[ 338], 95.00th=[ 367], 00:11:00.284 | 99.00th=[ 498], 99.50th=[ 506], 99.90th=[ 553], 99.95th=[40633], 00:11:00.284 | 99.99th=[40633] 00:11:00.284 write: IOPS=1988, BW=7953KiB/s (8144kB/s)(8192KiB/1030msec); 0 zone resets 00:11:00.284 slat (nsec): min=6409, max=46347, avg=14232.22, stdev=5239.13 00:11:00.284 clat (usec): min=156, max=718, avg=189.89, stdev=25.86 00:11:00.284 lat (usec): min=164, max=734, avg=204.13, stdev=27.39 00:11:00.284 clat percentiles (usec): 00:11:00.284 | 1.00th=[ 163], 5.00th=[ 167], 10.00th=[ 172], 20.00th=[ 176], 00:11:00.284 | 30.00th=[ 180], 40.00th=[ 182], 50.00th=[ 184], 60.00th=[ 188], 00:11:00.284 | 70.00th=[ 194], 80.00th=[ 202], 90.00th=[ 215], 95.00th=[ 231], 00:11:00.284 | 99.00th=[ 273], 99.50th=[ 293], 99.90th=[ 416], 99.95th=[ 545], 00:11:00.284 | 99.99th=[ 717] 00:11:00.284 bw ( KiB/s): min= 8175, max= 8192, per=58.79%, avg=8183.50, stdev=12.02, samples=2 00:11:00.284 iops : min= 2043, max= 2048, avg=2045.50, stdev= 3.54, samples=2 00:11:00.284 lat (usec) : 250=53.64%, 500=45.94%, 750=0.40% 00:11:00.284 lat (msec) : 50=0.03% 00:11:00.284 cpu : usr=3.69%, sys=6.12%, ctx=3792, majf=0, minf=1 00:11:00.284 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:11:00.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.284 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.284 issued rwts: total=1744,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:00.284 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:00.284 job2: (groupid=0, jobs=1): err= 0: pid=1289333: Sat Nov 2 14:26:52 2024 00:11:00.284 read: IOPS=20, BW=83.1KiB/s (85.1kB/s)(84.0KiB/1011msec) 00:11:00.284 slat (nsec): min=8666, max=34658, avg=26279.90, stdev=9294.34 00:11:00.284 clat (usec): min=40897, max=42007, avg=41304.40, stdev=493.90 00:11:00.284 lat (usec): min=40931, max=42023, avg=41330.68, stdev=488.32 00:11:00.284 clat percentiles (usec): 00:11:00.284 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:11:00.284 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:00.284 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:11:00.284 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:00.284 | 99.99th=[42206] 00:11:00.284 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:11:00.284 slat (nsec): min=7123, max=88591, avg=18450.17, stdev=10531.14 00:11:00.284 clat (usec): min=193, max=759, avg=255.58, stdev=47.43 00:11:00.284 lat (usec): min=205, max=770, avg=274.03, stdev=49.69 00:11:00.284 clat percentiles (usec): 00:11:00.284 | 1.00th=[ 204], 5.00th=[ 212], 10.00th=[ 219], 20.00th=[ 225], 00:11:00.284 | 30.00th=[ 231], 40.00th=[ 237], 50.00th=[ 245], 60.00th=[ 251], 00:11:00.284 | 70.00th=[ 260], 80.00th=[ 269], 90.00th=[ 318], 95.00th=[ 359], 00:11:00.284 | 99.00th=[ 408], 99.50th=[ 424], 99.90th=[ 758], 99.95th=[ 758], 00:11:00.284 | 99.99th=[ 758] 00:11:00.284 bw ( KiB/s): min= 4087, max= 4087, per=29.36%, avg=4087.00, stdev= 0.00, samples=1 00:11:00.284 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:11:00.284 lat (usec) : 250=56.10%, 500=39.77%, 1000=0.19% 00:11:00.284 lat (msec) : 50=3.94% 00:11:00.284 cpu : usr=0.30%, sys=1.09%, ctx=533, majf=0, minf=1 00:11:00.284 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:00.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.284 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.284 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:00.284 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:00.284 job3: (groupid=0, jobs=1): err= 0: pid=1289334: Sat Nov 2 14:26:52 2024 00:11:00.284 read: IOPS=19, BW=79.8KiB/s (81.7kB/s)(80.0KiB/1003msec) 00:11:00.284 slat (nsec): min=15179, max=34223, avg=26910.60, stdev=8134.34 00:11:00.284 clat (usec): min=40911, max=42044, avg=41800.43, stdev=362.47 00:11:00.284 lat (usec): min=40944, max=42059, avg=41827.34, stdev=359.68 00:11:00.284 clat percentiles (usec): 00:11:00.284 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:11:00.284 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:11:00.284 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:11:00.284 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:00.284 | 99.99th=[42206] 00:11:00.284 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:11:00.284 slat (nsec): min=6899, max=74262, avg=22564.19, stdev=11133.17 00:11:00.284 clat (usec): min=209, max=458, avg=296.11, stdev=51.94 00:11:00.284 lat 
(usec): min=225, max=483, avg=318.68, stdev=53.83 00:11:00.284 clat percentiles (usec): 00:11:00.284 | 1.00th=[ 217], 5.00th=[ 229], 10.00th=[ 237], 20.00th=[ 247], 00:11:00.284 | 30.00th=[ 260], 40.00th=[ 273], 50.00th=[ 285], 60.00th=[ 306], 00:11:00.284 | 70.00th=[ 326], 80.00th=[ 347], 90.00th=[ 371], 95.00th=[ 392], 00:11:00.284 | 99.00th=[ 416], 99.50th=[ 453], 99.90th=[ 461], 99.95th=[ 461], 00:11:00.285 | 99.99th=[ 461] 00:11:00.285 bw ( KiB/s): min= 4087, max= 4087, per=29.36%, avg=4087.00, stdev= 0.00, samples=1 00:11:00.285 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:11:00.285 lat (usec) : 250=21.43%, 500=74.81% 00:11:00.285 lat (msec) : 50=3.76% 00:11:00.285 cpu : usr=1.00%, sys=0.70%, ctx=532, majf=0, minf=1 00:11:00.285 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:00.285 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.285 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.285 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:00.285 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:00.285 00:11:00.285 Run status group 0 (all jobs): 00:11:00.285 READ: bw=7014KiB/s (7182kB/s), 79.8KiB/s-6773KiB/s (81.7kB/s-6935kB/s), io=7224KiB (7397kB), run=1003-1030msec 00:11:00.285 WRITE: bw=13.6MiB/s (14.3MB/s), 2026KiB/s-7953KiB/s (2074kB/s-8144kB/s), io=14.0MiB (14.7MB), run=1003-1030msec 00:11:00.285 00:11:00.285 Disk stats (read/write): 00:11:00.285 nvme0n1: ios=69/512, merge=0/0, ticks=1193/127, in_queue=1320, util=97.80% 00:11:00.285 nvme0n2: ios=1565/1688, merge=0/0, ticks=482/304, in_queue=786, util=87.89% 00:11:00.285 nvme0n3: ios=38/512, merge=0/0, ticks=903/118, in_queue=1021, util=91.74% 00:11:00.285 nvme0n4: ios=16/512, merge=0/0, ticks=671/138, in_queue=809, util=89.67% 00:11:00.285 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:00.285 [global] 00:11:00.285 thread=1 00:11:00.285 invalidate=1 00:11:00.285 rw=randwrite 00:11:00.285 time_based=1 00:11:00.285 runtime=1 00:11:00.285 ioengine=libaio 00:11:00.285 direct=1 00:11:00.285 bs=4096 00:11:00.285 iodepth=1 00:11:00.285 norandommap=0 00:11:00.285 numjobs=1 00:11:00.285 00:11:00.285 verify_dump=1 00:11:00.285 verify_backlog=512 00:11:00.285 verify_state_save=0 00:11:00.285 do_verify=1 00:11:00.285 verify=crc32c-intel 00:11:00.285 [job0] 00:11:00.285 filename=/dev/nvme0n1 00:11:00.285 [job1] 00:11:00.285 filename=/dev/nvme0n2 00:11:00.285 [job2] 00:11:00.285 filename=/dev/nvme0n3 00:11:00.285 [job3] 00:11:00.285 filename=/dev/nvme0n4 00:11:00.285 Could not set queue depth (nvme0n1) 00:11:00.285 Could not set queue depth (nvme0n2) 00:11:00.285 Could not set queue depth (nvme0n3) 00:11:00.285 Could not set queue depth (nvme0n4) 00:11:00.285 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:00.285 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:00.285 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:00.285 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:00.285 fio-3.35 00:11:00.285 Starting 4 threads 00:11:01.659 00:11:01.659 job0: (groupid=0, jobs=1): err= 0: pid=1289567: Sat Nov 
2 14:26:53 2024 00:11:01.659 read: IOPS=20, BW=83.8KiB/s (85.8kB/s)(84.0KiB/1002msec) 00:11:01.659 slat (nsec): min=12353, max=37321, avg=23378.38, stdev=9846.81 00:11:01.659 clat (usec): min=3883, max=42034, avg=40134.66, stdev=8306.85 00:11:01.659 lat (usec): min=3899, max=42050, avg=40158.04, stdev=8308.47 00:11:01.659 clat percentiles (usec): 00:11:01.659 | 1.00th=[ 3884], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:11:01.659 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:11:01.659 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:11:01.659 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:01.659 | 99.99th=[42206] 00:11:01.659 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:11:01.659 slat (nsec): min=8347, max=84699, avg=15996.97, stdev=7486.03 00:11:01.659 clat (usec): min=184, max=476, avg=287.57, stdev=56.71 00:11:01.659 lat (usec): min=192, max=497, avg=303.57, stdev=59.41 00:11:01.659 clat percentiles (usec): 00:11:01.659 | 1.00th=[ 190], 5.00th=[ 204], 10.00th=[ 215], 20.00th=[ 241], 00:11:01.659 | 30.00th=[ 251], 40.00th=[ 262], 50.00th=[ 277], 60.00th=[ 297], 00:11:01.659 | 70.00th=[ 322], 80.00th=[ 347], 90.00th=[ 367], 95.00th=[ 379], 00:11:01.659 | 99.00th=[ 416], 99.50th=[ 465], 99.90th=[ 478], 99.95th=[ 478], 00:11:01.659 | 99.99th=[ 478] 00:11:01.659 bw ( KiB/s): min= 4096, max= 4096, per=34.37%, avg=4096.00, stdev= 0.00, samples=1 00:11:01.659 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:01.659 lat (usec) : 250=28.33%, 500=67.73% 00:11:01.659 lat (msec) : 4=0.19%, 50=3.75% 00:11:01.659 cpu : usr=0.50%, sys=1.10%, ctx=534, majf=0, minf=1 00:11:01.659 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:01.659 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.659 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.659 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:01.659 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:01.659 job1: (groupid=0, jobs=1): err= 0: pid=1289568: Sat Nov 2 14:26:53 2024 00:11:01.659 read: IOPS=20, BW=81.5KiB/s (83.4kB/s)(84.0KiB/1031msec) 00:11:01.659 slat (nsec): min=15552, max=34423, avg=23040.38, stdev=8727.50 00:11:01.659 clat (usec): min=40778, max=42049, avg=41865.70, stdev=327.73 00:11:01.659 lat (usec): min=40794, max=42067, avg=41888.74, stdev=329.63 00:11:01.659 clat percentiles (usec): 00:11:01.659 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:11:01.659 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:11:01.659 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:11:01.659 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:01.659 | 99.99th=[42206] 00:11:01.659 write: IOPS=496, BW=1986KiB/s (2034kB/s)(2048KiB/1031msec); 0 zone resets 00:11:01.659 slat (nsec): min=6452, max=54738, avg=14947.83, stdev=7966.08 00:11:01.659 clat (usec): min=194, max=860, avg=275.32, stdev=61.81 00:11:01.659 lat (usec): min=209, max=877, avg=290.27, stdev=63.18 00:11:01.659 clat percentiles (usec): 00:11:01.659 | 1.00th=[ 208], 5.00th=[ 219], 10.00th=[ 227], 20.00th=[ 237], 00:11:01.659 | 30.00th=[ 241], 40.00th=[ 245], 50.00th=[ 251], 60.00th=[ 265], 00:11:01.659 | 70.00th=[ 285], 80.00th=[ 326], 90.00th=[ 359], 95.00th=[ 375], 00:11:01.659 | 99.00th=[ 437], 99.50th=[ 603], 99.90th=[ 865], 99.95th=[ 865], 
00:11:01.659 | 99.99th=[ 865] 00:11:01.659 bw ( KiB/s): min= 4096, max= 4096, per=34.37%, avg=4096.00, stdev= 0.00, samples=1 00:11:01.659 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:01.659 lat (usec) : 250=47.47%, 500=47.84%, 750=0.56%, 1000=0.19% 00:11:01.659 lat (msec) : 50=3.94% 00:11:01.659 cpu : usr=0.19%, sys=0.97%, ctx=535, majf=0, minf=1 00:11:01.659 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:01.659 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.659 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.659 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:01.659 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:01.659 job2: (groupid=0, jobs=1): err= 0: pid=1289569: Sat Nov 2 14:26:53 2024 00:11:01.659 read: IOPS=21, BW=87.7KiB/s (89.8kB/s)(88.0KiB/1003msec) 00:11:01.659 slat (nsec): min=15386, max=37221, avg=24243.59, stdev=9669.49 00:11:01.659 clat (usec): min=432, max=42437, avg=38721.42, stdev=10526.46 00:11:01.659 lat (usec): min=451, max=42453, avg=38745.66, stdev=10528.55 00:11:01.659 clat percentiles (usec): 00:11:01.659 | 1.00th=[ 433], 5.00th=[13173], 10.00th=[41157], 20.00th=[41681], 00:11:01.659 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:11:01.659 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:11:01.659 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:01.659 | 99.99th=[42206] 00:11:01.659 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:11:01.659 slat (nsec): min=8170, max=47935, avg=15228.61, stdev=6018.25 00:11:01.659 clat (usec): min=212, max=470, avg=273.07, stdev=42.51 00:11:01.659 lat (usec): min=227, max=480, avg=288.30, stdev=43.22 00:11:01.659 clat percentiles (usec): 00:11:01.659 | 1.00th=[ 221], 5.00th=[ 231], 10.00th=[ 237], 20.00th=[ 241], 00:11:01.659 | 30.00th=[ 245], 40.00th=[ 251], 50.00th=[ 258], 60.00th=[ 265], 00:11:01.659 | 70.00th=[ 281], 80.00th=[ 306], 90.00th=[ 338], 95.00th=[ 367], 00:11:01.659 | 99.00th=[ 396], 99.50th=[ 441], 99.90th=[ 469], 99.95th=[ 469], 00:11:01.659 | 99.99th=[ 469] 00:11:01.659 bw ( KiB/s): min= 4096, max= 4096, per=34.37%, avg=4096.00, stdev= 0.00, samples=1 00:11:01.659 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:01.659 lat (usec) : 250=36.70%, 500=59.36% 00:11:01.659 lat (msec) : 20=0.19%, 50=3.75% 00:11:01.659 cpu : usr=0.60%, sys=1.00%, ctx=535, majf=0, minf=1 00:11:01.659 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:01.659 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.659 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.659 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:01.659 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:01.659 job3: (groupid=0, jobs=1): err= 0: pid=1289570: Sat Nov 2 14:26:53 2024 00:11:01.659 read: IOPS=1422, BW=5690KiB/s (5827kB/s)(5696KiB/1001msec) 00:11:01.659 slat (nsec): min=7060, max=55014, avg=12615.37, stdev=5651.46 00:11:01.659 clat (usec): min=344, max=545, avg=390.05, stdev=28.06 00:11:01.659 lat (usec): min=352, max=566, avg=402.67, stdev=31.79 00:11:01.659 clat percentiles (usec): 00:11:01.659 | 1.00th=[ 351], 5.00th=[ 355], 10.00th=[ 359], 20.00th=[ 367], 00:11:01.659 | 30.00th=[ 371], 40.00th=[ 379], 50.00th=[ 388], 60.00th=[ 400], 00:11:01.659 | 
70.00th=[ 404], 80.00th=[ 412], 90.00th=[ 420], 95.00th=[ 429], 00:11:01.659 | 99.00th=[ 486], 99.50th=[ 519], 99.90th=[ 545], 99.95th=[ 545], 00:11:01.659 | 99.99th=[ 545] 00:11:01.659 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:11:01.659 slat (nsec): min=8169, max=64028, avg=17695.12, stdev=7242.34 00:11:01.659 clat (usec): min=188, max=1414, avg=251.77, stdev=57.45 00:11:01.659 lat (usec): min=197, max=1454, avg=269.46, stdev=59.52 00:11:01.659 clat percentiles (usec): 00:11:01.659 | 1.00th=[ 194], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 219], 00:11:01.659 | 30.00th=[ 229], 40.00th=[ 233], 50.00th=[ 239], 60.00th=[ 247], 00:11:01.659 | 70.00th=[ 260], 80.00th=[ 277], 90.00th=[ 306], 95.00th=[ 334], 00:11:01.659 | 99.00th=[ 429], 99.50th=[ 457], 99.90th=[ 906], 99.95th=[ 1418], 00:11:01.659 | 99.99th=[ 1418] 00:11:01.659 bw ( KiB/s): min= 8064, max= 8064, per=67.66%, avg=8064.00, stdev= 0.00, samples=1 00:11:01.659 iops : min= 2016, max= 2016, avg=2016.00, stdev= 0.00, samples=1 00:11:01.660 lat (usec) : 250=32.70%, 500=66.82%, 750=0.41%, 1000=0.03% 00:11:01.660 lat (msec) : 2=0.03% 00:11:01.660 cpu : usr=3.10%, sys=6.40%, ctx=2961, majf=0, minf=1 00:11:01.660 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:01.660 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.660 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.660 issued rwts: total=1424,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:01.660 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:01.660 00:11:01.660 Run status group 0 (all jobs): 00:11:01.660 READ: bw=5773KiB/s (5912kB/s), 81.5KiB/s-5690KiB/s (83.4kB/s-5827kB/s), io=5952KiB (6095kB), run=1001-1031msec 00:11:01.660 WRITE: bw=11.6MiB/s (12.2MB/s), 1986KiB/s-6138KiB/s (2034kB/s-6285kB/s), io=12.0MiB (12.6MB), run=1001-1031msec 00:11:01.660 00:11:01.660 Disk stats (read/write): 00:11:01.660 nvme0n1: ios=69/512, merge=0/0, ticks=855/143, in_queue=998, util=97.80% 00:11:01.660 nvme0n2: ios=67/512, merge=0/0, ticks=889/137, in_queue=1026, util=98.17% 00:11:01.660 nvme0n3: ios=42/512, merge=0/0, ticks=1632/136, in_queue=1768, util=97.91% 00:11:01.660 nvme0n4: ios=1081/1531, merge=0/0, ticks=582/365, in_queue=947, util=97.89% 00:11:01.660 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:01.660 [global] 00:11:01.660 thread=1 00:11:01.660 invalidate=1 00:11:01.660 rw=write 00:11:01.660 time_based=1 00:11:01.660 runtime=1 00:11:01.660 ioengine=libaio 00:11:01.660 direct=1 00:11:01.660 bs=4096 00:11:01.660 iodepth=128 00:11:01.660 norandommap=0 00:11:01.660 numjobs=1 00:11:01.660 00:11:01.660 verify_dump=1 00:11:01.660 verify_backlog=512 00:11:01.660 verify_state_save=0 00:11:01.660 do_verify=1 00:11:01.660 verify=crc32c-intel 00:11:01.660 [job0] 00:11:01.660 filename=/dev/nvme0n1 00:11:01.660 [job1] 00:11:01.660 filename=/dev/nvme0n2 00:11:01.660 [job2] 00:11:01.660 filename=/dev/nvme0n3 00:11:01.660 [job3] 00:11:01.660 filename=/dev/nvme0n4 00:11:01.660 Could not set queue depth (nvme0n1) 00:11:01.660 Could not set queue depth (nvme0n2) 00:11:01.660 Could not set queue depth (nvme0n3) 00:11:01.660 Could not set queue depth (nvme0n4) 00:11:01.660 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:01.660 job1: (g=0): rw=write, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:01.660 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:01.660 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:01.660 fio-3.35 00:11:01.660 Starting 4 threads 00:11:03.041 00:11:03.041 job0: (groupid=0, jobs=1): err= 0: pid=1289913: Sat Nov 2 14:26:54 2024 00:11:03.041 read: IOPS=2216, BW=8868KiB/s (9080kB/s)(8912KiB/1005msec) 00:11:03.041 slat (usec): min=2, max=26503, avg=199.95, stdev=1489.05 00:11:03.041 clat (usec): min=1975, max=76133, avg=26306.72, stdev=12680.79 00:11:03.041 lat (usec): min=1981, max=76139, avg=26506.67, stdev=12729.35 00:11:03.041 clat percentiles (usec): 00:11:03.041 | 1.00th=[ 4424], 5.00th=[11076], 10.00th=[14877], 20.00th=[16909], 00:11:03.041 | 30.00th=[18482], 40.00th=[19268], 50.00th=[21627], 60.00th=[26346], 00:11:03.041 | 70.00th=[32900], 80.00th=[35914], 90.00th=[44303], 95.00th=[52691], 00:11:03.041 | 99.00th=[66847], 99.50th=[66847], 99.90th=[70779], 99.95th=[70779], 00:11:03.041 | 99.99th=[76022] 00:11:03.041 write: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec); 0 zone resets 00:11:03.041 slat (usec): min=4, max=74225, avg=204.31, stdev=2180.23 00:11:03.041 clat (msec): min=4, max=157, avg=24.62, stdev=24.57 00:11:03.041 lat (msec): min=4, max=157, avg=24.82, stdev=24.73 00:11:03.041 clat percentiles (msec): 00:11:03.041 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 11], 20.00th=[ 14], 00:11:03.041 | 30.00th=[ 15], 40.00th=[ 16], 50.00th=[ 16], 60.00th=[ 17], 00:11:03.041 | 70.00th=[ 21], 80.00th=[ 27], 90.00th=[ 51], 95.00th=[ 58], 00:11:03.041 | 99.00th=[ 157], 99.50th=[ 157], 99.90th=[ 159], 99.95th=[ 159], 00:11:03.041 | 99.99th=[ 159] 00:11:03.041 bw ( KiB/s): min= 9392, max=11088, per=17.18%, avg=10240.00, stdev=1199.25, samples=2 00:11:03.041 iops : min= 2348, max= 2772, avg=2560.00, stdev=299.81, samples=2 00:11:03.041 lat (msec) : 2=0.21%, 4=0.02%, 10=5.26%, 20=49.90%, 50=36.57% 00:11:03.041 lat (msec) : 100=6.73%, 250=1.32% 00:11:03.041 cpu : usr=2.19%, sys=3.39%, ctx=178, majf=0, minf=1 00:11:03.041 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:11:03.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.041 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:03.041 issued rwts: total=2228,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:03.041 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:03.041 job1: (groupid=0, jobs=1): err= 0: pid=1289914: Sat Nov 2 14:26:54 2024 00:11:03.041 read: IOPS=4016, BW=15.7MiB/s (16.5MB/s)(15.7MiB/1002msec) 00:11:03.041 slat (usec): min=2, max=26016, avg=123.36, stdev=983.73 00:11:03.041 clat (usec): min=1518, max=66584, avg=16130.76, stdev=8914.93 00:11:03.041 lat (usec): min=2057, max=71181, avg=16254.12, stdev=8992.01 00:11:03.041 clat percentiles (usec): 00:11:03.041 | 1.00th=[ 4113], 5.00th=[ 5800], 10.00th=[ 8717], 20.00th=[10683], 00:11:03.041 | 30.00th=[11207], 40.00th=[12256], 50.00th=[13829], 60.00th=[15270], 00:11:03.041 | 70.00th=[17171], 80.00th=[20841], 90.00th=[25035], 95.00th=[34866], 00:11:03.041 | 99.00th=[54789], 99.50th=[55313], 99.90th=[57410], 99.95th=[57410], 00:11:03.041 | 99.99th=[66847] 00:11:03.041 write: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec); 0 zone resets 00:11:03.041 slat (usec): min=3, max=19613, avg=102.41, stdev=772.08 00:11:03.041 clat (usec): min=912, 
max=50945, avg=15208.65, stdev=8230.27 00:11:03.041 lat (usec): min=930, max=50960, avg=15311.06, stdev=8291.16 00:11:03.041 clat percentiles (usec): 00:11:03.041 | 1.00th=[ 2376], 5.00th=[ 4178], 10.00th=[ 5932], 20.00th=[ 9241], 00:11:03.041 | 30.00th=[11076], 40.00th=[11731], 50.00th=[12780], 60.00th=[13960], 00:11:03.041 | 70.00th=[17171], 80.00th=[22414], 90.00th=[26084], 95.00th=[31065], 00:11:03.041 | 99.00th=[50594], 99.50th=[50594], 99.90th=[51119], 99.95th=[51119], 00:11:03.041 | 99.99th=[51119] 00:11:03.041 bw ( KiB/s): min=16384, max=16384, per=27.48%, avg=16384.00, stdev= 0.00, samples=2 00:11:03.041 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:11:03.041 lat (usec) : 1000=0.02% 00:11:03.041 lat (msec) : 2=0.21%, 4=2.35%, 10=15.60%, 20=58.51%, 50=21.88% 00:11:03.041 lat (msec) : 100=1.42% 00:11:03.041 cpu : usr=4.40%, sys=6.39%, ctx=291, majf=0, minf=1 00:11:03.041 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:03.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.041 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:03.041 issued rwts: total=4025,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:03.041 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:03.041 job2: (groupid=0, jobs=1): err= 0: pid=1289915: Sat Nov 2 14:26:54 2024 00:11:03.041 read: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec) 00:11:03.041 slat (usec): min=2, max=16190, avg=130.79, stdev=921.19 00:11:03.041 clat (usec): min=5165, max=41049, avg=16863.78, stdev=6332.50 00:11:03.041 lat (usec): min=5170, max=41065, avg=16994.56, stdev=6380.36 00:11:03.041 clat percentiles (usec): 00:11:03.041 | 1.00th=[ 6849], 5.00th=[10421], 10.00th=[11469], 20.00th=[12256], 00:11:03.041 | 30.00th=[12649], 40.00th=[13042], 50.00th=[14877], 60.00th=[16319], 00:11:03.041 | 70.00th=[19268], 80.00th=[21890], 90.00th=[24773], 95.00th=[32375], 00:11:03.041 | 99.00th=[36439], 99.50th=[39060], 99.90th=[41157], 99.95th=[41157], 00:11:03.041 | 99.99th=[41157] 00:11:03.041 write: IOPS=4232, BW=16.5MiB/s (17.3MB/s)(16.6MiB/1002msec); 0 zone resets 00:11:03.041 slat (usec): min=3, max=14659, avg=95.43, stdev=579.46 00:11:03.041 clat (usec): min=1205, max=41071, avg=13680.24, stdev=4457.89 00:11:03.041 lat (usec): min=1216, max=41093, avg=13775.67, stdev=4485.28 00:11:03.041 clat percentiles (usec): 00:11:03.041 | 1.00th=[ 4113], 5.00th=[ 6521], 10.00th=[ 7832], 20.00th=[10421], 00:11:03.041 | 30.00th=[11994], 40.00th=[13042], 50.00th=[13566], 60.00th=[13829], 00:11:03.041 | 70.00th=[14484], 80.00th=[17171], 90.00th=[19268], 95.00th=[21365], 00:11:03.041 | 99.00th=[27919], 99.50th=[28443], 99.90th=[30278], 99.95th=[35914], 00:11:03.041 | 99.99th=[41157] 00:11:03.041 bw ( KiB/s): min=15840, max=17072, per=27.60%, avg=16456.00, stdev=871.16, samples=2 00:11:03.041 iops : min= 3960, max= 4268, avg=4114.00, stdev=217.79, samples=2 00:11:03.042 lat (msec) : 2=0.10%, 4=0.31%, 10=9.82%, 20=71.79%, 50=17.98% 00:11:03.042 cpu : usr=4.40%, sys=9.59%, ctx=405, majf=0, minf=1 00:11:03.042 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:03.042 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.042 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:03.042 issued rwts: total=4096,4241,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:03.042 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:03.042 job3: (groupid=0, jobs=1): err= 
0: pid=1289916: Sat Nov 2 14:26:54 2024 00:11:03.042 read: IOPS=3679, BW=14.4MiB/s (15.1MB/s)(14.5MiB/1006msec) 00:11:03.042 slat (usec): min=2, max=15378, avg=116.67, stdev=830.45 00:11:03.042 clat (usec): min=1327, max=39991, avg=15784.20, stdev=5316.48 00:11:03.042 lat (usec): min=5205, max=40005, avg=15900.88, stdev=5369.70 00:11:03.042 clat percentiles (usec): 00:11:03.042 | 1.00th=[ 5932], 5.00th=[ 9503], 10.00th=[10421], 20.00th=[11731], 00:11:03.042 | 30.00th=[12911], 40.00th=[13435], 50.00th=[14484], 60.00th=[15664], 00:11:03.042 | 70.00th=[16909], 80.00th=[19530], 90.00th=[24511], 95.00th=[27395], 00:11:03.042 | 99.00th=[31851], 99.50th=[31851], 99.90th=[35390], 99.95th=[36963], 00:11:03.042 | 99.99th=[40109] 00:11:03.042 write: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec); 0 zone resets 00:11:03.042 slat (usec): min=4, max=40268, avg=128.17, stdev=1024.30 00:11:03.042 clat (usec): min=2222, max=46126, avg=15563.64, stdev=6908.47 00:11:03.042 lat (usec): min=2255, max=46144, avg=15691.81, stdev=6966.27 00:11:03.042 clat percentiles (usec): 00:11:03.042 | 1.00th=[ 6063], 5.00th=[ 7504], 10.00th=[ 7832], 20.00th=[ 9241], 00:11:03.042 | 30.00th=[11731], 40.00th=[12911], 50.00th=[13566], 60.00th=[15926], 00:11:03.042 | 70.00th=[17433], 80.00th=[20579], 90.00th=[26608], 95.00th=[29230], 00:11:03.042 | 99.00th=[35390], 99.50th=[35390], 99.90th=[35390], 99.95th=[42730], 00:11:03.042 | 99.99th=[45876] 00:11:03.042 bw ( KiB/s): min=15656, max=17032, per=27.42%, avg=16344.00, stdev=972.98, samples=2 00:11:03.042 iops : min= 3914, max= 4258, avg=4086.00, stdev=243.24, samples=2 00:11:03.042 lat (msec) : 2=0.01%, 4=0.03%, 10=16.21%, 20=62.37%, 50=21.38% 00:11:03.042 cpu : usr=5.07%, sys=9.55%, ctx=274, majf=0, minf=1 00:11:03.042 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:03.042 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.042 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:03.042 issued rwts: total=3702,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:03.042 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:03.042 00:11:03.042 Run status group 0 (all jobs): 00:11:03.042 READ: bw=54.6MiB/s (57.2MB/s), 8868KiB/s-16.0MiB/s (9080kB/s-16.7MB/s), io=54.9MiB (57.6MB), run=1002-1006msec 00:11:03.042 WRITE: bw=58.2MiB/s (61.0MB/s), 9.95MiB/s-16.5MiB/s (10.4MB/s-17.3MB/s), io=58.6MiB (61.4MB), run=1002-1006msec 00:11:03.042 00:11:03.042 Disk stats (read/write): 00:11:03.042 nvme0n1: ios=1842/2048, merge=0/0, ticks=30970/36227, in_queue=67197, util=97.19% 00:11:03.042 nvme0n2: ios=3596/3599, merge=0/0, ticks=31607/28505, in_queue=60112, util=85.66% 00:11:03.042 nvme0n3: ios=3584/3879, merge=0/0, ticks=53311/46749, in_queue=100060, util=88.90% 00:11:03.042 nvme0n4: ios=3184/3584, merge=0/0, ticks=43455/40048, in_queue=83503, util=97.68% 00:11:03.042 14:26:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:03.042 [global] 00:11:03.042 thread=1 00:11:03.042 invalidate=1 00:11:03.042 rw=randwrite 00:11:03.042 time_based=1 00:11:03.042 runtime=1 00:11:03.042 ioengine=libaio 00:11:03.042 direct=1 00:11:03.042 bs=4096 00:11:03.042 iodepth=128 00:11:03.042 norandommap=0 00:11:03.042 numjobs=1 00:11:03.042 00:11:03.042 verify_dump=1 00:11:03.042 verify_backlog=512 00:11:03.042 verify_state_save=0 00:11:03.042 do_verify=1 00:11:03.042 
verify=crc32c-intel 00:11:03.042 [job0] 00:11:03.042 filename=/dev/nvme0n1 00:11:03.042 [job1] 00:11:03.042 filename=/dev/nvme0n2 00:11:03.042 [job2] 00:11:03.042 filename=/dev/nvme0n3 00:11:03.042 [job3] 00:11:03.042 filename=/dev/nvme0n4 00:11:03.042 Could not set queue depth (nvme0n1) 00:11:03.042 Could not set queue depth (nvme0n2) 00:11:03.042 Could not set queue depth (nvme0n3) 00:11:03.042 Could not set queue depth (nvme0n4) 00:11:03.301 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:03.301 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:03.301 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:03.301 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:03.301 fio-3.35 00:11:03.301 Starting 4 threads 00:11:04.677 00:11:04.677 job0: (groupid=0, jobs=1): err= 0: pid=1290153: Sat Nov 2 14:26:56 2024 00:11:04.677 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:11:04.677 slat (usec): min=2, max=12524, avg=113.63, stdev=774.28 00:11:04.677 clat (usec): min=3499, max=42034, avg=14838.50, stdev=5291.32 00:11:04.677 lat (usec): min=3510, max=42038, avg=14952.12, stdev=5317.09 00:11:04.677 clat percentiles (usec): 00:11:04.677 | 1.00th=[ 5866], 5.00th=[ 7177], 10.00th=[ 8225], 20.00th=[11076], 00:11:04.677 | 30.00th=[11863], 40.00th=[13173], 50.00th=[13829], 60.00th=[15795], 00:11:04.677 | 70.00th=[16909], 80.00th=[19268], 90.00th=[20841], 95.00th=[23462], 00:11:04.677 | 99.00th=[36439], 99.50th=[36439], 99.90th=[42206], 99.95th=[42206], 00:11:04.677 | 99.99th=[42206] 00:11:04.677 write: IOPS=4093, BW=16.0MiB/s (16.8MB/s)(16.1MiB/1004msec); 0 zone resets 00:11:04.677 slat (usec): min=3, max=14981, avg=120.52, stdev=753.32 00:11:04.677 clat (usec): min=2231, max=54773, avg=16185.61, stdev=10814.43 00:11:04.677 lat (usec): min=2241, max=54793, avg=16306.13, stdev=10887.69 00:11:04.677 clat percentiles (usec): 00:11:04.677 | 1.00th=[ 4080], 5.00th=[ 7439], 10.00th=[ 8717], 20.00th=[10421], 00:11:04.677 | 30.00th=[11469], 40.00th=[11994], 50.00th=[12387], 60.00th=[13042], 00:11:04.677 | 70.00th=[14877], 80.00th=[17171], 90.00th=[40109], 95.00th=[45351], 00:11:04.677 | 99.00th=[50070], 99.50th=[51119], 99.90th=[54789], 99.95th=[54789], 00:11:04.677 | 99.99th=[54789] 00:11:04.677 bw ( KiB/s): min=12288, max=20480, per=28.12%, avg=16384.00, stdev=5792.62, samples=2 00:11:04.677 iops : min= 3072, max= 5120, avg=4096.00, stdev=1448.15, samples=2 00:11:04.677 lat (msec) : 4=0.44%, 10=15.33%, 20=69.40%, 50=14.38%, 100=0.45% 00:11:04.677 cpu : usr=3.59%, sys=6.38%, ctx=330, majf=0, minf=1 00:11:04.677 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:04.677 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:04.677 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:04.677 issued rwts: total=4096,4110,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:04.677 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:04.677 job1: (groupid=0, jobs=1): err= 0: pid=1290154: Sat Nov 2 14:26:56 2024 00:11:04.677 read: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec) 00:11:04.677 slat (usec): min=2, max=16414, avg=133.85, stdev=972.37 00:11:04.677 clat (usec): min=3912, max=39629, avg=18393.43, stdev=5701.21 00:11:04.678 lat (usec): min=7912, max=39671, 
avg=18527.28, stdev=5766.84 00:11:04.678 clat percentiles (usec): 00:11:04.678 | 1.00th=[ 7963], 5.00th=[11469], 10.00th=[12780], 20.00th=[13698], 00:11:04.678 | 30.00th=[14353], 40.00th=[15926], 50.00th=[17695], 60.00th=[18482], 00:11:04.678 | 70.00th=[20579], 80.00th=[22414], 90.00th=[26608], 95.00th=[28705], 00:11:04.678 | 99.00th=[37487], 99.50th=[37487], 99.90th=[39584], 99.95th=[39584], 00:11:04.678 | 99.99th=[39584] 00:11:04.678 write: IOPS=3454, BW=13.5MiB/s (14.1MB/s)(13.6MiB/1007msec); 0 zone resets 00:11:04.678 slat (usec): min=3, max=25616, avg=156.48, stdev=1003.56 00:11:04.678 clat (usec): min=922, max=54952, avg=20460.35, stdev=9853.24 00:11:04.678 lat (usec): min=932, max=54976, avg=20616.83, stdev=9922.65 00:11:04.678 clat percentiles (usec): 00:11:04.678 | 1.00th=[ 8029], 5.00th=[10159], 10.00th=[11076], 20.00th=[12518], 00:11:04.678 | 30.00th=[14353], 40.00th=[15401], 50.00th=[17695], 60.00th=[20317], 00:11:04.678 | 70.00th=[22676], 80.00th=[26084], 90.00th=[35390], 95.00th=[44827], 00:11:04.678 | 99.00th=[48497], 99.50th=[49546], 99.90th=[54789], 99.95th=[54789], 00:11:04.678 | 99.99th=[54789] 00:11:04.678 bw ( KiB/s): min=12360, max=14456, per=23.01%, avg=13408.00, stdev=1482.10, samples=2 00:11:04.678 iops : min= 3090, max= 3614, avg=3352.00, stdev=370.52, samples=2 00:11:04.678 lat (usec) : 1000=0.03% 00:11:04.678 lat (msec) : 4=0.26%, 10=3.14%, 20=58.11%, 50=38.24%, 100=0.21% 00:11:04.678 cpu : usr=3.38%, sys=6.76%, ctx=262, majf=0, minf=2 00:11:04.678 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:11:04.678 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:04.678 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:04.678 issued rwts: total=3072,3479,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:04.678 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:04.678 job2: (groupid=0, jobs=1): err= 0: pid=1290155: Sat Nov 2 14:26:56 2024 00:11:04.678 read: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec) 00:11:04.678 slat (usec): min=3, max=22808, avg=174.77, stdev=1234.90 00:11:04.678 clat (usec): min=7367, max=63316, avg=20555.75, stdev=12703.42 00:11:04.678 lat (usec): min=7760, max=63337, avg=20730.52, stdev=12802.35 00:11:04.678 clat percentiles (usec): 00:11:04.678 | 1.00th=[ 8455], 5.00th=[10814], 10.00th=[11469], 20.00th=[11994], 00:11:04.678 | 30.00th=[12387], 40.00th=[12518], 50.00th=[12780], 60.00th=[14091], 00:11:04.678 | 70.00th=[23987], 80.00th=[33162], 90.00th=[40633], 95.00th=[46924], 00:11:04.678 | 99.00th=[55837], 99.50th=[60556], 99.90th=[63177], 99.95th=[63177], 00:11:04.678 | 99.99th=[63177] 00:11:04.678 write: IOPS=3434, BW=13.4MiB/s (14.1MB/s)(13.5MiB/1006msec); 0 zone resets 00:11:04.678 slat (usec): min=4, max=16105, avg=123.51, stdev=745.89 00:11:04.678 clat (usec): min=5437, max=53604, avg=18569.89, stdev=11319.11 00:11:04.678 lat (usec): min=6121, max=53621, avg=18693.40, stdev=11362.53 00:11:04.678 clat percentiles (usec): 00:11:04.678 | 1.00th=[ 7504], 5.00th=[10814], 10.00th=[11469], 20.00th=[11731], 00:11:04.678 | 30.00th=[11994], 40.00th=[12649], 50.00th=[13042], 60.00th=[13698], 00:11:04.678 | 70.00th=[15926], 80.00th=[28705], 90.00th=[36439], 95.00th=[45351], 00:11:04.678 | 99.00th=[53216], 99.50th=[53216], 99.90th=[53740], 99.95th=[53740], 00:11:04.678 | 99.99th=[53740] 00:11:04.678 bw ( KiB/s): min=12288, max=14336, per=22.85%, avg=13312.00, stdev=1448.15, samples=2 00:11:04.678 iops : min= 3072, max= 3584, avg=3328.00, stdev=362.04, 
samples=2 00:11:04.678 lat (msec) : 10=3.72%, 20=66.63%, 50=26.60%, 100=3.05% 00:11:04.678 cpu : usr=4.88%, sys=7.06%, ctx=343, majf=0, minf=1 00:11:04.678 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:11:04.678 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:04.678 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:04.678 issued rwts: total=3072,3455,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:04.678 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:04.678 job3: (groupid=0, jobs=1): err= 0: pid=1290156: Sat Nov 2 14:26:56 2024 00:11:04.678 read: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec) 00:11:04.678 slat (usec): min=2, max=16694, avg=141.94, stdev=1033.42 00:11:04.678 clat (usec): min=3829, max=56196, avg=20150.14, stdev=9103.31 00:11:04.678 lat (usec): min=3833, max=56200, avg=20292.09, stdev=9169.13 00:11:04.678 clat percentiles (usec): 00:11:04.678 | 1.00th=[ 5800], 5.00th=[ 9372], 10.00th=[11600], 20.00th=[14353], 00:11:04.678 | 30.00th=[15401], 40.00th=[16319], 50.00th=[17433], 60.00th=[17957], 00:11:04.678 | 70.00th=[20841], 80.00th=[25822], 90.00th=[35390], 95.00th=[39584], 00:11:04.678 | 99.00th=[47973], 99.50th=[49546], 99.90th=[50070], 99.95th=[51119], 00:11:04.678 | 99.99th=[56361] 00:11:04.678 write: IOPS=3610, BW=14.1MiB/s (14.8MB/s)(14.2MiB/1004msec); 0 zone resets 00:11:04.678 slat (usec): min=3, max=12885, avg=105.37, stdev=820.66 00:11:04.678 clat (usec): min=438, max=50708, avg=15202.11, stdev=8146.40 00:11:04.678 lat (usec): min=481, max=50715, avg=15307.49, stdev=8173.11 00:11:04.678 clat percentiles (usec): 00:11:04.678 | 1.00th=[ 2606], 5.00th=[ 5604], 10.00th=[ 6652], 20.00th=[ 8225], 00:11:04.678 | 30.00th=[11994], 40.00th=[12911], 50.00th=[14222], 60.00th=[15139], 00:11:04.678 | 70.00th=[16450], 80.00th=[18744], 90.00th=[24249], 95.00th=[34866], 00:11:04.678 | 99.00th=[42730], 99.50th=[50594], 99.90th=[50594], 99.95th=[50594], 00:11:04.678 | 99.99th=[50594] 00:11:04.678 bw ( KiB/s): min=12352, max=16320, per=24.60%, avg=14336.00, stdev=2805.80, samples=2 00:11:04.678 iops : min= 3088, max= 4080, avg=3584.00, stdev=701.45, samples=2 00:11:04.678 lat (usec) : 500=0.01% 00:11:04.678 lat (msec) : 2=0.33%, 4=1.46%, 10=13.23%, 20=59.51%, 50=25.02% 00:11:04.678 lat (msec) : 100=0.43% 00:11:04.678 cpu : usr=3.09%, sys=5.18%, ctx=222, majf=0, minf=1 00:11:04.678 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:11:04.678 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:04.678 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:04.678 issued rwts: total=3584,3625,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:04.678 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:04.678 00:11:04.678 Run status group 0 (all jobs): 00:11:04.678 READ: bw=53.6MiB/s (56.2MB/s), 11.9MiB/s-15.9MiB/s (12.5MB/s-16.7MB/s), io=54.0MiB (56.6MB), run=1004-1007msec 00:11:04.678 WRITE: bw=56.9MiB/s (59.7MB/s), 13.4MiB/s-16.0MiB/s (14.1MB/s-16.8MB/s), io=57.3MiB (60.1MB), run=1004-1007msec 00:11:04.678 00:11:04.678 Disk stats (read/write): 00:11:04.678 nvme0n1: ios=3636/3871, merge=0/0, ticks=32510/31921, in_queue=64431, util=97.80% 00:11:04.678 nvme0n2: ios=2921/3072, merge=0/0, ticks=30106/31292, in_queue=61398, util=98.07% 00:11:04.678 nvme0n3: ios=2318/2560, merge=0/0, ticks=27000/24604, in_queue=51604, util=88.96% 00:11:04.678 nvme0n4: ios=3071/3072, merge=0/0, ticks=40977/35743, in_queue=76720, 
util=97.48% 00:11:04.678 14:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:04.678 14:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1290292 00:11:04.678 14:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:04.678 14:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:04.678 [global] 00:11:04.678 thread=1 00:11:04.678 invalidate=1 00:11:04.678 rw=read 00:11:04.678 time_based=1 00:11:04.678 runtime=10 00:11:04.678 ioengine=libaio 00:11:04.678 direct=1 00:11:04.678 bs=4096 00:11:04.678 iodepth=1 00:11:04.678 norandommap=1 00:11:04.678 numjobs=1 00:11:04.678 00:11:04.678 [job0] 00:11:04.678 filename=/dev/nvme0n1 00:11:04.678 [job1] 00:11:04.678 filename=/dev/nvme0n2 00:11:04.678 [job2] 00:11:04.678 filename=/dev/nvme0n3 00:11:04.678 [job3] 00:11:04.678 filename=/dev/nvme0n4 00:11:04.678 Could not set queue depth (nvme0n1) 00:11:04.678 Could not set queue depth (nvme0n2) 00:11:04.678 Could not set queue depth (nvme0n3) 00:11:04.678 Could not set queue depth (nvme0n4) 00:11:04.678 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:04.678 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:04.678 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:04.678 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:04.678 fio-3.35 00:11:04.678 Starting 4 threads 00:11:07.958 14:26:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:07.958 14:26:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:07.958 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=294912, buflen=4096 00:11:07.958 fio: pid=1290383, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:07.958 14:26:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:07.958 14:26:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:07.958 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=35446784, buflen=4096 00:11:07.958 fio: pid=1290382, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:08.524 14:27:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:08.524 14:27:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:08.524 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=405504, buflen=4096 00:11:08.524 fio: pid=1290380, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:08.782 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=540672, buflen=4096 00:11:08.782 fio: pid=1290381, 
err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:08.782 14:27:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:08.782 14:27:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:08.782 00:11:08.782 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1290380: Sat Nov 2 14:27:00 2024 00:11:08.782 read: IOPS=28, BW=112KiB/s (115kB/s)(396KiB/3536msec) 00:11:08.782 slat (usec): min=6, max=6832, avg=108.03, stdev=703.85 00:11:08.782 clat (usec): min=315, max=42095, avg=35317.06, stdev=14139.08 00:11:08.782 lat (usec): min=330, max=48928, avg=35426.02, stdev=14200.14 00:11:08.782 clat percentiles (usec): 00:11:08.782 | 1.00th=[ 314], 5.00th=[ 367], 10.00th=[ 494], 20.00th=[41157], 00:11:08.782 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:08.782 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:08.782 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:08.782 | 99.99th=[42206] 00:11:08.782 bw ( KiB/s): min= 96, max= 104, per=1.04%, avg=98.67, stdev= 4.13, samples=6 00:11:08.782 iops : min= 24, max= 26, avg=24.67, stdev= 1.03, samples=6 00:11:08.782 lat (usec) : 500=10.00%, 750=3.00% 00:11:08.782 lat (msec) : 4=1.00%, 50=85.00% 00:11:08.782 cpu : usr=0.11%, sys=0.00%, ctx=102, majf=0, minf=1 00:11:08.782 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:08.782 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.782 complete : 0=1.0%, 4=99.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.782 issued rwts: total=100,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:08.782 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:08.782 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1290381: Sat Nov 2 14:27:00 2024 00:11:08.782 read: IOPS=34, BW=139KiB/s (142kB/s)(528KiB/3805msec) 00:11:08.782 slat (usec): min=9, max=9841, avg=215.33, stdev=1279.74 00:11:08.782 clat (usec): min=352, max=42301, avg=28485.18, stdev=18872.25 00:11:08.782 lat (usec): min=370, max=52021, avg=28702.02, stdev=19055.26 00:11:08.782 clat percentiles (usec): 00:11:08.782 | 1.00th=[ 379], 5.00th=[ 404], 10.00th=[ 437], 20.00th=[ 510], 00:11:08.782 | 30.00th=[ 619], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:08.782 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:11:08.782 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:08.782 | 99.99th=[42206] 00:11:08.782 bw ( KiB/s): min= 96, max= 248, per=1.48%, avg=139.43, stdev=49.93, samples=7 00:11:08.782 iops : min= 24, max= 62, avg=34.86, stdev=12.48, samples=7 00:11:08.782 lat (usec) : 500=18.05%, 750=12.03%, 1000=0.75% 00:11:08.782 lat (msec) : 50=68.42% 00:11:08.782 cpu : usr=0.18%, sys=0.00%, ctx=137, majf=0, minf=1 00:11:08.782 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:08.782 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.782 complete : 0=0.7%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.782 issued rwts: total=133,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:08.782 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:08.782 
job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1290382: Sat Nov 2 14:27:00 2024 00:11:08.782 read: IOPS=2697, BW=10.5MiB/s (11.0MB/s)(33.8MiB/3208msec) 00:11:08.782 slat (nsec): min=4456, max=69321, avg=17679.33, stdev=10922.32 00:11:08.782 clat (usec): min=251, max=3593, avg=346.07, stdev=69.64 00:11:08.782 lat (usec): min=258, max=3606, avg=363.75, stdev=75.09 00:11:08.782 clat percentiles (usec): 00:11:08.782 | 1.00th=[ 262], 5.00th=[ 273], 10.00th=[ 281], 20.00th=[ 293], 00:11:08.782 | 30.00th=[ 306], 40.00th=[ 314], 50.00th=[ 330], 60.00th=[ 355], 00:11:08.782 | 70.00th=[ 379], 80.00th=[ 392], 90.00th=[ 424], 95.00th=[ 461], 00:11:08.782 | 99.00th=[ 529], 99.50th=[ 537], 99.90th=[ 586], 99.95th=[ 611], 00:11:08.782 | 99.99th=[ 3589] 00:11:08.782 bw ( KiB/s): min=10104, max=11816, per=100.00%, avg=10920.00, stdev=593.19, samples=6 00:11:08.782 iops : min= 2526, max= 2954, avg=2730.00, stdev=148.30, samples=6 00:11:08.782 lat (usec) : 500=97.23%, 750=2.74%, 1000=0.01% 00:11:08.782 lat (msec) : 4=0.01% 00:11:08.782 cpu : usr=2.06%, sys=5.43%, ctx=8655, majf=0, minf=2 00:11:08.782 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:08.782 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.782 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.782 issued rwts: total=8655,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:08.782 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:08.782 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1290383: Sat Nov 2 14:27:00 2024 00:11:08.782 read: IOPS=24, BW=98.0KiB/s (100kB/s)(288KiB/2938msec) 00:11:08.782 slat (nsec): min=13422, max=47685, avg=24430.38, stdev=9210.66 00:11:08.782 clat (usec): min=540, max=42000, avg=40451.12, stdev=4774.28 00:11:08.782 lat (usec): min=570, max=42015, avg=40475.70, stdev=4773.59 00:11:08.782 clat percentiles (usec): 00:11:08.782 | 1.00th=[ 537], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:11:08.782 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:08.782 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:08.782 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:08.782 | 99.99th=[42206] 00:11:08.782 bw ( KiB/s): min= 96, max= 104, per=1.05%, avg=99.20, stdev= 4.38, samples=5 00:11:08.782 iops : min= 24, max= 26, avg=24.80, stdev= 1.10, samples=5 00:11:08.782 lat (usec) : 750=1.37% 00:11:08.782 lat (msec) : 50=97.26% 00:11:08.782 cpu : usr=0.10%, sys=0.00%, ctx=73, majf=0, minf=2 00:11:08.782 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:08.782 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.782 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.782 issued rwts: total=73,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:08.782 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:08.782 00:11:08.782 Run status group 0 (all jobs): 00:11:08.782 READ: bw=9416KiB/s (9642kB/s), 98.0KiB/s-10.5MiB/s (100kB/s-11.0MB/s), io=35.0MiB (36.7MB), run=2938-3805msec 00:11:08.782 00:11:08.782 Disk stats (read/write): 00:11:08.782 nvme0n1: ios=95/0, merge=0/0, ticks=3335/0, in_queue=3335, util=95.85% 00:11:08.782 nvme0n2: ios=166/0, merge=0/0, ticks=4658/0, in_queue=4658, util=99.20% 00:11:08.782 nvme0n3: ios=8453/0, merge=0/0, ticks=2827/0, 
in_queue=2827, util=96.79% 00:11:08.782 nvme0n4: ios=70/0, merge=0/0, ticks=2833/0, in_queue=2833, util=96.75% 00:11:09.040 14:27:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:09.040 14:27:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:09.298 14:27:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:09.298 14:27:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:09.556 14:27:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:09.556 14:27:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:09.815 14:27:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:09.815 14:27:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:10.072 14:27:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:10.072 14:27:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1290292 00:11:10.072 14:27:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:10.072 14:27:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:10.330 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.330 14:27:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:10.330 14:27:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:11:10.330 14:27:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:10.330 14:27:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:10.330 14:27:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:10.330 14:27:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:10.330 14:27:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:11:10.330 14:27:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:10.330 14:27:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:10.330 nvmf hotplug test: fio failed as expected 00:11:10.330 14:27:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:10.589 14:27:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:10.589 14:27:02 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:10.589 14:27:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:10.589 14:27:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:10.589 14:27:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:10.589 14:27:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:10.589 14:27:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:10.589 14:27:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:10.589 14:27:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:11:10.589 14:27:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:10.589 14:27:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:10.589 rmmod nvme_tcp 00:11:10.589 rmmod nvme_fabrics 00:11:10.589 rmmod nvme_keyring 00:11:10.589 14:27:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:10.589 14:27:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:10.589 14:27:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:11:10.589 14:27:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@513 -- # '[' -n 1288244 ']' 00:11:10.589 14:27:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # killprocess 1288244 00:11:10.589 14:27:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 1288244 ']' 00:11:10.589 14:27:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 1288244 00:11:10.589 14:27:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:11:10.589 14:27:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:10.589 14:27:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1288244 00:11:10.589 14:27:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:10.589 14:27:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:10.589 14:27:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1288244' 00:11:10.589 killing process with pid 1288244 00:11:10.589 14:27:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 1288244 00:11:10.589 14:27:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 1288244 00:11:10.848 14:27:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:10.848 14:27:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:11:10.848 14:27:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:11:10.848 14:27:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:11:10.848 14:27:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-save 00:11:10.848 14:27:02 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:11:10.848 14:27:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-restore 00:11:10.848 14:27:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:10.848 14:27:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:10.848 14:27:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:10.848 14:27:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:10.848 14:27:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:13.380 14:27:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:13.380 00:11:13.380 real 0m24.394s 00:11:13.380 user 1m25.527s 00:11:13.380 sys 0m6.553s 00:11:13.380 14:27:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:13.380 14:27:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.380 ************************************ 00:11:13.380 END TEST nvmf_fio_target 00:11:13.380 ************************************ 00:11:13.380 14:27:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:13.380 14:27:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:13.380 14:27:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:13.380 14:27:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:13.380 ************************************ 00:11:13.380 START TEST nvmf_bdevio 00:11:13.380 ************************************ 00:11:13.380 14:27:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:13.380 * Looking for test storage... 
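Condensed, the fio-target teardown traced above comes down to a handful of RPC and nvme-cli calls. This is a sketch, not part of the captured trace; the bdev names, NQN and workspace path are the ones from this log and will differ on other runs:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for m in Malloc3 Malloc4 Malloc5 Malloc6; do
        $RPC bdev_malloc_delete "$m"                       # drop the fio data bdevs
    done
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1          # detach the kernel initiator
    $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1  # remove the subsystem
    rm -f ./local-job*-verify.state                        # fio verify state files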
00:11:13.380 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:13.380 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:13.380 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:11:13.380 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:13.380 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:13.380 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:13.380 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:13.380 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:13.380 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:13.380 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:13.380 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:13.380 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:13.380 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:13.380 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:11:13.380 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:13.380 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:13.380 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:11:13.380 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:13.380 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:13.380 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:13.380 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:13.380 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:13.380 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:13.380 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:13.380 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:13.380 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:13.380 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:13.380 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:13.380 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:13.380 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:13.380 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:13.380 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:13.380 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:11:13.380 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:13.380 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:13.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.380 --rc genhtml_branch_coverage=1 00:11:13.380 --rc genhtml_function_coverage=1 00:11:13.380 --rc genhtml_legend=1 00:11:13.380 --rc geninfo_all_blocks=1 00:11:13.380 --rc geninfo_unexecuted_blocks=1 00:11:13.380 00:11:13.380 ' 00:11:13.380 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:13.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.380 --rc genhtml_branch_coverage=1 00:11:13.380 --rc genhtml_function_coverage=1 00:11:13.380 --rc genhtml_legend=1 00:11:13.380 --rc geninfo_all_blocks=1 00:11:13.380 --rc geninfo_unexecuted_blocks=1 00:11:13.380 00:11:13.380 ' 00:11:13.380 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:13.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.380 --rc genhtml_branch_coverage=1 00:11:13.380 --rc genhtml_function_coverage=1 00:11:13.380 --rc genhtml_legend=1 00:11:13.380 --rc geninfo_all_blocks=1 00:11:13.380 --rc geninfo_unexecuted_blocks=1 00:11:13.380 00:11:13.380 ' 00:11:13.380 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:13.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.380 --rc genhtml_branch_coverage=1 00:11:13.380 --rc genhtml_function_coverage=1 00:11:13.380 --rc genhtml_legend=1 00:11:13.380 --rc geninfo_all_blocks=1 00:11:13.380 --rc geninfo_unexecuted_blocks=1 00:11:13.380 00:11:13.380 ' 00:11:13.380 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:13.380 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:13.380 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:13.380 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:13.380 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:13.380 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:13.380 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:13.380 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:13.380 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:13.380 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:13.380 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:13.380 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:13.380 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:13.380 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:13.380 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:13.380 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:13.380 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:13.380 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:13.380 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:13.380 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:11:13.380 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:13.380 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:13.380 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:13.380 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.380 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.380 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.380 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:13.381 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.381 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:13.381 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:13.381 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:13.381 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:13.381 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:13.381 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:13.381 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:13.381 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:13.381 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:13.381 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:13.381 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:13.381 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:13.381 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:13.381 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:11:13.381 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:13.381 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:13.381 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:13.381 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:13.381 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:13.381 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:13.381 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:13.381 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:13.381 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:11:13.381 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:11:13.381 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:11:13.381 14:27:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:15.283 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:15.283 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:11:15.283 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:15.283 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:15.283 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:15.283 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:15.283 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:15.283 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:11:15.283 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:15.283 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:11:15.283 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:11:15.283 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:11:15.283 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:11:15.283 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:11:15.283 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:11:15.283 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:15.283 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:15.283 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:15.283 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:15.283 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:15.283 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:15.283 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:15.283 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:15.283 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:15.283 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:15.284 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:15.284 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ up == up ]] 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:15.284 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ up == up ]] 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:15.284 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # is_hw=yes 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:15.284 14:27:07 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:15.284 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:15.284 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:11:15.284 00:11:15.284 --- 10.0.0.2 ping statistics --- 00:11:15.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:15.284 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:15.284 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
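The NVMe/TCP loopback topology that these pings verify is built entirely from the commands traced in this block. A minimal sketch, assuming the same two e810 ports (cvl_0_0, cvl_0_1) and the 10.0.0.0/24 addressing used in this run:

    ip netns add cvl_0_0_ns_spdk                                        # target side gets its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, host namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP through
    ping -c 1 10.0.0.2                                                  # host -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target namespace -> host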
00:11:15.284 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:11:15.284 00:11:15.284 --- 10.0.0.1 ping statistics --- 00:11:15.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:15.284 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # return 0 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:11:15.284 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:11:15.285 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:15.285 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:15.285 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:15.285 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:15.285 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # nvmfpid=1293393 00:11:15.285 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:15.285 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # waitforlisten 1293393 00:11:15.285 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 1293393 ']' 00:11:15.285 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:15.285 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:15.285 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:15.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:15.285 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:15.285 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:15.285 [2024-11-02 14:27:07.326547] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:11:15.285 [2024-11-02 14:27:07.326627] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:15.543 [2024-11-02 14:27:07.395918] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:15.543 [2024-11-02 14:27:07.493529] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:15.543 [2024-11-02 14:27:07.493600] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:15.543 [2024-11-02 14:27:07.493629] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:15.543 [2024-11-02 14:27:07.493642] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:15.543 [2024-11-02 14:27:07.493652] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:15.543 [2024-11-02 14:27:07.493747] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:11:15.543 [2024-11-02 14:27:07.493816] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:11:15.543 [2024-11-02 14:27:07.493818] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:11:15.543 [2024-11-02 14:27:07.493784] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:11:15.801 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:15.801 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:11:15.801 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:15.801 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:15.801 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:15.801 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:15.802 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:15.802 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.802 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:15.802 [2024-11-02 14:27:07.657753] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:15.802 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.802 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:15.802 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.802 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:15.802 Malloc0 00:11:15.802 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.802 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:15.802 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.802 14:27:07 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:15.802 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.802 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:15.802 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.802 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:15.802 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.802 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:15.802 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.802 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:15.802 [2024-11-02 14:27:07.711359] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:15.802 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.802 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:15.802 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:15.802 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # config=() 00:11:15.802 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # local subsystem config 00:11:15.802 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:11:15.802 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:11:15.802 { 00:11:15.802 "params": { 00:11:15.802 "name": "Nvme$subsystem", 00:11:15.802 "trtype": "$TEST_TRANSPORT", 00:11:15.802 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:15.802 "adrfam": "ipv4", 00:11:15.802 "trsvcid": "$NVMF_PORT", 00:11:15.802 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:15.802 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:15.802 "hdgst": ${hdgst:-false}, 00:11:15.802 "ddgst": ${ddgst:-false} 00:11:15.802 }, 00:11:15.802 "method": "bdev_nvme_attach_controller" 00:11:15.802 } 00:11:15.802 EOF 00:11:15.802 )") 00:11:15.802 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@578 -- # cat 00:11:15.802 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # jq . 00:11:15.802 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@581 -- # IFS=, 00:11:15.802 14:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:11:15.802 "params": { 00:11:15.802 "name": "Nvme1", 00:11:15.802 "trtype": "tcp", 00:11:15.802 "traddr": "10.0.0.2", 00:11:15.802 "adrfam": "ipv4", 00:11:15.802 "trsvcid": "4420", 00:11:15.802 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:15.802 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:15.802 "hdgst": false, 00:11:15.802 "ddgst": false 00:11:15.802 }, 00:11:15.802 "method": "bdev_nvme_attach_controller" 00:11:15.802 }' 00:11:15.802 [2024-11-02 14:27:07.758484] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
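Strung together, the rpc_cmd calls traced above are all that is needed to stand up the TCP target that bdevio then exercises. A sketch using the sizes, NQN, address and port from this log; rpc.py talks to the nvmf_tgt started earlier on /var/tmp/spdk.sock:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192                        # options exactly as traced above
    $RPC bdev_malloc_create 64 512 -b Malloc0                           # 64 MiB bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevio then connects back in as an initiator using the bdev_nvme_attach_controller JSON printed above (name Nvme1, trtype tcp, traddr 10.0.0.2, trsvcid 4420), which is how Nvme1n1 shows up in the I/O targets list that follows.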
00:11:15.802 [2024-11-02 14:27:07.758586] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1293524 ] 00:11:15.802 [2024-11-02 14:27:07.819653] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:16.060 [2024-11-02 14:27:07.912474] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:16.060 [2024-11-02 14:27:07.912523] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:16.060 [2024-11-02 14:27:07.912526] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.317 I/O targets: 00:11:16.317 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:16.317 00:11:16.317 00:11:16.317 CUnit - A unit testing framework for C - Version 2.1-3 00:11:16.317 http://cunit.sourceforge.net/ 00:11:16.317 00:11:16.317 00:11:16.317 Suite: bdevio tests on: Nvme1n1 00:11:16.317 Test: blockdev write read block ...passed 00:11:16.317 Test: blockdev write zeroes read block ...passed 00:11:16.317 Test: blockdev write zeroes read no split ...passed 00:11:16.317 Test: blockdev write zeroes read split ...passed 00:11:16.575 Test: blockdev write zeroes read split partial ...passed 00:11:16.575 Test: blockdev reset ...[2024-11-02 14:27:08.420190] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:16.575 [2024-11-02 14:27:08.420321] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1213e90 (9): Bad file descriptor 00:11:16.575 [2024-11-02 14:27:08.564079] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:11:16.575 passed 00:11:16.575 Test: blockdev write read 8 blocks ...passed 00:11:16.575 Test: blockdev write read size > 128k ...passed 00:11:16.575 Test: blockdev write read invalid size ...passed 00:11:16.833 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:16.833 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:16.833 Test: blockdev write read max offset ...passed 00:11:16.833 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:16.833 Test: blockdev writev readv 8 blocks ...passed 00:11:16.833 Test: blockdev writev readv 30 x 1block ...passed 00:11:16.833 Test: blockdev writev readv block ...passed 00:11:16.833 Test: blockdev writev readv size > 128k ...passed 00:11:16.833 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:16.833 Test: blockdev comparev and writev ...[2024-11-02 14:27:08.864711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:16.833 [2024-11-02 14:27:08.864748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:16.833 [2024-11-02 14:27:08.864773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:16.833 [2024-11-02 14:27:08.864789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:16.833 [2024-11-02 14:27:08.865167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:16.833 [2024-11-02 14:27:08.865190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:16.833 [2024-11-02 14:27:08.865211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:16.833 [2024-11-02 14:27:08.865227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:16.833 [2024-11-02 14:27:08.865604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:16.833 [2024-11-02 14:27:08.865628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:16.833 [2024-11-02 14:27:08.865650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:16.833 [2024-11-02 14:27:08.865666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:16.833 [2024-11-02 14:27:08.866042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:16.833 [2024-11-02 14:27:08.866074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:16.833 [2024-11-02 14:27:08.866096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:16.833 [2024-11-02 14:27:08.866111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:17.092 passed 00:11:17.092 Test: blockdev nvme passthru rw ...passed 00:11:17.092 Test: blockdev nvme passthru vendor specific ...[2024-11-02 14:27:08.949700] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:17.092 [2024-11-02 14:27:08.949773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:17.092 [2024-11-02 14:27:08.950001] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:17.092 [2024-11-02 14:27:08.950025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:17.092 [2024-11-02 14:27:08.950201] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:17.092 [2024-11-02 14:27:08.950224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:17.092 [2024-11-02 14:27:08.950412] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:17.092 [2024-11-02 14:27:08.950435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:17.092 passed 00:11:17.092 Test: blockdev nvme admin passthru ...passed 00:11:17.092 Test: blockdev copy ...passed 00:11:17.092 00:11:17.092 Run Summary: Type Total Ran Passed Failed Inactive 00:11:17.092 suites 1 1 n/a 0 0 00:11:17.092 tests 23 23 23 0 0 00:11:17.092 asserts 152 152 152 0 n/a 00:11:17.092 00:11:17.092 Elapsed time = 1.576 seconds 00:11:17.350 14:27:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:17.350 14:27:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.350 14:27:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:17.350 14:27:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.350 14:27:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:17.350 14:27:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:17.350 14:27:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:17.350 14:27:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:17.350 14:27:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:17.350 14:27:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:17.350 14:27:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:17.350 14:27:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:17.350 rmmod nvme_tcp 00:11:17.350 rmmod nvme_fabrics 00:11:17.350 rmmod nvme_keyring 00:11:17.350 14:27:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:17.350 14:27:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:17.350 14:27:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
00:11:17.350 14:27:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@513 -- # '[' -n 1293393 ']' 00:11:17.350 14:27:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # killprocess 1293393 00:11:17.350 14:27:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 1293393 ']' 00:11:17.350 14:27:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 1293393 00:11:17.350 14:27:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:11:17.350 14:27:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:17.350 14:27:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1293393 00:11:17.350 14:27:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:11:17.350 14:27:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:11:17.350 14:27:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1293393' 00:11:17.350 killing process with pid 1293393 00:11:17.350 14:27:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 1293393 00:11:17.350 14:27:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 1293393 00:11:17.608 14:27:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:17.608 14:27:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:11:17.608 14:27:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:11:17.608 14:27:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:11:17.608 14:27:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-save 00:11:17.608 14:27:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:11:17.608 14:27:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-restore 00:11:17.608 14:27:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:17.608 14:27:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:17.608 14:27:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:17.608 14:27:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:17.608 14:27:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:20.143 14:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:20.143 00:11:20.143 real 0m6.640s 00:11:20.143 user 0m11.728s 00:11:20.143 sys 0m2.084s 00:11:20.143 14:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:20.143 14:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:20.143 ************************************ 00:11:20.143 END TEST nvmf_bdevio 00:11:20.143 ************************************ 00:11:20.143 14:27:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:20.143 00:11:20.143 real 3m56.619s 00:11:20.143 user 10m20.805s 00:11:20.143 sys 1m6.345s 
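The nvmftestfini teardown traced above undoes the setup in reverse order. A rough sketch of the equivalent manual steps; the pid (1293393) and interface names are specific to this run, and _remove_spdk_ns is assumed to boil down to deleting the namespace created earlier:

    modprobe -v -r nvme-tcp                               # also drops nvme_fabrics / nvme_keyring, as the rmmod lines above show
    modprobe -v -r nvme-fabrics
    kill 1293393                                          # stop the nvmf_tgt app (killprocess)
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # strip only the SPDK accept rule
    ip netns delete cvl_0_0_ns_spdk                       # _remove_spdk_ns (assumed)
    ip -4 addr flush cvl_0_1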
00:11:20.143 14:27:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:20.143 14:27:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:20.143 ************************************ 00:11:20.143 END TEST nvmf_target_core 00:11:20.143 ************************************ 00:11:20.143 14:27:11 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:20.143 14:27:11 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:20.143 14:27:11 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:20.143 14:27:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:20.143 ************************************ 00:11:20.143 START TEST nvmf_target_extra 00:11:20.143 ************************************ 00:11:20.143 14:27:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:20.143 * Looking for test storage... 00:11:20.143 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:20.143 14:27:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:20.143 14:27:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lcov --version 00:11:20.143 14:27:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:20.143 14:27:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:20.143 14:27:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:20.143 14:27:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:20.143 14:27:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:20.143 14:27:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:20.143 14:27:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:20.143 14:27:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:20.143 14:27:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:20.143 14:27:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:20.143 14:27:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:20.143 14:27:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:20.143 14:27:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:20.143 14:27:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:11:20.143 14:27:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:20.143 14:27:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:20.143 14:27:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:20.143 14:27:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:20.143 14:27:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:20.143 14:27:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:20.143 14:27:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:20.143 14:27:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:20.143 14:27:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:20.143 14:27:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:20.143 14:27:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:20.143 14:27:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:20.143 14:27:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:20.143 14:27:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:20.143 14:27:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:20.143 14:27:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:20.143 14:27:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:20.143 14:27:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:20.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.143 --rc genhtml_branch_coverage=1 00:11:20.143 --rc genhtml_function_coverage=1 00:11:20.143 --rc genhtml_legend=1 00:11:20.143 --rc geninfo_all_blocks=1 00:11:20.143 --rc geninfo_unexecuted_blocks=1 00:11:20.143 00:11:20.143 ' 00:11:20.143 14:27:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:20.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.143 --rc genhtml_branch_coverage=1 00:11:20.143 --rc genhtml_function_coverage=1 00:11:20.143 --rc genhtml_legend=1 00:11:20.143 --rc geninfo_all_blocks=1 00:11:20.144 --rc geninfo_unexecuted_blocks=1 00:11:20.144 00:11:20.144 ' 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:20.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.144 --rc genhtml_branch_coverage=1 00:11:20.144 --rc genhtml_function_coverage=1 00:11:20.144 --rc genhtml_legend=1 00:11:20.144 --rc geninfo_all_blocks=1 00:11:20.144 --rc geninfo_unexecuted_blocks=1 00:11:20.144 00:11:20.144 ' 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:20.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.144 --rc genhtml_branch_coverage=1 00:11:20.144 --rc genhtml_function_coverage=1 00:11:20.144 --rc genhtml_legend=1 00:11:20.144 --rc geninfo_all_blocks=1 00:11:20.144 --rc geninfo_unexecuted_blocks=1 00:11:20.144 00:11:20.144 ' 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:20.144 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:20.144 ************************************ 00:11:20.144 START TEST nvmf_example 00:11:20.144 ************************************ 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:20.144 * Looking for test storage... 
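The "test/nvmf/common.sh: line 33: [: : integer expression expected" message above comes from a numeric test that received an empty string: the traced command is '[' '' -eq 1 ']', which the [ builtin cannot evaluate as integers, so it prints the warning and returns non-zero; the condition behaves as false and the run simply continues to the next check, as the surrounding lines show. A small standalone illustration follows; MY_FLAG is a hypothetical name, not the variable common.sh actually tests.

    # MY_FLAG stands in for whichever variable common.sh tests at line 33;
    # the name is hypothetical and used only for this illustration.
    MY_FLAG=""
    # With an empty value, '[' cannot do a numeric compare: it prints the
    # "integer expression expected" warning, returns non-zero, and the branch
    # below is simply skipped.
    if [ "$MY_FLAG" -eq 1 ]; then
        echo "flag enabled"
    fi
    # Defaulting the empty value to 0 keeps the comparison numeric and quiet.
    if [ "${MY_FLAG:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi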
00:11:20.144 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # lcov --version 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:20.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.144 --rc genhtml_branch_coverage=1 00:11:20.144 --rc genhtml_function_coverage=1 00:11:20.144 --rc genhtml_legend=1 00:11:20.144 --rc geninfo_all_blocks=1 00:11:20.144 --rc geninfo_unexecuted_blocks=1 00:11:20.144 00:11:20.144 ' 00:11:20.144 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:20.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.145 --rc genhtml_branch_coverage=1 00:11:20.145 --rc genhtml_function_coverage=1 00:11:20.145 --rc genhtml_legend=1 00:11:20.145 --rc geninfo_all_blocks=1 00:11:20.145 --rc geninfo_unexecuted_blocks=1 00:11:20.145 00:11:20.145 ' 00:11:20.145 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:20.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.145 --rc genhtml_branch_coverage=1 00:11:20.145 --rc genhtml_function_coverage=1 00:11:20.145 --rc genhtml_legend=1 00:11:20.145 --rc geninfo_all_blocks=1 00:11:20.145 --rc geninfo_unexecuted_blocks=1 00:11:20.145 00:11:20.145 ' 00:11:20.145 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:20.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.145 --rc genhtml_branch_coverage=1 00:11:20.145 --rc genhtml_function_coverage=1 00:11:20.145 --rc genhtml_legend=1 00:11:20.145 --rc geninfo_all_blocks=1 00:11:20.145 --rc geninfo_unexecuted_blocks=1 00:11:20.145 00:11:20.145 ' 00:11:20.145 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:20.145 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:20.145 14:27:11 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:20.145 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:20.145 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:20.145 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:20.145 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:20.145 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:20.145 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:20.145 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:20.145 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:20.145 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:20.145 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:20.145 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:20.145 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:20.145 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:20.145 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:20.145 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:20.145 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:20.145 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:11:20.145 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:20.145 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:20.145 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:20.145 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.145 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.145 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.145 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:20.145 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.145 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:11:20.145 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:20.145 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:20.145 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:20.145 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:20.145 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:20.145 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:20.145 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:20.145 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:20.145 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:20.145 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:20.145 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:20.145 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:20.145 14:27:11 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:20.145 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:20.145 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:20.145 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:20.145 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:20.145 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:20.145 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:20.145 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:20.145 14:27:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:20.145 14:27:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:20.145 14:27:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:20.145 14:27:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:20.145 14:27:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:20.145 14:27:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:20.145 14:27:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:20.145 14:27:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:20.145 14:27:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:20.145 14:27:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:11:20.145 14:27:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:11:20.145 14:27:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:11:20.145 14:27:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:22.132 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:22.132 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:11:22.132 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:22.132 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:22.132 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:22.132 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:22.132 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:22.132 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:11:22.132 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:22.132 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:11:22.132 14:27:14 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:11:22.132 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:11:22.132 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:11:22.132 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:11:22.132 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:11:22.132 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:22.132 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:22.132 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:22.132 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:22.132 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:22.132 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:22.133 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:22.133 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:22.133 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:22.133 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:22.133 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:22.133 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:11:22.133 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:11:22.133 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:11:22.133 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:11:22.133 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:11:22.133 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:11:22.133 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:22.133 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:22.133 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:22.133 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:11:22.133 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:11:22.133 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:22.133 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:22.133 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:11:22.133 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@364 -- # for pci in 
"${pci_devs[@]}" 00:11:22.133 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:22.133 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:22.133 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:11:22.133 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:11:22.133 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:22.133 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:22.133 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:11:22.133 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:11:22.133 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:11:22.133 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:11:22.133 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:22.133 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:22.133 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:22.133 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:22.133 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ up == up ]] 00:11:22.133 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:22.133 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:22.133 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:22.133 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:22.133 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:22.133 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:22.133 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:22.133 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:22.133 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:22.133 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ up == up ]] 00:11:22.133 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:22.133 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:22.133 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:22.133 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:22.133 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:22.133 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:11:22.133 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 
-- # is_hw=yes 00:11:22.133 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:11:22.133 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:11:22.133 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:11:22.133 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:22.133 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:22.133 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:22.133 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:22.133 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:22.133 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:22.133 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:22.133 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:22.133 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:22.133 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:22.133 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:22.133 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:22.133 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:22.133 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:22.133 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:22.394 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:22.394 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:22.394 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:22.394 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:22.394 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:22.394 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:22.394 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:22.394 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:22.394 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
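The nvmf_tcp_init trace above wires up the point-to-point TCP test bed for the example target: the first ice port (cvl_0_0) is moved into the network namespace cvl_0_0_ns_spdk and addressed as 10.0.0.2 (the target side), the second port (cvl_0_1) stays in the root namespace as 10.0.0.1 (the initiator side), the NVMe/TCP listener port 4420 is opened in iptables, and the ping exchanges that follow confirm both directions before the target application is launched. Below is a condensed recap of that wiring using the same interface and namespace names as the log; it must run as root, and the address-flush and iptables-comment details of the original are omitted.

    # Condensed recap of the namespace setup traced above (same names as the log).
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move the target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # NVMe/TCP listener port
    ping -c 1 10.0.0.2                                   # root namespace -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> initiator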
00:11:22.394 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.302 ms 00:11:22.394 00:11:22.394 --- 10.0.0.2 ping statistics --- 00:11:22.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:22.394 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:11:22.394 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:22.394 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:22.394 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:11:22.394 00:11:22.394 --- 10.0.0.1 ping statistics --- 00:11:22.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:22.394 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:11:22.394 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:22.394 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # return 0 00:11:22.394 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:22.394 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:22.394 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:11:22.394 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:11:22.394 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:22.394 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:11:22.394 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:11:22.394 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:22.394 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:22.394 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:22.394 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:22.394 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:22.394 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:22.394 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1296047 00:11:22.394 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:22.394 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:22.394 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1296047 00:11:22.394 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 1296047 ']' 00:11:22.394 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:22.394 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:22.394 14:27:14 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:22.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:22.395 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:22.395 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:22.653 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:22.653 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:11:22.653 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:22.653 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:22.653 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:22.653 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:22.653 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.653 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:22.653 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.653 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:22.653 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.653 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:22.653 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.653 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:22.653 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:22.653 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.653 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:22.653 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.653 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:22.653 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:22.653 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.653 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:22.653 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.653 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:22.653 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:22.653 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:22.653 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.654 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:22.654 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:34.853 Initializing NVMe Controllers 00:11:34.853 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:34.853 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:34.853 Initialization complete. Launching workers. 00:11:34.853 ======================================================== 00:11:34.853 Latency(us) 00:11:34.853 Device Information : IOPS MiB/s Average min max 00:11:34.853 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14665.70 57.29 4363.79 892.59 18035.46 00:11:34.853 ======================================================== 00:11:34.853 Total : 14665.70 57.29 4363.79 892.59 18035.46 00:11:34.853 00:11:34.853 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:34.853 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:34.853 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:34.853 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:34.853 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:34.853 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:34.853 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:34.853 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:34.853 rmmod nvme_tcp 00:11:34.853 rmmod nvme_fabrics 00:11:34.853 rmmod nvme_keyring 00:11:34.853 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:34.853 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:11:34.853 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:11:34.853 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@513 -- # '[' -n 1296047 ']' 00:11:34.853 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@514 -- # killprocess 1296047 00:11:34.853 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 1296047 ']' 00:11:34.853 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 1296047 00:11:34.853 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:11:34.853 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:34.854 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1296047 00:11:34.854 14:27:25 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:11:34.854 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:11:34.854 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1296047' 00:11:34.854 killing process with pid 1296047 00:11:34.854 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 1296047 00:11:34.854 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 1296047 00:11:34.854 nvmf threads initialize successfully 00:11:34.854 bdev subsystem init successfully 00:11:34.854 created a nvmf target service 00:11:34.854 create targets's poll groups done 00:11:34.854 all subsystems of target started 00:11:34.854 nvmf target is running 00:11:34.854 all subsystems of target stopped 00:11:34.854 destroy targets's poll groups done 00:11:34.854 destroyed the nvmf target service 00:11:34.854 bdev subsystem finish successfully 00:11:34.854 nvmf threads destroy successfully 00:11:34.854 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:34.854 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:11:34.854 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:11:34.854 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:11:34.854 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@787 -- # iptables-save 00:11:34.854 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:11:34.854 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@787 -- # iptables-restore 00:11:34.854 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:34.854 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:34.854 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:34.854 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:34.854 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:35.422 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:35.422 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:35.422 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:35.422 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:35.422 00:11:35.422 real 0m15.519s 00:11:35.422 user 0m42.536s 00:11:35.422 sys 0m3.456s 00:11:35.422 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:35.422 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:35.422 ************************************ 00:11:35.422 END TEST nvmf_example 00:11:35.422 ************************************ 00:11:35.422 14:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:35.422 14:27:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:35.422 14:27:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:35.422 14:27:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:35.422 ************************************ 00:11:35.422 START TEST nvmf_filesystem 00:11:35.422 ************************************ 00:11:35.422 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:35.422 * Looking for test storage... 00:11:35.684 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lcov --version 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:35.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.684 --rc genhtml_branch_coverage=1 00:11:35.684 --rc genhtml_function_coverage=1 00:11:35.684 --rc genhtml_legend=1 00:11:35.684 --rc geninfo_all_blocks=1 00:11:35.684 --rc geninfo_unexecuted_blocks=1 00:11:35.684 00:11:35.684 ' 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:35.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.684 --rc genhtml_branch_coverage=1 00:11:35.684 --rc genhtml_function_coverage=1 00:11:35.684 --rc genhtml_legend=1 00:11:35.684 --rc geninfo_all_blocks=1 00:11:35.684 --rc geninfo_unexecuted_blocks=1 00:11:35.684 00:11:35.684 ' 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:35.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.684 --rc genhtml_branch_coverage=1 00:11:35.684 --rc genhtml_function_coverage=1 00:11:35.684 --rc genhtml_legend=1 00:11:35.684 --rc geninfo_all_blocks=1 00:11:35.684 --rc geninfo_unexecuted_blocks=1 00:11:35.684 00:11:35.684 ' 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:35.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.684 --rc genhtml_branch_coverage=1 00:11:35.684 --rc genhtml_function_coverage=1 00:11:35.684 --rc genhtml_legend=1 00:11:35.684 --rc geninfo_all_blocks=1 00:11:35.684 --rc geninfo_unexecuted_blocks=1 00:11:35.684 00:11:35.684 ' 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:35.684 14:27:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:11:35.684 14:27:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB= 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER=n 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:35.684 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:11:35.685 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:11:35.685 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:35.685 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:11:35.685 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:11:35.685 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:11:35.685 14:27:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_RDMA=y 00:11:35.685 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:35.685 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:11:35.685 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:35.685 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:11:35.685 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:11:35.685 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=y 00:11:35.685 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:11:35.685 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_HAVE_EVP_MAC=y 00:11:35.685 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=n 00:11:35.685 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:11:35.685 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:11:35.685 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:11:35.685 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:35.685 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:11:35.685 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:11:35.685 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:11:35.685 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:11:35.685 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:35.685 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:11:35.685 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:11:35.685 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_SHARED=y 00:11:35.685 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:11:35.685 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:11:35.685 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:35.685 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_FC=n 00:11:35.685 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:11:35.685 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:11:35.685 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:11:35.685 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:11:35.685 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_TESTS=y 00:11:35.685 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:11:35.685 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:11:35.685 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:11:35.685 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:11:35.685 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:11:35.685 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:35.685 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:11:35.685 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:11:35.685 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_URING=n 00:11:35.685 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:35.685 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:35.685 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:35.685 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:35.685 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:35.685 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:35.685 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:35.685 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:35.685 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:35.685 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:35.685 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:35.685 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:35.685 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:35.685 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:35.685 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:35.685 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:35.685 #define SPDK_CONFIG_H 00:11:35.685 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:35.685 #define SPDK_CONFIG_APPS 1 00:11:35.685 #define SPDK_CONFIG_ARCH native 00:11:35.685 #undef SPDK_CONFIG_ASAN 00:11:35.685 #undef SPDK_CONFIG_AVAHI 00:11:35.685 #undef SPDK_CONFIG_CET 00:11:35.685 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:35.685 #define SPDK_CONFIG_COVERAGE 1 00:11:35.685 #define SPDK_CONFIG_CROSS_PREFIX 00:11:35.685 #undef SPDK_CONFIG_CRYPTO 00:11:35.685 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:35.685 #undef SPDK_CONFIG_CUSTOMOCF 00:11:35.685 #undef SPDK_CONFIG_DAOS 00:11:35.685 #define SPDK_CONFIG_DAOS_DIR 00:11:35.685 #define SPDK_CONFIG_DEBUG 1 00:11:35.685 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:35.685 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:35.685 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:35.685 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:35.685 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:35.685 #undef SPDK_CONFIG_DPDK_UADK 00:11:35.685 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:35.685 #define SPDK_CONFIG_EXAMPLES 1 00:11:35.685 #undef SPDK_CONFIG_FC 00:11:35.685 #define SPDK_CONFIG_FC_PATH 00:11:35.685 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:35.685 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:35.685 #define SPDK_CONFIG_FSDEV 1 00:11:35.685 #undef SPDK_CONFIG_FUSE 00:11:35.685 #undef SPDK_CONFIG_FUZZER 00:11:35.685 #define SPDK_CONFIG_FUZZER_LIB 00:11:35.685 #undef SPDK_CONFIG_GOLANG 00:11:35.685 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:35.685 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:35.685 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:35.685 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:35.685 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:35.685 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:35.685 #undef SPDK_CONFIG_HAVE_LZ4 00:11:35.685 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:35.685 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:35.685 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:35.685 #define SPDK_CONFIG_IDXD 1 00:11:35.685 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:35.685 #undef SPDK_CONFIG_IPSEC_MB 00:11:35.685 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:35.685 #define SPDK_CONFIG_ISAL 1 00:11:35.685 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:35.685 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:35.685 #define SPDK_CONFIG_LIBDIR 00:11:35.685 #undef SPDK_CONFIG_LTO 00:11:35.685 #define SPDK_CONFIG_MAX_LCORES 128 00:11:35.685 #define SPDK_CONFIG_NVME_CUSE 1 00:11:35.685 #undef SPDK_CONFIG_OCF 00:11:35.685 #define SPDK_CONFIG_OCF_PATH 00:11:35.685 #define SPDK_CONFIG_OPENSSL_PATH 00:11:35.685 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:35.685 #define SPDK_CONFIG_PGO_DIR 00:11:35.685 #undef SPDK_CONFIG_PGO_USE 00:11:35.685 #define SPDK_CONFIG_PREFIX /usr/local 00:11:35.685 #undef SPDK_CONFIG_RAID5F 00:11:35.685 #undef SPDK_CONFIG_RBD 00:11:35.685 #define SPDK_CONFIG_RDMA 1 00:11:35.685 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:35.685 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:35.685 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:35.685 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:35.685 #define SPDK_CONFIG_SHARED 1 00:11:35.685 #undef SPDK_CONFIG_SMA 00:11:35.685 
#define SPDK_CONFIG_TESTS 1 00:11:35.685 #undef SPDK_CONFIG_TSAN 00:11:35.685 #define SPDK_CONFIG_UBLK 1 00:11:35.685 #define SPDK_CONFIG_UBSAN 1 00:11:35.685 #undef SPDK_CONFIG_UNIT_TESTS 00:11:35.685 #undef SPDK_CONFIG_URING 00:11:35.685 #define SPDK_CONFIG_URING_PATH 00:11:35.685 #undef SPDK_CONFIG_URING_ZNS 00:11:35.685 #undef SPDK_CONFIG_USDT 00:11:35.685 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:35.685 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:35.685 #define SPDK_CONFIG_VFIO_USER 1 00:11:35.685 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:35.685 #define SPDK_CONFIG_VHOST 1 00:11:35.685 #define SPDK_CONFIG_VIRTIO 1 00:11:35.685 #undef SPDK_CONFIG_VTUNE 00:11:35.685 #define SPDK_CONFIG_VTUNE_DIR 00:11:35.685 #define SPDK_CONFIG_WERROR 1 00:11:35.685 #define SPDK_CONFIG_WPDK_DIR 00:11:35.685 #undef SPDK_CONFIG_XNVME 00:11:35.685 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:35.686 14:27:27 
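[editor's note] The long config.h dump a few entries back is not an error: common/applications.sh@23 slurps include/spdk/config.h into a [[ ... == *pattern* ]] test to learn whether the tree was built with SPDK_CONFIG_DEBUG before deciding if extra debug app options make sense. Stripped of the shell escaping, the probe amounts to the following sketch (path taken from the trace):

    # Sketch of the debug-build probe from common/applications.sh.
    config_h=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h
    if [[ -e $config_h && $(< "$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
        # only in this case does SPDK_AUTOTEST_DEBUG_APPS (0 in this run) matter
        echo "SPDK was configured with --enable-debug"
    fi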
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
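[editor's note] scripts/perf/pm/common, traced just above, chooses which resource monitors this run will start: cpu-load and vmstat always, cpu-temp and bmc-pm only because this node is bare-metal Linux rather than a QEMU guest or a container, with a small map recording which collectors must run under sudo. Roughly as below; the DMI product-name path is my assumption, since the trace masks the value it compared against QEMU:

    # Sketch of the monitor selection from scripts/perf/pm/common.
    declare -A MONITOR_RESOURCES_SUDO=(
        [collect-bmc-pm]=1
        [collect-cpu-load]=0
        [collect-cpu-temp]=0
        [collect-vmstat]=0
    )
    # only collect-bmc-pm needs sudo; cpu-load and vmstat always run
    MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)

    product_name=unknown
    [[ -r /sys/class/dmi/id/product_name ]] && product_name=$(< /sys/class/dmi/id/product_name)
    if [[ $(uname -s) == Linux && $product_name != QEMU && ! -e /.dockerenv ]]; then
        # bare metal: also sample CPU temperature and BMC power
        MONITOR_RESOURCES+=(collect-cpu-temp collect-bmc-pm)
    fi
    printf 'monitor: %s\n' "${MONITOR_RESOURCES[@]}"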
00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:35.686 14:27:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:35.686 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 
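[editor's note] The wall of ": 0" / ": 1" entries around this point (it continues below) is autotest_common.sh giving every SPDK_TEST_* and SPDK_RUN_* knob a default and exporting it, so a job only has to pre-export the handful of flags it wants flipped; the 1s visible here (NVMF, VFIOUSER, NVME_CLI, UBSAN, transport tcp, NICs e810) are simply the values in effect for this nvmf-tcp run. The idiom in isolation, using a hypothetical flag name:

    # Default-then-export pattern used for the test flags. If the caller already
    # exported SPDK_TEST_EXAMPLE=1, the ":=" leaves it alone; otherwise it becomes 0.
    : "${SPDK_TEST_EXAMPLE:=0}"
    export SPDK_TEST_EXAMPLE

    # Later scripts can then gate work on it arithmetically:
    if (( SPDK_TEST_EXAMPLE )); then
        echo "running the example suite"
    fi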
00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : v23.11 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:35.687 14:27:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:35.687 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:35.687 14:27:27 
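[editor's note] Worth noting rather than worrying about: LD_LIBRARY_PATH, PYTHONPATH and PATH above carry the same directories several times over, because the export scripts are re-sourced at each nesting level and keep prepending. The loader ignores the repeats. If a cleaner environment were ever wanted, a small helper along these lines (not part of the traced scripts) would do it:

    # Illustrative only: drop duplicate entries from a colon-separated
    # variable while preserving the original order.
    dedup_path() {
        local IFS=':' entry out=''
        local -A seen=()
        for entry in $1; do
            [[ -n $entry && -z ${seen[$entry]+x} ]] || continue
            seen[$entry]=1
            out+=${out:+:}$entry
        done
        printf '%s\n' "$out"
    }

    LD_LIBRARY_PATH=$(dedup_path "$LD_LIBRARY_PATH")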
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # 
AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j48 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 1297622 ]] 00:11:35.688 14:27:27 
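[editor's note] The pair "[[ -z 1297622 ]]" / "kill -0 1297622" at autotest_common.sh@329 is a liveness gate: 1297622 is a PID recorded by the harness (most likely the top-level autotest process), and kill -0 asks the kernel whether that PID still exists, without delivering any signal, before the script bothers provisioning scratch storage for it. The same check in standalone form (variable name is mine; the value is copied from the trace):

    # Hedged sketch: kill -0 sends no signal, it only reports via its exit
    # status whether the PID exists and we are allowed to signal it.
    pid=1297622   # taken from the trace; normally supplied by the harness
    if [[ -n $pid ]] && kill -0 "$pid" 2>/dev/null; then
        echo "process $pid is still running"
    else
        echo "process $pid is gone (or not ours to signal)"
    fi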
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 1297622 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.pk7ird 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.pk7ird/tests/target /tmp/spdk.pk7ird 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- common/autotest_common.sh@373 -- # avails["$mount"]=4096 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5284425728 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:11:35.688 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=52560207872 00:11:35.689 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=61988528128 00:11:35.689 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=9428320256 00:11:35.689 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:35.689 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:35.689 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:35.689 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=30982897664 00:11:35.689 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=30994264064 00:11:35.689 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=11366400 00:11:35.689 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:35.689 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:35.689 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:35.689 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=12375269376 00:11:35.689 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=12397707264 00:11:35.689 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=22437888 00:11:35.689 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:35.689 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:35.689 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:35.689 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=30992625664 00:11:35.689 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=30994264064 00:11:35.689 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=1638400 00:11:35.689 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:35.689 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:35.689 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:35.689 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=6198837248 00:11:35.689 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=6198849536 00:11:35.689 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:11:35.689 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:35.689 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:11:35.689 * Looking for test storage... 00:11:35.689 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:11:35.689 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:11:35.689 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:35.689 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:35.689 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:11:35.689 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=52560207872 00:11:35.689 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:11:35.689 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:11:35.689 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:11:35.689 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:11:35.689 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:11:35.689 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=11642912768 00:11:35.689 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:35.689 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:35.689 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:35.689 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:35.689 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:35.689 14:27:27 
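[editor's note] set_test_storage, traced across the last few screens, deserves a plain-language summary: it walks "df -T" into associative arrays keyed by mount point, finds the mount backing the test directory (here the overlay root with roughly 52 GB available), confirms it can absorb the ~2 GiB the test asked for without pushing the filesystem past 95% full, and only then exports SPDK_TEST_STORAGE and prints the "Found test storage" line above. A condensed sketch of that logic (requested_size simplified; the real run added a little slack):

    # Condensed sketch of the traced set_test_storage flow.
    requested_size=$((2 * 1024 * 1024 * 1024))   # ~2 GiB
    target_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target

    declare -A fss sizes uses avails
    while read -r source fs size use avail _ mount; do
        fss[$mount]=$fs
        sizes[$mount]=$((size * 1024))    # df -T reports 1K blocks, store bytes
        uses[$mount]=$((use * 1024))
        avails[$mount]=$((avail * 1024))
    done < <(df -T | grep -v Filesystem)

    # mount point that actually backs the candidate test directory
    mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/ {print $6}')

    if (( avails[$mount] >= requested_size )); then
        new_size=$((uses[$mount] + requested_size))
        # reject mounts that would end up more than 95% full after the test
        if (( new_size * 100 / sizes[$mount] <= 95 )); then
            export SPDK_TEST_STORAGE=$target_dir
            printf '* Found test storage at %s\n' "$SPDK_TEST_STORAGE"
        fi
    fi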
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:11:35.689 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1668 -- # set -o errtrace 00:11:35.689 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:11:35.689 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:35.689 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1672 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:35.689 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1673 -- # true 00:11:35.689 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1675 -- # xtrace_fd 00:11:35.689 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:35.689 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:35.689 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:35.689 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:35.689 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:35.689 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:35.689 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:35.689 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:35.689 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:35.689 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lcov --version 00:11:35.689 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:35.949 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:35.949 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:35.949 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:35.949 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:35.949 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:35.949 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:35.949 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:35.949 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:35.949 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:35.949 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:35.949 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:35.949 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:35.949 14:27:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:35.949 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:35.949 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:35.949 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:35.949 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:35.949 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:35.949 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:35.949 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:35.949 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:35.949 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:35.949 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:35.949 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:35.949 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:35.949 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:35.949 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:35.949 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:35.949 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:35.949 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:35.949 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:35.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.949 --rc genhtml_branch_coverage=1 00:11:35.949 --rc genhtml_function_coverage=1 00:11:35.949 --rc genhtml_legend=1 00:11:35.949 --rc geninfo_all_blocks=1 00:11:35.949 --rc geninfo_unexecuted_blocks=1 00:11:35.949 00:11:35.949 ' 00:11:35.949 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:35.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.949 --rc genhtml_branch_coverage=1 00:11:35.949 --rc genhtml_function_coverage=1 00:11:35.949 --rc genhtml_legend=1 00:11:35.949 --rc geninfo_all_blocks=1 00:11:35.949 --rc geninfo_unexecuted_blocks=1 00:11:35.949 00:11:35.949 ' 00:11:35.949 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:35.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.949 --rc genhtml_branch_coverage=1 00:11:35.949 --rc genhtml_function_coverage=1 00:11:35.949 --rc genhtml_legend=1 00:11:35.949 --rc geninfo_all_blocks=1 00:11:35.949 --rc geninfo_unexecuted_blocks=1 00:11:35.949 00:11:35.949 ' 00:11:35.949 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:35.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.949 --rc genhtml_branch_coverage=1 00:11:35.949 --rc 
genhtml_function_coverage=1 00:11:35.949 --rc genhtml_legend=1 00:11:35.949 --rc geninfo_all_blocks=1 00:11:35.949 --rc geninfo_unexecuted_blocks=1 00:11:35.949 00:11:35.949 ' 00:11:35.949 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:35.949 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:35.949 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:35.949 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:35.949 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:35.949 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:35.949 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:35.949 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:35.949 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:35.949 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:35.949 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:35.949 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:35.949 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:35.949 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:35.949 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:35.949 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:35.949 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:35.949 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:35.949 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:35.949 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:35.949 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:35.949 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:35.949 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:35.949 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.949 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.949 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.949 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:35.950 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.950 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:35.950 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:35.950 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:35.950 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:35.950 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:11:35.950 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:35.950 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:35.950 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:35.950 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:35.950 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:35.950 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:35.950 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:35.950 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:35.950 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:35.950 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:35.950 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:35.950 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:35.950 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:35.950 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:35.950 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:35.950 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:35.950 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:35.950 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:11:35.950 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:11:35.950 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:35.950 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:37.851 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:37.851 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:37.851 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:37.851 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:37.851 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:37.851 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:37.851 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:37.851 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:37.851 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:37.851 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:37.851 
14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:37.852 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 
00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:37.852 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:37.852 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:37.852 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 
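The gather_supported_nvmf_pci_devs block above walks the e810/x722/mlx PCI ID tables and maps each matching PCI function to its kernel net device through sysfs, reporting the cvl_0_0 and cvl_0_1 ports. A rough equivalent of that discovery step, limited to the single device ID seen in this log (0x8086:0x159b); the real helper covers more IDs and driver states.

  # Sketch: find Intel E810 (vendor 0x8086, device 0x159b) functions and list their up net devices.
  for pci in /sys/bus/pci/devices/*; do
      [[ $(cat "$pci/vendor") == 0x8086 && $(cat "$pci/device") == 0x159b ]] || continue
      for net in "$pci"/net/*; do
          [[ -e $net ]] || continue
          [[ $(cat "$net/operstate") == up ]] || continue
          echo "Found net devices under ${pci##*/}: ${net##*/}"
      done
  done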
00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # is_hw=yes 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:37.852 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:38.111 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:38.111 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:38.111 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:11:38.111 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:38.111 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.300 ms 00:11:38.111 00:11:38.111 --- 10.0.0.2 ping statistics --- 00:11:38.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:38.111 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:11:38.111 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:38.111 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:38.111 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:11:38.111 00:11:38.111 --- 10.0.0.1 ping statistics --- 00:11:38.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:38.111 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:11:38.111 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:38.111 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # return 0 00:11:38.111 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:38.111 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:38.111 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:11:38.111 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:11:38.111 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:38.111 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:11:38.111 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:11:38.111 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:38.111 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:38.111 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:38.111 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:38.111 ************************************ 00:11:38.111 START TEST nvmf_filesystem_no_in_capsule 00:11:38.111 ************************************ 00:11:38.111 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:11:38.111 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:38.111 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:38.111 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:38.111 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:38.111 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:38.111 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@505 -- # nvmfpid=1299384 00:11:38.111 14:27:29 
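Taken together, the nvmf_tcp_init trace above amounts to the following bring-up: a network namespace for the target side, one E810 port on each side of the 10.0.0.0/24 test subnet, an iptables accept rule for the NVMe/TCP port, and a ping in each direction before loading nvme-tcp. A condensed replay of those commands, with all names and addresses as they appear in the log:

  ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP traffic
  ping -c 1 10.0.0.2                                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator
  modprobe nvme-tcp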
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:38.111 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@506 -- # waitforlisten 1299384 00:11:38.111 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 1299384 ']' 00:11:38.111 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:38.111 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:38.111 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:38.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:38.111 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:38.111 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:38.111 [2024-11-02 14:27:30.029747] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:11:38.111 [2024-11-02 14:27:30.029854] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:38.111 [2024-11-02 14:27:30.115553] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:38.370 [2024-11-02 14:27:30.210417] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:38.370 [2024-11-02 14:27:30.210482] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:38.370 [2024-11-02 14:27:30.210512] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:38.370 [2024-11-02 14:27:30.210523] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:38.370 [2024-11-02 14:27:30.210533] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
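The nvmfappstart step above launches nvmf_tgt inside the target namespace and then blocks in waitforlisten until the application's RPC socket answers. A hedged stand-in for that pattern: the polling loop below is a simplification, and rpc_get_methods is used here only as a generic liveness probe, which may differ from the harness's exact check.

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # Wait for the target's RPC socket before configuring it (simplified waitforlisten).
  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done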
00:11:38.370 [2024-11-02 14:27:30.210595] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:38.370 [2024-11-02 14:27:30.210621] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:38.370 [2024-11-02 14:27:30.210680] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:11:38.370 [2024-11-02 14:27:30.210682] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.370 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:38.370 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:38.370 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:38.370 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:38.370 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:38.370 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:38.370 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:38.370 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:38.370 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.370 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:38.370 [2024-11-02 14:27:30.372895] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:38.370 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.370 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:38.370 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.370 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:38.629 Malloc1 00:11:38.629 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.629 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:38.629 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.629 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:38.629 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.629 14:27:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:38.629 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.629 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:38.629 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.629 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:38.629 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.629 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:38.629 [2024-11-02 14:27:30.553147] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:38.629 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.629 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:38.629 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:38.629 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:38.629 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:38.629 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:38.629 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:38.629 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.629 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:38.629 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.629 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:38.629 { 00:11:38.629 "name": "Malloc1", 00:11:38.629 "aliases": [ 00:11:38.629 "64bfb17f-fc2a-4dc3-854b-fbe60c1f5838" 00:11:38.629 ], 00:11:38.629 "product_name": "Malloc disk", 00:11:38.629 "block_size": 512, 00:11:38.629 "num_blocks": 1048576, 00:11:38.629 "uuid": "64bfb17f-fc2a-4dc3-854b-fbe60c1f5838", 00:11:38.629 "assigned_rate_limits": { 00:11:38.629 "rw_ios_per_sec": 0, 00:11:38.629 "rw_mbytes_per_sec": 0, 00:11:38.629 "r_mbytes_per_sec": 0, 00:11:38.629 "w_mbytes_per_sec": 0 00:11:38.629 }, 00:11:38.629 "claimed": true, 00:11:38.629 "claim_type": "exclusive_write", 00:11:38.629 "zoned": false, 00:11:38.629 "supported_io_types": { 00:11:38.629 "read": 
true, 00:11:38.629 "write": true, 00:11:38.629 "unmap": true, 00:11:38.629 "flush": true, 00:11:38.629 "reset": true, 00:11:38.629 "nvme_admin": false, 00:11:38.629 "nvme_io": false, 00:11:38.629 "nvme_io_md": false, 00:11:38.629 "write_zeroes": true, 00:11:38.629 "zcopy": true, 00:11:38.629 "get_zone_info": false, 00:11:38.629 "zone_management": false, 00:11:38.629 "zone_append": false, 00:11:38.629 "compare": false, 00:11:38.629 "compare_and_write": false, 00:11:38.629 "abort": true, 00:11:38.629 "seek_hole": false, 00:11:38.629 "seek_data": false, 00:11:38.629 "copy": true, 00:11:38.629 "nvme_iov_md": false 00:11:38.629 }, 00:11:38.629 "memory_domains": [ 00:11:38.629 { 00:11:38.629 "dma_device_id": "system", 00:11:38.629 "dma_device_type": 1 00:11:38.629 }, 00:11:38.629 { 00:11:38.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.629 "dma_device_type": 2 00:11:38.629 } 00:11:38.629 ], 00:11:38.629 "driver_specific": {} 00:11:38.629 } 00:11:38.629 ]' 00:11:38.629 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:38.629 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:38.629 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:38.629 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:38.629 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:38.629 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:38.629 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:38.629 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:39.561 14:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:39.561 14:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:39.561 14:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:39.561 14:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:39.561 14:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:41.459 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:41.459 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:41.459 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:11:41.459 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:41.460 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:41.460 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:41.460 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:41.460 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:41.460 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:41.460 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:41.460 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:41.460 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:41.460 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:41.460 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:41.460 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:41.460 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:41.460 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:41.717 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:42.650 14:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:43.584 14:27:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:43.584 14:27:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:43.584 14:27:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:43.584 14:27:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:43.584 14:27:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:43.584 ************************************ 00:11:43.584 START TEST filesystem_ext4 00:11:43.584 ************************************ 00:11:43.584 14:27:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 
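Before the per-filesystem tests begin, the trace from 14:27:30 onward configures the target over RPC and attaches the host: a TCP transport with in-capsule data size 0, a 512 MiB malloc bdev, a subsystem carrying that namespace with a 10.0.0.2:4420 listener, then an nvme connect from the initiator side and GPT partitioning of the resulting nvme0n1. Condensed below with values copied from the trace; plain rpc.py invocations stand in for the harness's rpc_cmd wrapper.

  rpc='scripts/rpc.py -s /var/tmp/spdk.sock'           # rpc_cmd wrapper assumed to forward to rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0    # -c 0: no in-capsule data for this test group
  $rpc bdev_malloc_create 512 512 -b Malloc1           # 512 MiB backing bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe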
00:11:43.584 14:27:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:43.584 14:27:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:43.584 14:27:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:43.584 14:27:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:11:43.584 14:27:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:43.584 14:27:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:11:43.584 14:27:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:11:43.584 14:27:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:11:43.584 14:27:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:11:43.584 14:27:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:43.584 mke2fs 1.47.0 (5-Feb-2023) 00:11:43.584 Discarding device blocks: 0/522240 done 00:11:43.842 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:43.842 Filesystem UUID: 3748b478-5ab0-48a9-9cc5-feb6e7007f64 00:11:43.842 Superblock backups stored on blocks: 00:11:43.842 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:43.842 00:11:43.842 Allocating group tables: 0/64 done 00:11:43.842 Writing inode tables: 0/64 done 00:11:46.368 Creating journal (8192 blocks): done 00:11:46.368 Writing superblocks and filesystem accounting information: 0/64 done 00:11:46.368 00:11:46.368 14:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:46.368 14:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:52.928 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:52.928 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:52.928 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:52.928 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:52.928 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:52.928 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:52.928 
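The ext4 run traced here, and the btrfs and xfs runs that follow, all exercise the same nvmf_filesystem_create sequence against the partition exported over NVMe/TCP; only the mkfs command differs. In outline, with commands as traced and the liveness check that immediately follows the unmount:

  mkfs.ext4 -F /dev/nvme0n1p1              # btrfs: mkfs.btrfs -f, xfs: mkfs.xfs -f
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa
  sync
  rm /mnt/device/aaa
  sync
  umount /mnt/device
  kill -0 "$nvmfpid"                       # nvmf_tgt must still be running after the I/O
  lsblk -l -o NAME | grep -q -w nvme0n1    # namespace still visible to the host
  lsblk -l -o NAME | grep -q -w nvme0n1p1  # partition still visible to the host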
14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1299384 00:11:52.928 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:52.928 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:52.928 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:52.928 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:52.928 00:11:52.928 real 0m9.045s 00:11:52.928 user 0m0.022s 00:11:52.928 sys 0m0.063s 00:11:52.928 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:52.928 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:52.928 ************************************ 00:11:52.928 END TEST filesystem_ext4 00:11:52.928 ************************************ 00:11:52.928 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:52.928 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:52.928 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:52.928 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:52.929 ************************************ 00:11:52.929 START TEST filesystem_btrfs 00:11:52.929 ************************************ 00:11:52.929 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:52.929 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:52.929 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:52.929 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:52.929 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:11:52.929 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:52.929 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:11:52.929 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:11:52.929 14:27:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:11:52.929 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:11:52.929 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:52.929 btrfs-progs v6.8.1 00:11:52.929 See https://btrfs.readthedocs.io for more information. 00:11:52.929 00:11:52.929 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:52.929 NOTE: several default settings have changed in version 5.15, please make sure 00:11:52.929 this does not affect your deployments: 00:11:52.929 - DUP for metadata (-m dup) 00:11:52.929 - enabled no-holes (-O no-holes) 00:11:52.929 - enabled free-space-tree (-R free-space-tree) 00:11:52.929 00:11:52.929 Label: (null) 00:11:52.929 UUID: f4efd914-ee6e-4ff1-b1a7-3aca69cf45d7 00:11:52.929 Node size: 16384 00:11:52.929 Sector size: 4096 (CPU page size: 4096) 00:11:52.929 Filesystem size: 510.00MiB 00:11:52.929 Block group profiles: 00:11:52.929 Data: single 8.00MiB 00:11:52.929 Metadata: DUP 32.00MiB 00:11:52.929 System: DUP 8.00MiB 00:11:52.929 SSD detected: yes 00:11:52.929 Zoned device: no 00:11:52.929 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:52.929 Checksum: crc32c 00:11:52.929 Number of devices: 1 00:11:52.929 Devices: 00:11:52.929 ID SIZE PATH 00:11:52.929 1 510.00MiB /dev/nvme0n1p1 00:11:52.929 00:11:52.929 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:11:52.929 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:53.494 14:27:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:53.494 14:27:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:53.494 14:27:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:53.495 14:27:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:53.495 14:27:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:53.495 14:27:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:53.495 14:27:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1299384 00:11:53.495 14:27:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:53.495 14:27:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:53.495 14:27:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:53.495 
14:27:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:53.495 00:11:53.495 real 0m0.872s 00:11:53.495 user 0m0.020s 00:11:53.495 sys 0m0.100s 00:11:53.495 14:27:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:53.495 14:27:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:53.495 ************************************ 00:11:53.495 END TEST filesystem_btrfs 00:11:53.495 ************************************ 00:11:53.495 14:27:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:53.495 14:27:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:53.495 14:27:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:53.495 14:27:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.495 ************************************ 00:11:53.495 START TEST filesystem_xfs 00:11:53.495 ************************************ 00:11:53.495 14:27:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:11:53.495 14:27:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:53.495 14:27:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:53.495 14:27:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:53.495 14:27:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:11:53.495 14:27:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:53.495 14:27:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:11:53.495 14:27:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:11:53.495 14:27:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:11:53.495 14:27:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:11:53.495 14:27:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:53.753 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:53.753 = sectsz=512 attr=2, projid32bit=1 00:11:53.753 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:53.753 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:53.753 data 
= bsize=4096 blocks=130560, imaxpct=25 00:11:53.753 = sunit=0 swidth=0 blks 00:11:53.753 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:53.753 log =internal log bsize=4096 blocks=16384, version=2 00:11:53.753 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:53.753 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:54.685 Discarding blocks...Done. 00:11:54.685 14:27:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:11:54.685 14:27:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:57.215 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:57.215 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:57.215 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:57.215 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:57.215 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:57.215 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:57.215 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1299384 00:11:57.215 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:57.215 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:57.215 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:57.215 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:57.215 00:11:57.215 real 0m3.424s 00:11:57.215 user 0m0.014s 00:11:57.215 sys 0m0.066s 00:11:57.215 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:57.215 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:57.215 ************************************ 00:11:57.215 END TEST filesystem_xfs 00:11:57.215 ************************************ 00:11:57.215 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:57.215 14:27:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:57.474 14:27:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:57.474 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.474 14:27:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:57.474 14:27:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:11:57.474 14:27:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:57.474 14:27:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:57.474 14:27:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:57.474 14:27:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:57.474 14:27:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:11:57.474 14:27:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:57.474 14:27:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.474 14:27:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:57.474 14:27:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.474 14:27:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:57.474 14:27:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1299384 00:11:57.474 14:27:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 1299384 ']' 00:11:57.474 14:27:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 1299384 00:11:57.474 14:27:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:11:57.474 14:27:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:57.474 14:27:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1299384 00:11:57.474 14:27:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:57.474 14:27:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:57.474 14:27:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1299384' 00:11:57.474 killing process with pid 1299384 00:11:57.474 14:27:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 1299384 00:11:57.474 14:27:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@974 -- # wait 1299384 00:11:58.051 14:27:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:58.051 00:11:58.051 real 0m19.888s 00:11:58.051 user 1m16.947s 00:11:58.051 sys 0m2.320s 00:11:58.051 14:27:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:58.051 14:27:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:58.051 ************************************ 00:11:58.051 END TEST nvmf_filesystem_no_in_capsule 00:11:58.051 ************************************ 00:11:58.051 14:27:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:58.051 14:27:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:58.051 14:27:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:58.051 14:27:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:58.051 ************************************ 00:11:58.051 START TEST nvmf_filesystem_in_capsule 00:11:58.051 ************************************ 00:11:58.051 14:27:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:11:58.051 14:27:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:58.051 14:27:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:58.051 14:27:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:58.051 14:27:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:58.051 14:27:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:58.051 14:27:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@505 -- # nvmfpid=1301894 00:11:58.051 14:27:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:58.051 14:27:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@506 -- # waitforlisten 1301894 00:11:58.051 14:27:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 1301894 ']' 00:11:58.051 14:27:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:58.051 14:27:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:58.051 14:27:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:58.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
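The in-capsule variant re-runs the same filesystem suite with the TCP transport's in-capsule data size set to 4096 bytes. Condensed into a minimal sketch, the target-side bring-up that the following entries perform through rpc_cmd looks roughly like the lines below; this assumes a locally built SPDK tree with scripts/rpc.py talking to the default /var/tmp/spdk.sock socket, and it drops the network-namespace wrapper and the -i/-e flags the CI job passes to nvmf_tgt:

./build/bin/nvmf_tgt -m 0xF &                                         # target app on 4 cores (CI wraps this in: ip netns exec cvl_0_0_ns_spdk ... -i 0 -e 0xFFFF)
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096      # TCP transport with 4096-byte in-capsule data
./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1                # 512 MiB malloc bdev, 512-byte blocks (1048576 blocks)
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 # initiator side; the run below also passes --hostnqn/--hostid

On the initiator the namespace then appears as nvme0n1 with serial SPDKISFASTANDAWESOME, which is what the waitforserial and lsblk greps later in the log key on.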
00:11:58.051 14:27:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:58.051 14:27:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:58.051 [2024-11-02 14:27:49.969591] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:11:58.051 [2024-11-02 14:27:49.969686] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:58.051 [2024-11-02 14:27:50.043888] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:58.309 [2024-11-02 14:27:50.140627] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:58.309 [2024-11-02 14:27:50.140691] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:58.309 [2024-11-02 14:27:50.140709] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:58.309 [2024-11-02 14:27:50.140723] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:58.309 [2024-11-02 14:27:50.140735] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:58.309 [2024-11-02 14:27:50.140793] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:58.309 [2024-11-02 14:27:50.140864] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:58.309 [2024-11-02 14:27:50.140912] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:11:58.309 [2024-11-02 14:27:50.140915] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.309 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:58.309 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:58.309 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:58.309 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:58.309 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:58.309 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:58.309 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:58.309 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:58.309 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.309 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:58.309 [2024-11-02 14:27:50.292008] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:58.309 14:27:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.309 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:58.309 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.309 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:58.568 Malloc1 00:11:58.568 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.568 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:58.568 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.568 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:58.568 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.568 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:58.568 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.568 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:58.568 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.568 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:58.568 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.568 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:58.568 [2024-11-02 14:27:50.471882] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:58.568 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.568 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:58.568 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:58.568 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:58.568 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:58.568 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:58.568 14:27:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:58.568 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.568 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:58.568 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.568 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:58.568 { 00:11:58.568 "name": "Malloc1", 00:11:58.568 "aliases": [ 00:11:58.568 "4ccffdc8-2d26-41ee-a37e-3fa4966d0212" 00:11:58.568 ], 00:11:58.568 "product_name": "Malloc disk", 00:11:58.568 "block_size": 512, 00:11:58.568 "num_blocks": 1048576, 00:11:58.568 "uuid": "4ccffdc8-2d26-41ee-a37e-3fa4966d0212", 00:11:58.568 "assigned_rate_limits": { 00:11:58.568 "rw_ios_per_sec": 0, 00:11:58.568 "rw_mbytes_per_sec": 0, 00:11:58.568 "r_mbytes_per_sec": 0, 00:11:58.568 "w_mbytes_per_sec": 0 00:11:58.568 }, 00:11:58.568 "claimed": true, 00:11:58.568 "claim_type": "exclusive_write", 00:11:58.568 "zoned": false, 00:11:58.568 "supported_io_types": { 00:11:58.568 "read": true, 00:11:58.568 "write": true, 00:11:58.568 "unmap": true, 00:11:58.568 "flush": true, 00:11:58.568 "reset": true, 00:11:58.568 "nvme_admin": false, 00:11:58.568 "nvme_io": false, 00:11:58.568 "nvme_io_md": false, 00:11:58.568 "write_zeroes": true, 00:11:58.568 "zcopy": true, 00:11:58.568 "get_zone_info": false, 00:11:58.568 "zone_management": false, 00:11:58.568 "zone_append": false, 00:11:58.568 "compare": false, 00:11:58.568 "compare_and_write": false, 00:11:58.568 "abort": true, 00:11:58.568 "seek_hole": false, 00:11:58.568 "seek_data": false, 00:11:58.568 "copy": true, 00:11:58.568 "nvme_iov_md": false 00:11:58.568 }, 00:11:58.568 "memory_domains": [ 00:11:58.568 { 00:11:58.568 "dma_device_id": "system", 00:11:58.568 "dma_device_type": 1 00:11:58.568 }, 00:11:58.568 { 00:11:58.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.568 "dma_device_type": 2 00:11:58.568 } 00:11:58.568 ], 00:11:58.568 "driver_specific": {} 00:11:58.568 } 00:11:58.568 ]' 00:11:58.568 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:58.568 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:58.568 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:58.568 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:58.568 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:58.568 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:58.568 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:58.568 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:59.509 14:27:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:59.509 14:27:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:59.509 14:27:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:59.509 14:27:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:59.509 14:27:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:12:01.421 14:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:01.421 14:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:01.421 14:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:01.421 14:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:01.421 14:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:01.422 14:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:12:01.422 14:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:01.422 14:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:01.422 14:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:01.422 14:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:01.422 14:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:01.422 14:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:01.422 14:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:01.422 14:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:01.422 14:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:01.422 14:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:01.422 14:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:01.681 14:27:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:02.249 14:27:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:03.187 14:27:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:03.187 14:27:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:03.187 14:27:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:03.187 14:27:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:03.187 14:27:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:03.187 ************************************ 00:12:03.187 START TEST filesystem_in_capsule_ext4 00:12:03.187 ************************************ 00:12:03.187 14:27:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:03.187 14:27:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:03.187 14:27:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:03.187 14:27:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:03.187 14:27:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:12:03.187 14:27:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:03.187 14:27:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:12:03.187 14:27:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:12:03.187 14:27:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:12:03.187 14:27:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:12:03.187 14:27:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:03.187 mke2fs 1.47.0 (5-Feb-2023) 00:12:03.187 Discarding device blocks: 0/522240 done 00:12:03.187 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:03.187 Filesystem UUID: 6a9dcec1-2679-431f-a9fd-18701ea53760 00:12:03.187 Superblock backups stored on blocks: 00:12:03.187 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:03.187 00:12:03.187 Allocating group tables: 0/64 done 00:12:03.187 Writing inode tables: 
0/64 done 00:12:03.448 Creating journal (8192 blocks): done 00:12:04.957 Writing superblocks and filesystem accounting information: 0/64 done 00:12:04.957 00:12:04.957 14:27:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:12:04.957 14:27:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:11.550 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:11.550 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:11.550 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:11.550 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:11.550 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:11.550 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:11.550 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1301894 00:12:11.550 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:11.550 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:11.550 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:11.550 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:11.550 00:12:11.550 real 0m7.552s 00:12:11.550 user 0m0.018s 00:12:11.550 sys 0m0.073s 00:12:11.550 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:11.550 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:11.550 ************************************ 00:12:11.550 END TEST filesystem_in_capsule_ext4 00:12:11.550 ************************************ 00:12:11.550 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:11.550 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:11.550 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:11.550 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:11.550 
************************************ 00:12:11.550 START TEST filesystem_in_capsule_btrfs 00:12:11.550 ************************************ 00:12:11.550 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:11.550 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:11.550 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:11.550 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:11.550 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:12:11.550 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:11.550 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:12:11.550 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:12:11.550 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:12:11.550 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:12:11.550 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:11.550 btrfs-progs v6.8.1 00:12:11.550 See https://btrfs.readthedocs.io for more information. 00:12:11.550 00:12:11.550 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:12:11.550 NOTE: several default settings have changed in version 5.15, please make sure 00:12:11.550 this does not affect your deployments: 00:12:11.550 - DUP for metadata (-m dup) 00:12:11.550 - enabled no-holes (-O no-holes) 00:12:11.550 - enabled free-space-tree (-R free-space-tree) 00:12:11.550 00:12:11.550 Label: (null) 00:12:11.550 UUID: e5034a4d-185e-43b2-bee1-14a3bd0fe328 00:12:11.550 Node size: 16384 00:12:11.550 Sector size: 4096 (CPU page size: 4096) 00:12:11.550 Filesystem size: 510.00MiB 00:12:11.550 Block group profiles: 00:12:11.550 Data: single 8.00MiB 00:12:11.550 Metadata: DUP 32.00MiB 00:12:11.550 System: DUP 8.00MiB 00:12:11.550 SSD detected: yes 00:12:11.550 Zoned device: no 00:12:11.550 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:11.550 Checksum: crc32c 00:12:11.550 Number of devices: 1 00:12:11.550 Devices: 00:12:11.550 ID SIZE PATH 00:12:11.550 1 510.00MiB /dev/nvme0n1p1 00:12:11.550 00:12:11.550 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:12:11.550 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:11.550 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:11.550 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:11.550 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:11.550 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:11.550 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:11.550 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:11.550 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1301894 00:12:11.550 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:11.550 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:11.550 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:11.550 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:11.550 00:12:11.550 real 0m0.613s 00:12:11.550 user 0m0.019s 00:12:11.550 sys 0m0.095s 00:12:11.550 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:11.550 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:12:11.550 ************************************ 00:12:11.550 END TEST filesystem_in_capsule_btrfs 00:12:11.550 ************************************ 00:12:11.550 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:11.550 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:11.550 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:11.550 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:11.550 ************************************ 00:12:11.550 START TEST filesystem_in_capsule_xfs 00:12:11.550 ************************************ 00:12:11.550 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:12:11.550 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:11.550 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:11.550 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:11.550 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:12:11.550 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:11.550 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:12:11.550 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:12:11.550 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:12:11.550 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:12:11.551 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:11.551 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:11.551 = sectsz=512 attr=2, projid32bit=1 00:12:11.551 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:11.551 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:11.551 data = bsize=4096 blocks=130560, imaxpct=25 00:12:11.551 = sunit=0 swidth=0 blks 00:12:11.551 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:11.551 log =internal log bsize=4096 blocks=16384, version=2 00:12:11.551 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:11.551 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:12.491 Discarding blocks...Done. 
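With mkfs.xfs finished, the entries that follow repeat the same smoke test the ext4 and btrfs cases ran: mount the partition, create and delete a file, unmount, then confirm the target process is still alive and the namespace is still exposed. Condensed into a sketch (device name and pid are taken from this run; $nvmfpid stands in for 1301894):

mkfs.xfs -f /dev/nvme0n1p1               # mkfs.ext4 -F / mkfs.btrfs -f in the other sub-tests
mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa
sync
rm /mnt/device/aaa
sync
umount /mnt/device
kill -0 "$nvmfpid"                       # nvmf_tgt (pid 1301894 here) must still be running
lsblk -l -o NAME | grep -q -w nvme0n1    # namespace still visible to the initiator
lsblk -l -o NAME | grep -q -w nvme0n1p1  # and so is its data partition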
00:12:12.491 14:28:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:12:12.491 14:28:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:15.103 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:15.103 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:15.103 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:15.103 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:15.103 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:15.103 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:15.103 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1301894 00:12:15.103 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:15.103 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:15.103 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:15.103 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:15.103 00:12:15.103 real 0m3.379s 00:12:15.103 user 0m0.013s 00:12:15.103 sys 0m0.070s 00:12:15.103 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:15.103 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:15.103 ************************************ 00:12:15.103 END TEST filesystem_in_capsule_xfs 00:12:15.103 ************************************ 00:12:15.103 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:15.103 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:15.103 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:15.103 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.103 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:15.103 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1219 -- # local i=0 00:12:15.103 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:15.103 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:15.103 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:15.103 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:15.103 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:12:15.103 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:15.103 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.103 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:15.103 14:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.103 14:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:15.103 14:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1301894 00:12:15.103 14:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 1301894 ']' 00:12:15.103 14:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 1301894 00:12:15.103 14:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:12:15.103 14:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:15.103 14:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1301894 00:12:15.103 14:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:15.103 14:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:15.103 14:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1301894' 00:12:15.103 killing process with pid 1301894 00:12:15.103 14:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 1301894 00:12:15.103 14:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 1301894 00:12:15.673 14:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:15.673 00:12:15.673 real 0m17.545s 00:12:15.673 user 1m7.794s 00:12:15.673 sys 0m2.184s 00:12:15.673 14:28:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:15.673 14:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:15.673 ************************************ 00:12:15.673 END TEST nvmf_filesystem_in_capsule 00:12:15.674 ************************************ 00:12:15.674 14:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:15.674 14:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@512 -- # nvmfcleanup 00:12:15.674 14:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:12:15.674 14:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:15.674 14:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:12:15.674 14:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:15.674 14:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:15.674 rmmod nvme_tcp 00:12:15.674 rmmod nvme_fabrics 00:12:15.674 rmmod nvme_keyring 00:12:15.674 14:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:15.674 14:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:12:15.674 14:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:12:15.674 14:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:12:15.674 14:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:12:15.674 14:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:12:15.674 14:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:12:15.674 14:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:12:15.674 14:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@787 -- # iptables-save 00:12:15.674 14:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:12:15.674 14:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@787 -- # iptables-restore 00:12:15.674 14:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:15.674 14:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:15.674 14:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:15.674 14:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:15.674 14:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:17.582 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:17.582 00:12:17.582 real 0m42.202s 00:12:17.582 user 2m25.835s 00:12:17.582 sys 0m6.164s 00:12:17.582 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:17.582 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:17.582 
************************************ 00:12:17.582 END TEST nvmf_filesystem 00:12:17.582 ************************************ 00:12:17.841 14:28:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:17.841 14:28:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:17.841 14:28:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:17.841 14:28:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:17.841 ************************************ 00:12:17.841 START TEST nvmf_target_discovery 00:12:17.841 ************************************ 00:12:17.841 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:17.841 * Looking for test storage... 00:12:17.841 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:17.841 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:17.841 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:12:17.841 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:17.841 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:17.841 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:17.841 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:17.841 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:17.841 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:17.841 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:17.841 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:17.841 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:17.841 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:17.841 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:17.841 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:17.841 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:17.841 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:17.841 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:17.841 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:17.841 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:17.841 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:17.841 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:17.841 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:17.841 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:17.841 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:17.841 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:17.841 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:17.841 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:17.841 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:17.841 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:17.841 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:17.841 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:17.841 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:17.841 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:17.841 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:17.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.841 --rc genhtml_branch_coverage=1 00:12:17.841 --rc genhtml_function_coverage=1 00:12:17.841 --rc genhtml_legend=1 00:12:17.841 --rc geninfo_all_blocks=1 00:12:17.841 --rc geninfo_unexecuted_blocks=1 00:12:17.841 00:12:17.841 ' 00:12:17.841 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:17.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.841 --rc genhtml_branch_coverage=1 00:12:17.841 --rc genhtml_function_coverage=1 00:12:17.841 --rc genhtml_legend=1 00:12:17.841 --rc geninfo_all_blocks=1 00:12:17.841 --rc geninfo_unexecuted_blocks=1 00:12:17.841 00:12:17.841 ' 00:12:17.841 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:17.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.841 --rc genhtml_branch_coverage=1 00:12:17.841 --rc genhtml_function_coverage=1 00:12:17.841 --rc genhtml_legend=1 00:12:17.841 --rc geninfo_all_blocks=1 00:12:17.841 --rc geninfo_unexecuted_blocks=1 00:12:17.841 00:12:17.841 ' 00:12:17.841 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:17.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.841 --rc genhtml_branch_coverage=1 00:12:17.841 --rc genhtml_function_coverage=1 00:12:17.841 --rc genhtml_legend=1 00:12:17.841 --rc geninfo_all_blocks=1 00:12:17.841 --rc geninfo_unexecuted_blocks=1 00:12:17.841 00:12:17.841 ' 00:12:17.841 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:17.841 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:17.841 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:17.841 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:17.841 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:17.841 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:17.841 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:17.841 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:17.841 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:17.841 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:17.841 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:17.841 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:17.841 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:17.841 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:17.841 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:17.841 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:17.841 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:17.841 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:17.841 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:17.841 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:17.841 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:17.841 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:17.841 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:17.841 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.842 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.842 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.842 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:17.842 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.842 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:12:17.842 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:17.842 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:17.842 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:17.842 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:17.842 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:17.842 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:17.842 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:17.842 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:17.842 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:17.842 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:17.842 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:12:17.842 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:17.842 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:17.842 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:17.842 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:17.842 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:12:17.842 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:17.842 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@472 -- # prepare_net_devs 00:12:17.842 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@434 -- # local -g is_hw=no 00:12:17.842 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@436 -- # remove_spdk_ns 00:12:17.842 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:17.842 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:17.842 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:17.842 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:12:17.842 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:12:17.842 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:12:17.842 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.381 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:20.381 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:20.381 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:20.381 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:20.381 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:20.381 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:20.381 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:20.381 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:20.381 14:28:11 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:20.381 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:20.381 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:20.381 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:20.381 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:20.381 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:20.381 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:20.382 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:20.382 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:20.382 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:20.382 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:20.382 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:20.382 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:20.382 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:20.382 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:20.382 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:20.382 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:20.382 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:20.382 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:12:20.382 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:12:20.382 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:12:20.382 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:12:20.382 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:12:20.382 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:12:20.382 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:12:20.382 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:20.382 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:20.382 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:12:20.382 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@370 -- # [[ ice 
== unbound ]] 00:12:20.382 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:20.382 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:20.382 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:12:20.382 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:12:20.382 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:20.382 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:20.382 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:12:20.382 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:12:20.382 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:20.382 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:20.382 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:12:20.382 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:12:20.382 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:12:20.382 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:12:20.382 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:12:20.382 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:20.382 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:12:20.382 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:20.382 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ up == up ]] 00:12:20.382 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:12:20.382 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:20.382 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:20.382 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:20.382 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:12:20.382 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:12:20.382 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:20.382 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:12:20.382 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:20.382 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ up == up ]] 00:12:20.382 14:28:11 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:12:20.382 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:20.382 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:20.382 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:20.382 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:12:20.382 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:12:20.382 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # is_hw=yes 00:12:20.382 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:12:20.382 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:12:20.382 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:12:20.382 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:20.382 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:20.382 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:20.382 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:20.382 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:20.382 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:20.382 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:20.382 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:20.382 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:20.382 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:20.382 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:20.382 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:20.382 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:20.382 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:20.382 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:20.382 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:20.382 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:20.382 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:20.382 14:28:11 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:20.382 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:20.382 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:20.382 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:20.382 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:20.382 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:20.382 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:12:20.382 00:12:20.382 --- 10.0.0.2 ping statistics --- 00:12:20.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:20.382 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:12:20.382 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:20.382 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:20.382 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:12:20.382 00:12:20.382 --- 10.0.0.1 ping statistics --- 00:12:20.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:20.382 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:12:20.382 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:20.382 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # return 0 00:12:20.382 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:12:20.382 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:20.382 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:12:20.382 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:12:20.382 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:20.382 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:12:20.382 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:12:20.382 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:20.383 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:12:20.383 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:20.383 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.383 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@505 -- # nvmfpid=1306159 00:12:20.383 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:20.383 14:28:12 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@506 -- # waitforlisten 1306159 00:12:20.383 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 1306159 ']' 00:12:20.383 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:20.383 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:20.383 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:20.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:20.383 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:20.383 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.383 [2024-11-02 14:28:12.165284] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:12:20.383 [2024-11-02 14:28:12.165367] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:20.383 [2024-11-02 14:28:12.237371] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:20.383 [2024-11-02 14:28:12.336212] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:20.383 [2024-11-02 14:28:12.336289] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:20.383 [2024-11-02 14:28:12.336313] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:20.383 [2024-11-02 14:28:12.336327] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:20.383 [2024-11-02 14:28:12.336338] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
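The trace above captures the start-up sequence the discovery test relies on: nvmf_tgt is launched inside the cvl_0_0_ns_spdk namespace with the job's core mask and trace flags, and the harness then blocks in waitforlisten until the application answers on /var/tmp/spdk.sock. A minimal stand-alone sketch of that pattern follows, assuming the default RPC socket shown in the log; the 30-second polling timeout is an assumed value, not taken from the suite.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# launch the target inside the test namespace with the same flags seen in the trace above
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# poll the default RPC socket until the app is up (timeout below is an assumption)
for _ in $(seq 1 30); do
    "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
    sleep 1
done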
00:12:20.383 [2024-11-02 14:28:12.340281] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:20.383 [2024-11-02 14:28:12.340320] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:20.383 [2024-11-02 14:28:12.340388] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:12:20.383 [2024-11-02 14:28:12.340392] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.643 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:20.643 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:12:20.643 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:12:20.643 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:20.643 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.643 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:20.643 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:20.643 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.643 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.643 [2024-11-02 14:28:12.506012] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:20.643 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.643 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:20.643 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:20.643 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:20.643 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.643 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.643 Null1 00:12:20.643 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.643 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:20.643 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.643 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.643 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.643 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:20.643 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.643 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.643 14:28:12 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.643 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:20.643 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.643 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.643 [2024-11-02 14:28:12.546376] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:20.643 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.643 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:20.643 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:20.643 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.643 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.643 Null2 00:12:20.643 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.643 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:20.643 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.643 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.643 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.644 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:20.644 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.644 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.644 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.644 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:20.644 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.644 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.644 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.644 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:20.644 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:20.644 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.644 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:12:20.644 Null3 00:12:20.644 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.644 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:20.644 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.644 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.644 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.644 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:20.644 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.644 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.644 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.644 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:20.644 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.644 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.644 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.644 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:20.644 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:20.644 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.644 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.644 Null4 00:12:20.644 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.644 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:20.644 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.644 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.644 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.644 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:20.644 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.644 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.644 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.644 14:28:12 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:20.644 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.644 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.644 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.644 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:20.644 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.644 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.644 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.644 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:20.644 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.644 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.644 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.644 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:12:20.905 00:12:20.905 Discovery Log Number of Records 6, Generation counter 6 00:12:20.905 =====Discovery Log Entry 0====== 00:12:20.905 trtype: tcp 00:12:20.905 adrfam: ipv4 00:12:20.905 subtype: current discovery subsystem 00:12:20.905 treq: not required 00:12:20.905 portid: 0 00:12:20.905 trsvcid: 4420 00:12:20.905 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:20.905 traddr: 10.0.0.2 00:12:20.905 eflags: explicit discovery connections, duplicate discovery information 00:12:20.905 sectype: none 00:12:20.905 =====Discovery Log Entry 1====== 00:12:20.905 trtype: tcp 00:12:20.905 adrfam: ipv4 00:12:20.905 subtype: nvme subsystem 00:12:20.905 treq: not required 00:12:20.905 portid: 0 00:12:20.905 trsvcid: 4420 00:12:20.905 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:20.905 traddr: 10.0.0.2 00:12:20.905 eflags: none 00:12:20.905 sectype: none 00:12:20.905 =====Discovery Log Entry 2====== 00:12:20.905 trtype: tcp 00:12:20.905 adrfam: ipv4 00:12:20.905 subtype: nvme subsystem 00:12:20.905 treq: not required 00:12:20.905 portid: 0 00:12:20.905 trsvcid: 4420 00:12:20.905 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:20.905 traddr: 10.0.0.2 00:12:20.905 eflags: none 00:12:20.905 sectype: none 00:12:20.905 =====Discovery Log Entry 3====== 00:12:20.905 trtype: tcp 00:12:20.905 adrfam: ipv4 00:12:20.905 subtype: nvme subsystem 00:12:20.905 treq: not required 00:12:20.905 portid: 0 00:12:20.905 trsvcid: 4420 00:12:20.905 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:20.905 traddr: 10.0.0.2 00:12:20.905 eflags: none 00:12:20.905 sectype: none 00:12:20.905 =====Discovery Log Entry 4====== 00:12:20.905 trtype: tcp 00:12:20.905 adrfam: ipv4 00:12:20.905 subtype: nvme subsystem 
00:12:20.905 treq: not required 00:12:20.905 portid: 0 00:12:20.905 trsvcid: 4420 00:12:20.905 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:20.905 traddr: 10.0.0.2 00:12:20.905 eflags: none 00:12:20.905 sectype: none 00:12:20.905 =====Discovery Log Entry 5====== 00:12:20.905 trtype: tcp 00:12:20.905 adrfam: ipv4 00:12:20.905 subtype: discovery subsystem referral 00:12:20.905 treq: not required 00:12:20.905 portid: 0 00:12:20.905 trsvcid: 4430 00:12:20.905 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:20.905 traddr: 10.0.0.2 00:12:20.905 eflags: none 00:12:20.905 sectype: none 00:12:20.905 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:20.905 Perform nvmf subsystem discovery via RPC 00:12:20.905 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:20.905 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.905 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.905 [ 00:12:20.905 { 00:12:20.905 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:20.905 "subtype": "Discovery", 00:12:20.905 "listen_addresses": [ 00:12:20.905 { 00:12:20.905 "trtype": "TCP", 00:12:20.905 "adrfam": "IPv4", 00:12:20.905 "traddr": "10.0.0.2", 00:12:20.905 "trsvcid": "4420" 00:12:20.905 } 00:12:20.905 ], 00:12:20.905 "allow_any_host": true, 00:12:20.905 "hosts": [] 00:12:20.905 }, 00:12:20.905 { 00:12:20.905 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:20.905 "subtype": "NVMe", 00:12:20.905 "listen_addresses": [ 00:12:20.905 { 00:12:20.905 "trtype": "TCP", 00:12:20.905 "adrfam": "IPv4", 00:12:20.905 "traddr": "10.0.0.2", 00:12:20.905 "trsvcid": "4420" 00:12:20.905 } 00:12:20.905 ], 00:12:20.905 "allow_any_host": true, 00:12:20.905 "hosts": [], 00:12:20.905 "serial_number": "SPDK00000000000001", 00:12:20.905 "model_number": "SPDK bdev Controller", 00:12:20.905 "max_namespaces": 32, 00:12:20.905 "min_cntlid": 1, 00:12:20.905 "max_cntlid": 65519, 00:12:20.905 "namespaces": [ 00:12:20.905 { 00:12:20.905 "nsid": 1, 00:12:20.905 "bdev_name": "Null1", 00:12:20.905 "name": "Null1", 00:12:20.905 "nguid": "B2D19EDD49E6406BBF2F2318510DEACB", 00:12:20.905 "uuid": "b2d19edd-49e6-406b-bf2f-2318510deacb" 00:12:20.905 } 00:12:20.905 ] 00:12:20.905 }, 00:12:20.905 { 00:12:20.906 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:20.906 "subtype": "NVMe", 00:12:20.906 "listen_addresses": [ 00:12:20.906 { 00:12:20.906 "trtype": "TCP", 00:12:20.906 "adrfam": "IPv4", 00:12:20.906 "traddr": "10.0.0.2", 00:12:20.906 "trsvcid": "4420" 00:12:20.906 } 00:12:20.906 ], 00:12:20.906 "allow_any_host": true, 00:12:20.906 "hosts": [], 00:12:20.906 "serial_number": "SPDK00000000000002", 00:12:20.906 "model_number": "SPDK bdev Controller", 00:12:20.906 "max_namespaces": 32, 00:12:20.906 "min_cntlid": 1, 00:12:20.906 "max_cntlid": 65519, 00:12:20.906 "namespaces": [ 00:12:20.906 { 00:12:20.906 "nsid": 1, 00:12:20.906 "bdev_name": "Null2", 00:12:20.906 "name": "Null2", 00:12:20.906 "nguid": "C602BDA499D04E2FB7A435E6B82BA9EF", 00:12:20.906 "uuid": "c602bda4-99d0-4e2f-b7a4-35e6b82ba9ef" 00:12:20.906 } 00:12:20.906 ] 00:12:20.906 }, 00:12:20.906 { 00:12:20.906 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:20.906 "subtype": "NVMe", 00:12:20.906 "listen_addresses": [ 00:12:20.906 { 00:12:20.906 "trtype": "TCP", 00:12:20.906 "adrfam": "IPv4", 00:12:20.906 "traddr": "10.0.0.2", 
00:12:20.906 "trsvcid": "4420" 00:12:20.906 } 00:12:20.906 ], 00:12:20.906 "allow_any_host": true, 00:12:20.906 "hosts": [], 00:12:20.906 "serial_number": "SPDK00000000000003", 00:12:20.906 "model_number": "SPDK bdev Controller", 00:12:20.906 "max_namespaces": 32, 00:12:20.906 "min_cntlid": 1, 00:12:20.906 "max_cntlid": 65519, 00:12:20.906 "namespaces": [ 00:12:20.906 { 00:12:20.906 "nsid": 1, 00:12:20.906 "bdev_name": "Null3", 00:12:20.906 "name": "Null3", 00:12:20.906 "nguid": "226B06130AF0462B9DF66CCC313D9437", 00:12:20.906 "uuid": "226b0613-0af0-462b-9df6-6ccc313d9437" 00:12:20.906 } 00:12:20.906 ] 00:12:20.906 }, 00:12:20.906 { 00:12:20.906 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:20.906 "subtype": "NVMe", 00:12:20.906 "listen_addresses": [ 00:12:20.906 { 00:12:20.906 "trtype": "TCP", 00:12:20.906 "adrfam": "IPv4", 00:12:20.906 "traddr": "10.0.0.2", 00:12:20.906 "trsvcid": "4420" 00:12:20.906 } 00:12:20.906 ], 00:12:20.906 "allow_any_host": true, 00:12:20.906 "hosts": [], 00:12:20.906 "serial_number": "SPDK00000000000004", 00:12:20.906 "model_number": "SPDK bdev Controller", 00:12:20.906 "max_namespaces": 32, 00:12:20.906 "min_cntlid": 1, 00:12:20.906 "max_cntlid": 65519, 00:12:20.906 "namespaces": [ 00:12:20.906 { 00:12:20.906 "nsid": 1, 00:12:20.906 "bdev_name": "Null4", 00:12:20.906 "name": "Null4", 00:12:20.906 "nguid": "51E3AF92D728440388AB256FE8BE85C3", 00:12:20.906 "uuid": "51e3af92-d728-4403-88ab-256fe8be85c3" 00:12:20.906 } 00:12:20.906 ] 00:12:20.906 } 00:12:20.906 ] 00:12:20.906 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.906 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:20.906 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:20.906 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:20.906 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.906 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.906 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.906 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:20.906 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.906 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.906 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.906 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:20.906 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:20.906 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.906 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.906 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.906 14:28:12 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:20.906 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.906 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.906 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.906 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:20.906 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:20.906 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.906 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.906 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.906 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:20.906 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.906 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.906 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.906 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:20.906 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:20.906 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.906 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:21.165 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.166 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:21.166 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.166 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:21.166 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.166 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:21.166 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.166 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:21.166 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.166 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:21.166 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.166 14:28:12 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:21.166 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:21.166 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.166 14:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:21.166 14:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:12:21.166 14:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:21.166 14:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:21.166 14:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # nvmfcleanup 00:12:21.166 14:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:21.166 14:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:21.166 14:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:21.166 14:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:21.166 14:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:21.166 rmmod nvme_tcp 00:12:21.166 rmmod nvme_fabrics 00:12:21.166 rmmod nvme_keyring 00:12:21.166 14:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:21.166 14:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:21.166 14:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:21.166 14:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@513 -- # '[' -n 1306159 ']' 00:12:21.166 14:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@514 -- # killprocess 1306159 00:12:21.166 14:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 1306159 ']' 00:12:21.166 14:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 1306159 00:12:21.166 14:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:12:21.166 14:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:21.166 14:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1306159 00:12:21.166 14:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:21.166 14:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:21.166 14:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1306159' 00:12:21.166 killing process with pid 1306159 00:12:21.166 14:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 1306159 00:12:21.166 14:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 1306159 00:12:21.424 14:28:13 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:12:21.424 14:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:12:21.424 14:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:12:21.425 14:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:12:21.425 14:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@787 -- # iptables-save 00:12:21.425 14:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:12:21.425 14:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@787 -- # iptables-restore 00:12:21.425 14:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:21.425 14:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:21.425 14:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:21.425 14:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:21.425 14:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:23.964 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:23.964 00:12:23.964 real 0m5.729s 00:12:23.964 user 0m4.889s 00:12:23.964 sys 0m1.925s 00:12:23.964 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:23.964 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:23.964 ************************************ 00:12:23.964 END TEST nvmf_target_discovery 00:12:23.964 ************************************ 00:12:23.964 14:28:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:23.964 14:28:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:23.964 14:28:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:23.964 14:28:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:23.964 ************************************ 00:12:23.964 START TEST nvmf_referrals 00:12:23.964 ************************************ 00:12:23.964 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:23.964 * Looking for test storage... 
00:12:23.964 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:23.964 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:23.964 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # lcov --version 00:12:23.964 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:23.964 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:23.964 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:23.964 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:23.964 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:23.964 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:23.964 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:23.964 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:23.964 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:23.964 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:23.964 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:23.964 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:23.964 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:23.964 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:23.964 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:23.964 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:23.964 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:23.964 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:23.964 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:23.964 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:23.964 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:23.964 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:23.964 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:23.964 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:23.964 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:23.964 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:23.964 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:23.964 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:23.964 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:23.964 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:23.964 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:23.964 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:23.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.964 --rc genhtml_branch_coverage=1 00:12:23.964 --rc genhtml_function_coverage=1 00:12:23.964 --rc genhtml_legend=1 00:12:23.964 --rc geninfo_all_blocks=1 00:12:23.964 --rc geninfo_unexecuted_blocks=1 00:12:23.964 00:12:23.964 ' 00:12:23.964 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:23.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.964 --rc genhtml_branch_coverage=1 00:12:23.964 --rc genhtml_function_coverage=1 00:12:23.964 --rc genhtml_legend=1 00:12:23.964 --rc geninfo_all_blocks=1 00:12:23.964 --rc geninfo_unexecuted_blocks=1 00:12:23.964 00:12:23.964 ' 00:12:23.964 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:23.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.964 --rc genhtml_branch_coverage=1 00:12:23.964 --rc genhtml_function_coverage=1 00:12:23.964 --rc genhtml_legend=1 00:12:23.964 --rc geninfo_all_blocks=1 00:12:23.964 --rc geninfo_unexecuted_blocks=1 00:12:23.964 00:12:23.964 ' 00:12:23.964 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:23.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.964 --rc genhtml_branch_coverage=1 00:12:23.964 --rc genhtml_function_coverage=1 00:12:23.964 --rc genhtml_legend=1 00:12:23.964 --rc geninfo_all_blocks=1 00:12:23.964 --rc geninfo_unexecuted_blocks=1 00:12:23.964 00:12:23.964 ' 00:12:23.964 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:23.964 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:12:23.964 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:23.964 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:23.964 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:23.965 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:23.965 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:23.965 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:23.965 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:23.965 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:23.965 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:23.965 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:23.965 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:23.965 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:23.965 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:23.965 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:23.965 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:23.965 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:23.965 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:23.965 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:12:23.965 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:23.965 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:23.965 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:23.965 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.965 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.965 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.965 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:23.965 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.965 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:23.965 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:23.965 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:23.965 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:23.965 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:23.965 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:23.965 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:23.965 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:23.965 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:23.965 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:23.965 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:23.965 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:23.965 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
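The trace above is test/nvmf/common.sh being sourced by referrals.sh: it fixes the test ports (4420/4421/4422), generates a host NQN with nvme gen-hostnqn, and records the matching host ID. A minimal sketch of how those values are produced and then used for discovery, with the address, port, and UUID taken from this run (the exact derivation of the host ID inside common.sh is assumed here, not quoted):

# sketch only -- values copied from this log, not from the test scripts themselves
HOSTNQN=$(nvme gen-hostnqn)          # e.g. nqn.2014-08.org.nvmexpress:uuid:5b23e107-...
HOSTID=${HOSTNQN##*uuid:}            # in this run the host ID equals the UUID part of the NQN
nvme discover --hostnqn="$HOSTNQN" --hostid="$HOSTID" -t tcp -a 10.0.0.2 -s 8009 -o json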
00:12:23.965 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:23.965 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:23.965 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:23.965 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:23.965 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:23.965 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:12:23.965 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:23.965 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@472 -- # prepare_net_devs 00:12:23.965 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@434 -- # local -g is_hw=no 00:12:23.965 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@436 -- # remove_spdk_ns 00:12:23.965 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:23.965 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:23.965 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:23.965 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:12:23.965 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:12:23.965 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:23.965 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.870 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:25.870 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:25.870 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:25.870 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:25.870 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:25.870 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:25.870 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:25.870 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:25.870 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:25.870 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:12:25.870 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:25.870 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:25.870 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:25.870 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:25.870 14:28:17 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:25.870 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:25.870 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:25.870 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:25.870 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:25.870 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:25.870 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:25.870 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:25.870 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:25.870 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:25.870 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:25.870 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:25.870 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:12:25.870 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:12:25.870 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:12:25.870 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:25.871 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:25.871 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:12:25.871 14:28:17 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ up == up ]] 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:25.871 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ up == up ]] 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:25.871 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # is_hw=yes 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:12:25.871 14:28:17 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:25.871 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:25.871 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:12:25.871 00:12:25.871 --- 10.0.0.2 ping statistics --- 00:12:25.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:25.871 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:25.871 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:25.871 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:12:25.871 00:12:25.871 --- 10.0.0.1 ping statistics --- 00:12:25.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:25.871 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # return 0 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@505 -- # nvmfpid=1308271 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@506 -- # waitforlisten 1308271 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 1308271 ']' 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:25.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
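The ping output above closes the nvmf_tcp_init step: the first NIC port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace as the target at 10.0.0.2, the second port (cvl_0_1) stays in the default namespace as the initiator at 10.0.0.1, an iptables rule admits TCP/4420, and reachability is ping-checked in both directions. A condensed sketch of that sequence, using only the device names and addresses shown in this run:

# sketch of the topology set up by nvmf_tcp_init in this run
ip netns add cvl_0_0_ns_spdk                       # target-side network namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # the log also tags it with an SPDK_NVMF comment
ping -c 1 10.0.0.2                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator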
00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:25.871 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:26.132 [2024-11-02 14:28:17.962303] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:12:26.132 [2024-11-02 14:28:17.962394] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:26.132 [2024-11-02 14:28:18.027763] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:26.132 [2024-11-02 14:28:18.117180] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:26.132 [2024-11-02 14:28:18.117266] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:26.132 [2024-11-02 14:28:18.117282] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:26.132 [2024-11-02 14:28:18.117307] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:26.132 [2024-11-02 14:28:18.117317] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:26.132 [2024-11-02 14:28:18.117389] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:26.132 [2024-11-02 14:28:18.117452] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:26.132 [2024-11-02 14:28:18.117517] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:12:26.132 [2024-11-02 14:28:18.117519] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.391 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:26.391 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:12:26.391 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:12:26.391 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:26.391 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:26.391 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:26.391 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:26.391 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.391 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:26.392 [2024-11-02 14:28:18.281075] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:26.392 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.392 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:26.392 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.392 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
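With nvmf_tgt now running inside the target namespace (pid 1308271 in this run), referrals.sh drives it over the RPC socket: create the TCP transport, add a discovery listener on 10.0.0.2:8009, then add, list, and remove referrals, cross-checking each state with nvme discover in the trace that follows. A hedged sketch of that RPC sequence, calling scripts/rpc.py directly instead of the test's rpc_cmd wrapper (transport options, addresses, and ports copied from this trace):

# sketch of the RPC calls traced below; rpc_cmd in the test wraps scripts/rpc.py
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430
scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430
scripts/rpc.py nvmf_discovery_get_referrals | jq length        # the test expects 3 at this point
scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430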
00:12:26.392 [2024-11-02 14:28:18.293343] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:26.392 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.392 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:26.392 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.392 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:26.392 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.392 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:26.392 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.392 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:26.392 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.392 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:26.392 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.392 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:26.392 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.392 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:26.392 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.392 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:26.392 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:26.392 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.392 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:26.392 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:26.392 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:26.392 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:26.392 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:26.392 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.392 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:26.392 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:26.392 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.392 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:26.392 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:26.392 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:26.392 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:26.392 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:26.392 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:26.392 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:26.392 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:26.652 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:26.652 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:26.652 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:26.652 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.652 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:26.653 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.653 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:26.653 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.653 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:26.653 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.653 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:26.653 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.653 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:26.653 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.653 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:26.653 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:26.653 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.653 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:26.653 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.653 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:26.653 14:28:18 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:26.653 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:26.653 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:26.653 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:26.653 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:26.653 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:26.912 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:26.912 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:26.912 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:26.912 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.912 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:26.912 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.912 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:26.912 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.912 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:26.912 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.912 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:26.912 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:26.912 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:26.912 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:26.913 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.913 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:26.913 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:26.913 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.913 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:26.913 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:26.913 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:26.913 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:12:26.913 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:26.913 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:26.913 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:26.913 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:27.170 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:27.171 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:27.171 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:27.171 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:27.171 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:27.171 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:27.171 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:27.428 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:27.428 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:27.428 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:27.428 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:27.428 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:27.428 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:27.428 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:27.428 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:27.428 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.428 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:27.428 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.428 14:28:19 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:27.428 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:27.428 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:27.428 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:27.428 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.428 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:27.428 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:27.428 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.428 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:27.428 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:27.428 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:27.428 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:27.429 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:27.429 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:27.429 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:27.429 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:27.696 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:27.696 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:27.697 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:27.697 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:27.697 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:27.697 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:27.697 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:27.959 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:27.959 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:27.959 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:27.959 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@76 -- # jq -r .subnqn 00:12:27.959 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:27.959 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:27.959 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:27.959 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:27.959 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.959 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:28.217 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.217 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:28.217 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.217 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:28.217 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:28.217 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.217 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:28.217 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:28.217 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:28.217 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:28.217 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:28.217 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:28.217 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:28.476 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:28.476 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:28.476 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:28.476 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:28.476 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # nvmfcleanup 00:12:28.476 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:28.476 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp 
']' 00:12:28.476 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:12:28.476 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:28.476 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:28.476 rmmod nvme_tcp 00:12:28.476 rmmod nvme_fabrics 00:12:28.476 rmmod nvme_keyring 00:12:28.476 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:28.476 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:28.476 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:28.476 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@513 -- # '[' -n 1308271 ']' 00:12:28.476 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@514 -- # killprocess 1308271 00:12:28.476 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 1308271 ']' 00:12:28.476 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 1308271 00:12:28.476 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:12:28.476 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:28.476 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1308271 00:12:28.476 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:28.476 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:28.476 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1308271' 00:12:28.476 killing process with pid 1308271 00:12:28.476 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 1308271 00:12:28.476 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 1308271 00:12:28.736 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:12:28.736 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:12:28.736 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:12:28.736 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:28.736 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@787 -- # iptables-save 00:12:28.736 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@787 -- # iptables-restore 00:12:28.736 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:12:28.736 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:28.736 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:28.736 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:28.736 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:28.736 14:28:20 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:30.645 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:30.645 00:12:30.645 real 0m7.181s 00:12:30.645 user 0m11.281s 00:12:30.645 sys 0m2.399s 00:12:30.645 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:30.645 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:30.645 ************************************ 00:12:30.645 END TEST nvmf_referrals 00:12:30.645 ************************************ 00:12:30.645 14:28:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:30.645 14:28:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:30.645 14:28:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:30.645 14:28:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:30.645 ************************************ 00:12:30.645 START TEST nvmf_connect_disconnect 00:12:30.645 ************************************ 00:12:30.645 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:30.976 * Looking for test storage... 00:12:30.976 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:30.976 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:30.976 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # lcov --version 00:12:30.976 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:30.976 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:30.976 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:30.976 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:30.976 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:30.976 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:30.976 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:30.976 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:30.976 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:30.976 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:30.976 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:30.976 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:30.976 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:30.976 14:28:22 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:12:30.976 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:30.976 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:30.976 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:30.976 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:30.976 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:30.976 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:30.976 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:30.976 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:30.976 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:30.976 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:30.976 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:30.976 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:30.976 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:30.976 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:30.976 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:30.976 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:30.976 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:30.976 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:30.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.976 --rc genhtml_branch_coverage=1 00:12:30.976 --rc genhtml_function_coverage=1 00:12:30.976 --rc genhtml_legend=1 00:12:30.976 --rc geninfo_all_blocks=1 00:12:30.976 --rc geninfo_unexecuted_blocks=1 00:12:30.976 00:12:30.976 ' 00:12:30.976 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:30.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.976 --rc genhtml_branch_coverage=1 00:12:30.976 --rc genhtml_function_coverage=1 00:12:30.976 --rc genhtml_legend=1 00:12:30.976 --rc geninfo_all_blocks=1 00:12:30.976 --rc geninfo_unexecuted_blocks=1 00:12:30.976 00:12:30.976 ' 00:12:30.976 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:30.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.976 --rc genhtml_branch_coverage=1 00:12:30.976 --rc genhtml_function_coverage=1 00:12:30.976 --rc genhtml_legend=1 00:12:30.976 --rc geninfo_all_blocks=1 00:12:30.976 --rc geninfo_unexecuted_blocks=1 00:12:30.976 00:12:30.976 ' 00:12:30.976 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:30.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.976 --rc genhtml_branch_coverage=1 00:12:30.976 --rc genhtml_function_coverage=1 00:12:30.976 --rc genhtml_legend=1 00:12:30.976 --rc geninfo_all_blocks=1 00:12:30.976 --rc geninfo_unexecuted_blocks=1 00:12:30.976 00:12:30.976 ' 00:12:30.976 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:30.976 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:30.976 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:30.976 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:30.976 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:30.976 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:30.976 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:30.976 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:30.976 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:30.976 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:30.976 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:30.976 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:30.976 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:30.976 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:30.976 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:30.976 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:30.976 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:30.976 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:30.976 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:30.976 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:30.976 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:30.976 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:30.977 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:30.977 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.977 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.977 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.977 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:30.977 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.977 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:30.977 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:30.977 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:30.977 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:30.977 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:30.977 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:30.977 14:28:22 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:30.977 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:30.977 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:30.977 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:30.977 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:30.977 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:30.977 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:30.977 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:30.977 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:12:30.977 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:30.977 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@472 -- # prepare_net_devs 00:12:30.977 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@434 -- # local -g is_hw=no 00:12:30.977 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@436 -- # remove_spdk_ns 00:12:30.977 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:30.977 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:30.977 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:30.977 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:12:30.977 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:12:30.977 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:30.977 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:33.523 
14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:33.523 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:12:33.523 14:28:24 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:33.523 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ up == up ]] 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:33.523 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ up == up ]] 00:12:33.523 14:28:24 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:33.523 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # is_hw=yes 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:33.523 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:33.523 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:33.523 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:33.523 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 
00:12:33.523 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:33.523 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:33.523 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:33.523 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:33.523 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:33.523 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:33.523 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.346 ms 00:12:33.523 00:12:33.523 --- 10.0.0.2 ping statistics --- 00:12:33.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:33.523 rtt min/avg/max/mdev = 0.346/0.346/0.346/0.000 ms 00:12:33.523 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:33.523 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:33.523 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:12:33.524 00:12:33.524 --- 10.0.0.1 ping statistics --- 00:12:33.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:33.524 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:12:33.524 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:33.524 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # return 0 00:12:33.524 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:12:33.524 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:33.524 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:12:33.524 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:12:33.524 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:33.524 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:12:33.524 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:12:33.524 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:33.524 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:12:33.524 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:33.524 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:33.524 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@505 -- # nvmfpid=1310579 00:12:33.524 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:33.524 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@506 -- # waitforlisten 1310579 00:12:33.524 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 1310579 ']' 00:12:33.524 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:33.524 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:33.524 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:33.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:33.524 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:33.524 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:33.524 [2024-11-02 14:28:25.271583] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:12:33.524 [2024-11-02 14:28:25.271685] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:33.524 [2024-11-02 14:28:25.343124] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:33.524 [2024-11-02 14:28:25.438520] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:33.524 [2024-11-02 14:28:25.438596] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:33.524 [2024-11-02 14:28:25.438623] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:33.524 [2024-11-02 14:28:25.438636] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:33.524 [2024-11-02 14:28:25.438648] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
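[editorial note] The xtrace lines above amount to the following target-side bring-up; this is a condensed sketch reconstructed from the commands logged in this run (the cvl_0_0/cvl_0_1 interface names, the cvl_0_0_ns_spdk namespace, the 10.0.0.1/10.0.0.2 addresses and the nvmf_tgt flags are the values shown above; backgrounding the target with & is an assumption, and the logged iptables rule additionally carries an SPDK_NVMF comment tag):
  ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                                        # target runs in its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator-side address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target-side address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                  # initiator -> target reachability
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator reachability
  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &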
00:12:33.524 [2024-11-02 14:28:25.438716] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:33.524 [2024-11-02 14:28:25.438770] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:33.524 [2024-11-02 14:28:25.438834] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:12:33.524 [2024-11-02 14:28:25.438837] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.524 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:33.524 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:12:33.524 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:12:33.524 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:33.524 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:33.788 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:33.788 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:33.788 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.788 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:33.788 [2024-11-02 14:28:25.606084] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:33.788 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.788 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:33.789 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.789 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:33.789 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.789 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:33.789 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:33.789 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.789 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:33.789 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.789 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:33.789 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.789 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:33.789 14:28:25 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.789 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:33.789 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.789 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:33.789 [2024-11-02 14:28:25.666464] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:33.789 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.789 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:12:33.789 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:12:33.789 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:12:33.789 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:36.324 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:38.864 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.767 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.305 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:45.211 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:47.746 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:50.282 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:52.185 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:54.787 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:56.690 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:59.226 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.763 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:04.299 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:06.203 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:08.741 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:10.653 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:13.190 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:15.725 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:17.632 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:20.165 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:22.713 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:24.618 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:27.155 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:29.694 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:31.600 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:34.133 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:36.669 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:38.635 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:41.169 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:43.709 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:45.615 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:48.145 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:50.052 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:52.589 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:55.123 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:57.662 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:59.569 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:02.107 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:04.010 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:06.542 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:09.074 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:10.976 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:13.505 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:16.040 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:18.574 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:20.480 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:23.084 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:24.990 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:27.527 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:30.083 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:31.992 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:34.523 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:37.059 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:38.967 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:41.504 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:44.041 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:45.946 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:48.481 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:51.020 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:52.924 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:55.464 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:58.000 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:59.906 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:02.442 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:04.978 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:06.923 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:09.483 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:11.391 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:13.931 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:16.469 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:18.391 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:20.925 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:23.460 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:25.364 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:27.898 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:29.808 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:32.343 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:34.878 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:36.781 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:15:39.341 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:41.880 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:43.789 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:46.325 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:48.865 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:50.778 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:53.365 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:55.901 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:57.810 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:00.345 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:02.882 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:04.785 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:07.324 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:09.862 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:12.401 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:14.305 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:16.839 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:19.379 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:21.288 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:23.825 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:26.358 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:26.358 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:16:26.358 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:16:26.358 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # nvmfcleanup 00:16:26.358 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:16:26.358 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:26.358 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:16:26.358 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:26.358 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:26.358 rmmod nvme_tcp 00:16:26.358 rmmod nvme_fabrics 00:16:26.358 rmmod nvme_keyring 00:16:26.358 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:26.358 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:16:26.358 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:16:26.358 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@513 -- # '[' -n 1310579 ']' 00:16:26.358 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@514 -- # killprocess 1310579 00:16:26.358 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 1310579 ']' 00:16:26.358 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 1310579 00:16:26.358 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 
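[editorial note] For reference, the subsystem configuration and the 100-iteration loop exercised by test/nvmf/target/connect_disconnect.sh above reduce to roughly the following; the scripts/rpc.py invocation and the exact loop body are assumptions inferred from the rpc_cmd traces and the nvme-cli output in this log, not a verbatim copy of the script:
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  scripts/rpc.py bdev_malloc_create 64 512                            # creates Malloc0 (64 MiB, 512 B blocks)
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  for i in $(seq 1 100); do
      nvme connect -i 8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
          --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
          --hostid=5b23e107-7094-e311-b1cb-001e67a97d55
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1                   # emits the "disconnected 1 controller(s)" lines above
  done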
00:16:26.358 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:26.358 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1310579 00:16:26.358 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:26.358 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:26.358 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1310579' 00:16:26.358 killing process with pid 1310579 00:16:26.358 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 1310579 00:16:26.358 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 1310579 00:16:26.358 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:16:26.358 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:16:26.358 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:16:26.358 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:16:26.358 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@787 -- # iptables-save 00:16:26.358 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:16:26.358 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@787 -- # iptables-restore 00:16:26.358 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:26.358 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:26.358 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:26.358 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:26.358 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:28.895 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:28.895 00:16:28.895 real 3m57.702s 00:16:28.895 user 15m5.663s 00:16:28.895 sys 0m34.605s 00:16:28.895 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:28.895 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:28.895 ************************************ 00:16:28.895 END TEST nvmf_connect_disconnect 00:16:28.895 ************************************ 00:16:28.895 14:32:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:28.895 14:32:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:28.895 14:32:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:28.895 14:32:20 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:28.895 ************************************ 00:16:28.895 START TEST nvmf_multitarget 00:16:28.895 ************************************ 00:16:28.895 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:28.895 * Looking for test storage... 00:16:28.895 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:28.895 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:28.895 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # lcov --version 00:16:28.895 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:28.895 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:28.895 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:28.895 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:28.895 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:28.895 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:16:28.895 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:28.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.896 --rc genhtml_branch_coverage=1 00:16:28.896 --rc genhtml_function_coverage=1 00:16:28.896 --rc genhtml_legend=1 00:16:28.896 --rc geninfo_all_blocks=1 00:16:28.896 --rc geninfo_unexecuted_blocks=1 00:16:28.896 00:16:28.896 ' 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:28.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.896 --rc genhtml_branch_coverage=1 00:16:28.896 --rc genhtml_function_coverage=1 00:16:28.896 --rc genhtml_legend=1 00:16:28.896 --rc geninfo_all_blocks=1 00:16:28.896 --rc geninfo_unexecuted_blocks=1 00:16:28.896 00:16:28.896 ' 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:28.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.896 --rc genhtml_branch_coverage=1 00:16:28.896 --rc genhtml_function_coverage=1 00:16:28.896 --rc genhtml_legend=1 00:16:28.896 --rc geninfo_all_blocks=1 00:16:28.896 --rc geninfo_unexecuted_blocks=1 00:16:28.896 00:16:28.896 ' 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:28.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.896 --rc genhtml_branch_coverage=1 00:16:28.896 --rc genhtml_function_coverage=1 00:16:28.896 --rc genhtml_legend=1 00:16:28.896 --rc geninfo_all_blocks=1 00:16:28.896 --rc geninfo_unexecuted_blocks=1 00:16:28.896 00:16:28.896 ' 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:28.896 14:32:20 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:28.896 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:28.896 14:32:20 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@472 -- # prepare_net_devs 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@434 -- # local -g is_hw=no 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@436 -- # remove_spdk_ns 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:16:28.896 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:16:28.897 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:16:28.897 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:30.806 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:30.806 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:16:30.806 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:30.806 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:30.806 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:30.806 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:30.806 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:30.806 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:16:30.806 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:30.806 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:16:30.806 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:16:30.806 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:16:30.806 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:16:30.806 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:16:30.806 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:16:30.806 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:30.806 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:30.806 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
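The "[: : integer expression expected" warning that nvmf/common.sh line 33 printed earlier in this trace comes from a numeric test being run with an empty operand, i.e. '[' '' -eq 1 ']'. Purely as a hedged sketch (SOME_FLAG is a placeholder name, not the variable actually tested in common.sh), a guard of the following shape avoids that warning:

  # Placeholder illustration only -- not the real nvmf/common.sh code.
  some_flag=${SOME_FLAG:-0}          # default an unset/empty flag to 0
  if [ "$some_flag" -eq 1 ]; then    # the numeric test now always has an operand
      echo "flag enabled"
  fi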
00:16:30.806 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:30.806 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:30.806 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:30.806 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:30.806 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:30.807 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:30.807 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 
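The "Found 0000:0a:00.0 (0x8086 - 0x159b)" lines above come from scanning the PCI bus for supported NICs; 0x159b is an Intel E810 device ID. A minimal sysfs walk in the same spirit -- an illustrative sketch, not the actual gather_supported_nvmf_pci_devs implementation -- would look roughly like:

  #!/usr/bin/env bash
  # Sketch: report PCI functions whose vendor/device IDs match an Intel E810 NIC.
  intel=0x8086
  for pci in /sys/bus/pci/devices/*; do
      vendor=$(cat "$pci/vendor")
      device=$(cat "$pci/device")
      if [[ $vendor == "$intel" && ( $device == 0x1592 || $device == 0x159b ) ]]; then
          echo "Found ${pci##*/} ($vendor - $device)"
      fi
  done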
00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ up == up ]] 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:30.807 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ up == up ]] 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:30.807 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # is_hw=yes 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:30.807 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:30.807 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.283 ms 00:16:30.807 00:16:30.807 --- 10.0.0.2 ping statistics --- 00:16:30.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.807 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:30.807 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:30.807 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:16:30.807 00:16:30.807 --- 10.0.0.1 ping statistics --- 00:16:30.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.807 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # return 0 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@505 -- # nvmfpid=1341748 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@506 -- # waitforlisten 1341748 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 1341748 ']' 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:30.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:30.807 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:31.067 [2024-11-02 14:32:22.888841] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
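Condensed from the trace above: the initiator keeps cvl_0_1 (10.0.0.1/24) in the default namespace, cvl_0_0 (10.0.0.2/24) is moved into the cvl_0_0_ns_spdk namespace where nvmf_tgt runs, and an iptables rule admits TCP port 4420. A stripped-down sketch of that sequence (paths shortened, error handling omitted; the socket-polling loop is only a stand-in for the harness's waitforlisten):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target address
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done  # wait for the RPC socket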
00:16:31.067 [2024-11-02 14:32:22.888927] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:31.067 [2024-11-02 14:32:22.960124] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:31.067 [2024-11-02 14:32:23.053567] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:31.067 [2024-11-02 14:32:23.053644] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:31.067 [2024-11-02 14:32:23.053660] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:31.067 [2024-11-02 14:32:23.053674] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:31.067 [2024-11-02 14:32:23.053686] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:31.067 [2024-11-02 14:32:23.053754] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:31.067 [2024-11-02 14:32:23.053809] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:31.067 [2024-11-02 14:32:23.053930] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:16:31.067 [2024-11-02 14:32:23.053933] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:31.326 14:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:31.326 14:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:16:31.326 14:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:16:31.326 14:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:31.326 14:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:31.326 14:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:31.326 14:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:31.326 14:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:31.326 14:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:16:31.326 14:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:16:31.326 14:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:16:31.584 "nvmf_tgt_1" 00:16:31.584 14:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:16:31.584 "nvmf_tgt_2" 00:16:31.584 14:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
00:16:31.584 14:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:16:31.843 14:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:16:31.843 14:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:16:31.843 true 00:16:31.843 14:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:16:31.843 true 00:16:32.102 14:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:32.103 14:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:16:32.103 14:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:16:32.103 14:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:16:32.103 14:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:16:32.103 14:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # nvmfcleanup 00:16:32.103 14:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:16:32.103 14:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:32.103 14:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:16:32.103 14:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:32.103 14:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:32.103 rmmod nvme_tcp 00:16:32.103 rmmod nvme_fabrics 00:16:32.103 rmmod nvme_keyring 00:16:32.103 14:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:32.103 14:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:16:32.103 14:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:16:32.103 14:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@513 -- # '[' -n 1341748 ']' 00:16:32.103 14:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@514 -- # killprocess 1341748 00:16:32.103 14:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 1341748 ']' 00:16:32.103 14:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 1341748 00:16:32.103 14:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:16:32.103 14:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:32.103 14:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1341748 00:16:32.103 14:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:32.103 14:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:32.103 14:32:24 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1341748' 00:16:32.103 killing process with pid 1341748 00:16:32.103 14:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 1341748 00:16:32.103 14:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 1341748 00:16:32.362 14:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:16:32.362 14:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:16:32.362 14:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:16:32.362 14:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:16:32.362 14:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@787 -- # iptables-save 00:16:32.362 14:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:16:32.362 14:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@787 -- # iptables-restore 00:16:32.362 14:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:32.362 14:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:32.362 14:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:32.362 14:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:32.362 14:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:34.955 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:34.955 00:16:34.955 real 0m5.952s 00:16:34.955 user 0m6.745s 00:16:34.955 sys 0m1.994s 00:16:34.955 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:34.955 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:34.955 ************************************ 00:16:34.955 END TEST nvmf_multitarget 00:16:34.955 ************************************ 00:16:34.955 14:32:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:34.955 14:32:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:34.955 14:32:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:34.955 14:32:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:34.955 ************************************ 00:16:34.955 START TEST nvmf_rpc 00:16:34.955 ************************************ 00:16:34.955 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:34.955 * Looking for test storage... 
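The nvmf_multitarget test that just ended (END TEST banner above) boils down to a short RPC sequence: confirm only the default target exists, create two named targets, confirm the new count, then delete them again. Reduced to its essence, with the same script and flags that appear in the trace (path shortened):

  rpc_py=./test/nvmf/target/multitarget_rpc.py
  [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]   # only the default target
  $rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32
  $rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
  [ "$($rpc_py nvmf_get_targets | jq length)" -eq 3 ]   # default + the two new targets
  $rpc_py nvmf_delete_target -n nvmf_tgt_1
  $rpc_py nvmf_delete_target -n nvmf_tgt_2
  [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]   # back to just the default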
00:16:34.955 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:34.955 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:34.955 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:16:34.955 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:34.955 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:34.955 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:34.955 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:34.955 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:34.955 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:16:34.955 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:16:34.955 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:16:34.955 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:16:34.955 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:16:34.955 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:16:34.955 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:16:34.955 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:34.955 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:16:34.955 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:16:34.955 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:34.955 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:34.955 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:16:34.955 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:16:34.955 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:34.955 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:16:34.955 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:34.955 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:16:34.955 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:16:34.955 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:34.955 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:16:34.955 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:34.955 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:34.955 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:34.955 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:16:34.955 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:34.955 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:34.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:34.955 --rc genhtml_branch_coverage=1 00:16:34.955 --rc genhtml_function_coverage=1 00:16:34.955 --rc genhtml_legend=1 00:16:34.955 --rc geninfo_all_blocks=1 00:16:34.955 --rc geninfo_unexecuted_blocks=1 00:16:34.955 00:16:34.955 ' 00:16:34.955 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:34.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:34.955 --rc genhtml_branch_coverage=1 00:16:34.955 --rc genhtml_function_coverage=1 00:16:34.955 --rc genhtml_legend=1 00:16:34.955 --rc geninfo_all_blocks=1 00:16:34.955 --rc geninfo_unexecuted_blocks=1 00:16:34.955 00:16:34.955 ' 00:16:34.955 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:34.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:34.955 --rc genhtml_branch_coverage=1 00:16:34.955 --rc genhtml_function_coverage=1 00:16:34.955 --rc genhtml_legend=1 00:16:34.955 --rc geninfo_all_blocks=1 00:16:34.955 --rc geninfo_unexecuted_blocks=1 00:16:34.955 00:16:34.955 ' 00:16:34.955 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:34.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:34.955 --rc genhtml_branch_coverage=1 00:16:34.955 --rc genhtml_function_coverage=1 00:16:34.955 --rc genhtml_legend=1 00:16:34.955 --rc geninfo_all_blocks=1 00:16:34.955 --rc geninfo_unexecuted_blocks=1 00:16:34.955 00:16:34.955 ' 00:16:34.955 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:34.955 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:16:34.955 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
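nvmf/common.sh, sourced here for the nvmf_rpc test, establishes the connection defaults used throughout these runs (NVMF_PORT=4420, a host NQN from nvme gen-hostnqn, subsystem nqn.2016-06.io.spdk:testnqn). As an illustration only -- this connect is not itself part of the rpc test -- an initiator would consume those defaults like so:

  hostnqn=$(nvme gen-hostnqn)
  nvme connect -t tcp -a 10.0.0.2 -s 4420 \
      -n nqn.2016-06.io.spdk:testnqn --hostnqn "$hostnqn"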
00:16:34.955 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:34.955 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:34.955 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:34.955 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:34.955 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:34.955 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:34.955 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:34.955 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:34.955 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:34.955 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:34.955 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:34.955 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:34.955 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:34.955 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:34.955 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:34.955 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:34.955 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:16:34.955 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:34.955 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:34.955 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:34.955 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.955 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.956 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.956 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:16:34.956 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.956 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:16:34.956 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:34.956 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:34.956 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:34.956 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:34.956 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:34.956 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:34.956 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:34.956 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:34.956 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:34.956 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:34.956 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:16:34.956 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:16:34.956 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:16:34.956 14:32:26 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:34.956 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@472 -- # prepare_net_devs 00:16:34.956 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@434 -- # local -g is_hw=no 00:16:34.956 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@436 -- # remove_spdk_ns 00:16:34.956 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:34.956 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:34.956 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:34.956 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:16:34.956 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:16:34.956 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:16:34.956 14:32:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.860 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:36.861 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:16:36.861 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:36.861 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:36.861 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:36.861 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:36.861 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:36.861 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:16:36.861 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:36.861 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:16:36.861 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:16:36.861 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:16:36.861 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:16:36.861 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:16:36.861 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:16:36.861 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:36.861 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:36.861 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:36.861 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:36.861 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:36.861 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:36.861 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:36.861 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:36.861 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:36.861 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:36.861 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:36.861 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:16:36.861 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:16:36.861 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:16:36.861 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:16:36.861 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:16:36.861 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:16:36.861 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:16:36.861 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:36.861 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:36.861 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:16:36.861 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:16:36.861 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:36.861 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:36.861 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:16:36.861 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:16:36.861 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:36.861 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:36.861 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:16:36.861 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:16:36.861 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:36.861 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:36.861 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:16:36.861 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:16:36.861 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:16:36.861 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:16:36.861 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:16:36.861 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:36.861 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:16:36.861 
14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:36.861 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ up == up ]] 00:16:36.861 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:16:36.861 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:36.861 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:36.861 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:36.861 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:16:36.861 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:16:36.861 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:36.861 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:16:36.861 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:36.861 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ up == up ]] 00:16:36.861 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:16:36.861 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:36.861 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:36.861 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:36.861 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:16:36.861 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:16:36.861 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # is_hw=yes 00:16:36.861 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:16:36.861 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:16:36.861 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:16:36.861 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:36.861 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:36.862 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:36.862 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:36.862 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:36.862 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:36.862 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:36.862 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:36.862 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:36.862 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:36.862 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:36.862 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:36.862 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:36.862 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:36.862 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:36.862 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:36.862 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:36.862 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:36.862 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:36.862 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:36.862 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:36.862 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:36.862 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:36.862 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:36.862 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:16:36.862 00:16:36.862 --- 10.0.0.2 ping statistics --- 00:16:36.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:36.862 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:16:36.862 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:36.862 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:36.862 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:16:36.862 00:16:36.862 --- 10.0.0.1 ping statistics --- 00:16:36.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:36.862 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:16:36.862 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:36.862 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # return 0 00:16:36.862 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:16:36.862 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:36.862 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:16:36.862 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:16:36.862 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:36.862 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:16:36.862 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:16:36.862 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:16:36.862 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:16:36.862 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:36.862 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.862 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@505 -- # nvmfpid=1343969 00:16:36.862 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:36.862 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@506 -- # waitforlisten 1343969 00:16:36.862 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 1343969 ']' 00:16:36.862 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:36.862 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:36.862 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:36.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:36.862 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:36.862 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:37.122 [2024-11-02 14:32:28.944609] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
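(The test-bed bring-up traced above reduces to the following shell sketch, condensed from the commands visible in this log; the cvl_0_0/cvl_0_1 interface names, the 10.0.0.0/24 addresses and the namespace name are specific to this run, and the nvmf_tgt path is abbreviated to a relative path inside the spdk checkout.)
# move the target-side port into its own network namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
# address the initiator (host) side and the target (namespace) side
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP port and verify reachability in both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# load the initiator driver and start the target inside the namespace;
# the harness then waits for the RPC socket /var/tmp/spdk.sock before issuing RPCs
modprobe nvme-tcp
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &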
00:16:37.122 [2024-11-02 14:32:28.944682] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:37.122 [2024-11-02 14:32:29.013823] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:37.122 [2024-11-02 14:32:29.104246] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:37.122 [2024-11-02 14:32:29.104327] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:37.122 [2024-11-02 14:32:29.104341] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:37.122 [2024-11-02 14:32:29.104352] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:37.122 [2024-11-02 14:32:29.104361] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:37.122 [2024-11-02 14:32:29.104426] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:37.122 [2024-11-02 14:32:29.104450] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:37.122 [2024-11-02 14:32:29.104512] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:16:37.123 [2024-11-02 14:32:29.104515] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:37.381 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:37.381 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:16:37.381 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:16:37.381 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:37.381 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:37.382 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:37.382 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:16:37.382 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.382 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:37.382 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.382 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:16:37.382 "tick_rate": 2700000000, 00:16:37.382 "poll_groups": [ 00:16:37.382 { 00:16:37.382 "name": "nvmf_tgt_poll_group_000", 00:16:37.382 "admin_qpairs": 0, 00:16:37.382 "io_qpairs": 0, 00:16:37.382 "current_admin_qpairs": 0, 00:16:37.382 "current_io_qpairs": 0, 00:16:37.382 "pending_bdev_io": 0, 00:16:37.382 "completed_nvme_io": 0, 00:16:37.382 "transports": [] 00:16:37.382 }, 00:16:37.382 { 00:16:37.382 "name": "nvmf_tgt_poll_group_001", 00:16:37.382 "admin_qpairs": 0, 00:16:37.382 "io_qpairs": 0, 00:16:37.382 "current_admin_qpairs": 0, 00:16:37.382 "current_io_qpairs": 0, 00:16:37.382 "pending_bdev_io": 0, 00:16:37.382 "completed_nvme_io": 0, 00:16:37.382 "transports": [] 00:16:37.382 }, 00:16:37.382 { 00:16:37.382 "name": "nvmf_tgt_poll_group_002", 00:16:37.382 "admin_qpairs": 0, 00:16:37.382 "io_qpairs": 0, 00:16:37.382 
"current_admin_qpairs": 0, 00:16:37.382 "current_io_qpairs": 0, 00:16:37.382 "pending_bdev_io": 0, 00:16:37.382 "completed_nvme_io": 0, 00:16:37.382 "transports": [] 00:16:37.382 }, 00:16:37.382 { 00:16:37.382 "name": "nvmf_tgt_poll_group_003", 00:16:37.382 "admin_qpairs": 0, 00:16:37.382 "io_qpairs": 0, 00:16:37.382 "current_admin_qpairs": 0, 00:16:37.382 "current_io_qpairs": 0, 00:16:37.382 "pending_bdev_io": 0, 00:16:37.382 "completed_nvme_io": 0, 00:16:37.382 "transports": [] 00:16:37.382 } 00:16:37.382 ] 00:16:37.382 }' 00:16:37.382 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:16:37.382 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:16:37.382 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:16:37.382 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:16:37.382 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:16:37.382 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:16:37.382 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:16:37.382 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:37.382 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.382 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:37.382 [2024-11-02 14:32:29.363661] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:37.382 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.382 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:16:37.382 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.382 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:37.382 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.382 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:16:37.382 "tick_rate": 2700000000, 00:16:37.382 "poll_groups": [ 00:16:37.382 { 00:16:37.382 "name": "nvmf_tgt_poll_group_000", 00:16:37.382 "admin_qpairs": 0, 00:16:37.382 "io_qpairs": 0, 00:16:37.382 "current_admin_qpairs": 0, 00:16:37.382 "current_io_qpairs": 0, 00:16:37.382 "pending_bdev_io": 0, 00:16:37.382 "completed_nvme_io": 0, 00:16:37.382 "transports": [ 00:16:37.382 { 00:16:37.382 "trtype": "TCP" 00:16:37.382 } 00:16:37.382 ] 00:16:37.382 }, 00:16:37.382 { 00:16:37.382 "name": "nvmf_tgt_poll_group_001", 00:16:37.382 "admin_qpairs": 0, 00:16:37.382 "io_qpairs": 0, 00:16:37.382 "current_admin_qpairs": 0, 00:16:37.382 "current_io_qpairs": 0, 00:16:37.382 "pending_bdev_io": 0, 00:16:37.382 "completed_nvme_io": 0, 00:16:37.382 "transports": [ 00:16:37.382 { 00:16:37.382 "trtype": "TCP" 00:16:37.382 } 00:16:37.382 ] 00:16:37.382 }, 00:16:37.382 { 00:16:37.382 "name": "nvmf_tgt_poll_group_002", 00:16:37.382 "admin_qpairs": 0, 00:16:37.382 "io_qpairs": 0, 00:16:37.382 "current_admin_qpairs": 0, 00:16:37.382 "current_io_qpairs": 0, 00:16:37.382 "pending_bdev_io": 0, 00:16:37.382 "completed_nvme_io": 0, 00:16:37.382 "transports": [ 00:16:37.382 { 00:16:37.382 "trtype": "TCP" 
00:16:37.382 } 00:16:37.382 ] 00:16:37.382 }, 00:16:37.382 { 00:16:37.382 "name": "nvmf_tgt_poll_group_003", 00:16:37.382 "admin_qpairs": 0, 00:16:37.382 "io_qpairs": 0, 00:16:37.382 "current_admin_qpairs": 0, 00:16:37.382 "current_io_qpairs": 0, 00:16:37.382 "pending_bdev_io": 0, 00:16:37.382 "completed_nvme_io": 0, 00:16:37.382 "transports": [ 00:16:37.382 { 00:16:37.382 "trtype": "TCP" 00:16:37.382 } 00:16:37.382 ] 00:16:37.382 } 00:16:37.382 ] 00:16:37.382 }' 00:16:37.382 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:16:37.382 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:37.382 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:37.382 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:37.382 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:16:37.382 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:16:37.382 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:37.382 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:37.382 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:37.641 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:16:37.641 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:16:37.641 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:16:37.641 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:16:37.641 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:37.641 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.641 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:37.641 Malloc1 00:16:37.641 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.641 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:37.641 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.641 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:37.641 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.641 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:37.641 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.641 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:37.641 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.641 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:16:37.641 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.641 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:37.641 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.641 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:37.641 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.641 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:37.641 [2024-11-02 14:32:29.525699] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:37.641 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.641 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:16:37.641 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:16:37.641 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:16:37.641 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:16:37.641 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:37.641 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:16:37.641 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:37.641 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:16:37.641 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:37.641 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:16:37.641 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:16:37.641 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:16:37.641 [2024-11-02 14:32:29.548403] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:16:37.641 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:37.641 could not add new controller: failed to write to nvme-fabrics device 00:16:37.641 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:16:37.641 14:32:29 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:37.641 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:37.641 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:37.641 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:37.641 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.641 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:37.641 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.641 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:38.577 14:32:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:16:38.577 14:32:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:38.577 14:32:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:38.577 14:32:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:38.577 14:32:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:40.483 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:40.483 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:40.483 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:40.483 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:40.483 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:40.483 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:40.483 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:40.483 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:40.483 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:40.483 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:40.483 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:40.483 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:40.483 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:40.483 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:40.483 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:40.483 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:40.483 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.483 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.483 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.483 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:40.483 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:16:40.483 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:40.483 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:16:40.483 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:40.483 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:16:40.483 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:40.483 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:16:40.483 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:40.483 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:16:40.483 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:16:40.483 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:40.483 [2024-11-02 14:32:32.397738] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:16:40.483 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:40.483 could not add new controller: failed to write to nvme-fabrics device 00:16:40.483 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:16:40.483 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:40.483 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:40.483 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:40.483 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:16:40.483 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.483 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.483 
14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.483 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:41.052 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:16:41.052 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:41.052 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:41.052 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:41.052 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:43.587 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:43.587 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:43.587 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:43.587 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:43.587 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:43.587 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:43.587 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:43.587 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:43.587 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:43.587 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:43.587 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:43.587 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:43.587 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:43.587 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:43.587 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:43.587 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:43.587 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.587 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.587 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.587 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:16:43.587 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:43.587 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:43.587 
14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.587 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.587 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.587 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:43.587 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.587 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.587 [2024-11-02 14:32:35.210281] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:43.587 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.587 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:43.587 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.587 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.587 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.587 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:43.587 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.587 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.587 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.587 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:44.154 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:44.154 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:44.154 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:44.154 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:44.154 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:46.062 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:46.062 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:46.062 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:46.062 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:46.062 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:46.062 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:46.062 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:46.062 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:46.062 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:46.062 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:46.062 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:46.062 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:46.062 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:46.062 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:46.062 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:46.062 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:46.062 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.062 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.062 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.062 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:46.062 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.062 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.062 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.062 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:46.062 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:46.062 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.062 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.062 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.062 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:46.062 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.062 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.062 [2024-11-02 14:32:38.074174] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:46.062 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.062 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:46.062 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.062 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.062 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.062 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:46.062 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.062 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.062 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.062 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:47.001 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:47.001 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:47.001 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:47.001 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:47.001 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:48.905 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:48.905 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:48.905 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:48.905 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:48.905 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:48.905 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:48.905 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:48.905 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:48.905 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:48.905 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:48.905 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:48.905 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:48.905 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:48.905 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:48.905 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:48.905 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:48.905 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.905 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:48.905 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.905 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:48.905 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.905 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:48.905 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.905 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:48.905 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:48.905 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.905 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:48.905 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.905 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:48.905 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.905 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:48.905 [2024-11-02 14:32:40.939884] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:48.905 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.905 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:48.905 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.905 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:48.905 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.905 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:48.905 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.905 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.163 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.163 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:49.730 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:49.730 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:49.730 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:49.730 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:49.730 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:51.633 
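(Each of the five loop passes traced here repeats the same cycle; one iteration is condensed below from the commands in this log. rpc_cmd is the autotest wrapper for SPDK's JSON-RPC client, and the host NQN/ID values are the ones printed above.)
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --hostid=5b23e107-7094-e311-b1cb-001e67a97d55
waitforserial SPDKISFASTANDAWESOME              # wait until the namespace shows up as a block device
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
waitforserial_disconnect SPDKISFASTANDAWESOME   # wait until it is gone again
rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1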
14:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:51.633 14:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:51.633 14:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:51.633 14:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:51.633 14:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:51.633 14:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:51.633 14:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:51.633 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:51.633 14:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:51.633 14:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:51.633 14:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:51.633 14:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:51.633 14:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:51.633 14:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:51.633 14:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:51.633 14:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:51.633 14:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.633 14:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:51.633 14:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.633 14:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:51.633 14:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.633 14:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:51.633 14:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.633 14:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:51.633 14:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:51.633 14:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.633 14:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:51.633 14:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.633 14:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:51.633 14:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:51.633 14:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:51.633 [2024-11-02 14:32:43.686090] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:51.892 14:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.892 14:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:51.892 14:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.892 14:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:51.892 14:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.892 14:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:51.892 14:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.892 14:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:51.892 14:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.892 14:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:52.462 14:32:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:52.462 14:32:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:52.462 14:32:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:52.462 14:32:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:52.462 14:32:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:54.370 14:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:54.370 14:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:54.370 14:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:54.370 14:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:54.370 14:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:54.370 14:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:54.370 14:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:54.629 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:54.629 14:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:54.629 14:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:54.629 14:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:54.629 14:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 
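(The waitforserial / waitforserial_disconnect helpers exercised above simply poll lsblk for the subsystem serial number; below is a simplified sketch of that polling logic, not the exact helper from common/autotest_common.sh.)
waitforserial() {
    # succeed once a block device advertising $1 as its serial appears
    local serial=$1 i=0
    while (( i++ <= 15 )); do
        sleep 2
        (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )) && return 0
    done
    return 1
}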
00:16:54.629 14:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:54.629 14:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:54.629 14:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:54.629 14:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:54.629 14:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.629 14:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.629 14:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.629 14:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:54.629 14:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.629 14:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.629 14:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.629 14:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:54.629 14:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:54.629 14:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.629 14:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.629 14:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.629 14:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:54.629 14:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.629 14:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.629 [2024-11-02 14:32:46.499986] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:54.629 14:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.629 14:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:54.629 14:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.629 14:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.629 14:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.629 14:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:54.629 14:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.629 14:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.629 14:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.629 14:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:55.195 14:32:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:55.195 14:32:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:55.195 14:32:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:55.195 14:32:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:55.195 14:32:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:57.101 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:57.101 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:57.101 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:57.101 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:57.101 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:57.101 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:57.101 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:57.362 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:57.362 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:57.362 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:57.362 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:57.362 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:57.362 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:57.362 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:57.362 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:57.362 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:57.362 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.362 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.362 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.362 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:57.362 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.362 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.362 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.362 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:16:57.362 
14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:57.362 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:57.362 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.362 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.362 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.362 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:57.362 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.362 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.362 [2024-11-02 14:32:49.257879] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:57.362 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.362 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:57.362 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.362 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.362 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.362 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:57.362 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.362 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.362 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.362 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:57.362 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.362 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.362 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.362 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:57.362 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.362 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.362 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.362 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:57.362 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:57.362 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.362 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:16:57.362 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.362 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:57.362 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.362 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.362 [2024-11-02 14:32:49.305934] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:57.362 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.362 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:57.362 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.362 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.362 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.362 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:57.362 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.362 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.362 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.362 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:57.362 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.362 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.362 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.362 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:57.362 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.362 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.362 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.362 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:57.362 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:57.362 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.362 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.363 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.363 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:57.363 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.363 
14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.363 [2024-11-02 14:32:49.354100] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:57.363 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.363 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:57.363 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.363 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.363 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.363 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:57.363 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.363 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.363 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.363 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:57.363 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.363 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.363 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.363 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:57.363 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.363 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.363 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.363 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:57.363 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:57.363 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.363 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.363 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.363 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:57.363 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.363 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.363 [2024-11-02 14:32:49.402293] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:57.363 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.363 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:57.363 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.363 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.363 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.363 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:57.363 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.363 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.622 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.622 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:57.622 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.622 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.622 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.622 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:57.622 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.622 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.622 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.622 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:57.622 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:57.622 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.622 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.622 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.622 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:57.622 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.622 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.622 [2024-11-02 14:32:49.450480] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:57.622 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.623 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:57.623 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.623 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.623 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.623 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:57.623 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.623 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.623 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.623 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:57.623 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.623 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.623 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.623 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:57.623 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.623 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.623 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.623 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:16:57.623 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.623 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.623 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.623 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:16:57.623 "tick_rate": 2700000000, 00:16:57.623 "poll_groups": [ 00:16:57.623 { 00:16:57.623 "name": "nvmf_tgt_poll_group_000", 00:16:57.623 "admin_qpairs": 2, 00:16:57.623 "io_qpairs": 84, 00:16:57.623 "current_admin_qpairs": 0, 00:16:57.623 "current_io_qpairs": 0, 00:16:57.623 "pending_bdev_io": 0, 00:16:57.623 "completed_nvme_io": 232, 00:16:57.623 "transports": [ 00:16:57.623 { 00:16:57.623 "trtype": "TCP" 00:16:57.623 } 00:16:57.623 ] 00:16:57.623 }, 00:16:57.623 { 00:16:57.623 "name": "nvmf_tgt_poll_group_001", 00:16:57.623 "admin_qpairs": 2, 00:16:57.623 "io_qpairs": 84, 00:16:57.623 "current_admin_qpairs": 0, 00:16:57.623 "current_io_qpairs": 0, 00:16:57.623 "pending_bdev_io": 0, 00:16:57.623 "completed_nvme_io": 136, 00:16:57.623 "transports": [ 00:16:57.623 { 00:16:57.623 "trtype": "TCP" 00:16:57.623 } 00:16:57.623 ] 00:16:57.623 }, 00:16:57.623 { 00:16:57.623 "name": "nvmf_tgt_poll_group_002", 00:16:57.623 "admin_qpairs": 1, 00:16:57.623 "io_qpairs": 84, 00:16:57.623 "current_admin_qpairs": 0, 00:16:57.623 "current_io_qpairs": 0, 00:16:57.623 "pending_bdev_io": 0, 00:16:57.623 "completed_nvme_io": 184, 00:16:57.623 "transports": [ 00:16:57.623 { 00:16:57.623 "trtype": "TCP" 00:16:57.623 } 00:16:57.623 ] 00:16:57.623 }, 00:16:57.623 { 00:16:57.623 "name": "nvmf_tgt_poll_group_003", 00:16:57.623 "admin_qpairs": 2, 00:16:57.623 "io_qpairs": 84, 00:16:57.623 "current_admin_qpairs": 0, 00:16:57.623 "current_io_qpairs": 0, 00:16:57.623 "pending_bdev_io": 0, 00:16:57.623 "completed_nvme_io": 134, 00:16:57.623 "transports": [ 00:16:57.623 { 00:16:57.623 "trtype": "TCP" 00:16:57.623 } 00:16:57.623 ] 00:16:57.623 } 00:16:57.623 ] 00:16:57.623 }' 00:16:57.623 14:32:49 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:16:57.623 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:57.623 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:57.623 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:57.623 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:16:57.623 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:16:57.623 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:57.623 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:57.623 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:57.623 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:16:57.623 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:16:57.623 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:16:57.623 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:16:57.623 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # nvmfcleanup 00:16:57.623 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:16:57.623 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:57.623 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:16:57.623 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:57.623 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:57.623 rmmod nvme_tcp 00:16:57.623 rmmod nvme_fabrics 00:16:57.623 rmmod nvme_keyring 00:16:57.623 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:57.623 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:16:57.623 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:16:57.623 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@513 -- # '[' -n 1343969 ']' 00:16:57.623 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@514 -- # killprocess 1343969 00:16:57.623 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 1343969 ']' 00:16:57.623 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 1343969 00:16:57.623 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:16:57.623 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:57.623 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1343969 00:16:57.623 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:57.623 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:57.623 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
1343969' 00:16:57.623 killing process with pid 1343969 00:16:57.623 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 1343969 00:16:57.623 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 1343969 00:16:57.882 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:16:57.882 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:16:57.882 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:16:57.882 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:16:57.882 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@787 -- # iptables-save 00:16:57.882 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:16:57.882 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@787 -- # iptables-restore 00:16:57.882 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:57.882 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:57.882 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:57.882 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:57.882 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:00.419 14:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:00.419 00:17:00.419 real 0m25.529s 00:17:00.419 user 1m22.577s 00:17:00.419 sys 0m4.285s 00:17:00.419 14:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:00.419 14:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.419 ************************************ 00:17:00.419 END TEST nvmf_rpc 00:17:00.419 ************************************ 00:17:00.419 14:32:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:00.419 14:32:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:00.419 14:32:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:00.419 14:32:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:00.419 ************************************ 00:17:00.419 START TEST nvmf_invalid 00:17:00.419 ************************************ 00:17:00.419 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:00.419 * Looking for test storage... 
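Before that teardown, rpc.sh sanity-checks the aggregate counters reported by nvmf_get_stats: the jsum helper pulls one field per poll group with jq and sums the column with awk, which is how the 7 admin qpairs and 336 I/O qpairs above are computed. A rough standalone equivalent of that check (the variable name is illustrative):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Sum the io_qpairs counter across all poll groups and require that at least one was created.
  total_io_qpairs=$($rpc nvmf_get_stats | jq '.poll_groups[].io_qpairs' | awk '{s+=$1} END {print s}')
  (( total_io_qpairs > 0 )) && echo "saw $total_io_qpairs I/O qpairs across all poll groups"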
00:17:00.419 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:00.419 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:00.419 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # lcov --version 00:17:00.419 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:00.419 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:00.419 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:00.419 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:00.419 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:00.419 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:17:00.419 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:17:00.419 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:17:00.419 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:17:00.419 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:17:00.419 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:17:00.419 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:17:00.419 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:00.419 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:17:00.419 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:17:00.419 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:00.419 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:00.419 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:17:00.419 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:17:00.419 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:00.419 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:17:00.419 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:17:00.419 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:17:00.419 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:17:00.419 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:00.419 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:17:00.419 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:17:00.419 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:00.419 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:00.419 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:17:00.419 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:00.419 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:00.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.419 --rc genhtml_branch_coverage=1 00:17:00.419 --rc genhtml_function_coverage=1 00:17:00.419 --rc genhtml_legend=1 00:17:00.419 --rc geninfo_all_blocks=1 00:17:00.419 --rc geninfo_unexecuted_blocks=1 00:17:00.419 00:17:00.419 ' 00:17:00.419 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:00.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.419 --rc genhtml_branch_coverage=1 00:17:00.419 --rc genhtml_function_coverage=1 00:17:00.419 --rc genhtml_legend=1 00:17:00.419 --rc geninfo_all_blocks=1 00:17:00.419 --rc geninfo_unexecuted_blocks=1 00:17:00.419 00:17:00.419 ' 00:17:00.419 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:00.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.419 --rc genhtml_branch_coverage=1 00:17:00.419 --rc genhtml_function_coverage=1 00:17:00.419 --rc genhtml_legend=1 00:17:00.419 --rc geninfo_all_blocks=1 00:17:00.419 --rc geninfo_unexecuted_blocks=1 00:17:00.419 00:17:00.419 ' 00:17:00.419 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:00.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.420 --rc genhtml_branch_coverage=1 00:17:00.420 --rc genhtml_function_coverage=1 00:17:00.420 --rc genhtml_legend=1 00:17:00.420 --rc geninfo_all_blocks=1 00:17:00.420 --rc geninfo_unexecuted_blocks=1 00:17:00.420 00:17:00.420 ' 00:17:00.420 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:00.420 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:17:00.420 14:32:52 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:00.420 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:00.420 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:00.420 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:00.420 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:00.420 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:00.420 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:00.420 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:00.420 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:00.420 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:00.420 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:00.420 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:00.420 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:00.420 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:00.420 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:00.420 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:00.420 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:00.420 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:17:00.420 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:00.420 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:00.420 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:00.420 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.420 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.420 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.420 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:17:00.420 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.420 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:17:00.420 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:00.420 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:00.420 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:00.420 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:00.420 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:00.420 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:00.420 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:00.420 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:00.420 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:00.420 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:00.420 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:17:00.420 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:00.420 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:17:00.420 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:17:00.420 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:17:00.420 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:17:00.420 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:17:00.420 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:00.420 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@472 -- # prepare_net_devs 00:17:00.420 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@434 -- # local -g is_hw=no 00:17:00.420 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@436 -- # remove_spdk_ns 00:17:00.420 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:00.420 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:00.420 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:00.420 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:17:00.420 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:17:00.420 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:17:00.420 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:02.324 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:02.324 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:17:02.324 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:02.324 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:02.324 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:02.324 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:02.324 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:02.324 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:17:02.324 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:02.324 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:17:02.324 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:17:02.324 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:17:02.324 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:17:02.324 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:17:02.324 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:17:02.324 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:02.324 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:02.324 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:02.324 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:02.324 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:02.324 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:02.324 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:02.324 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:02.325 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:02.325 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
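The checks around this point classify each discovered port by PCI vendor:device ID; both 0000:0a:00.0 and 0000:0a:00.1 report 0x8086/0x159b, so they are treated as Intel E810 (ice) ports and their net devices cvl_0_0 and cvl_0_1 become the test interfaces. The real common.sh walks a cached PCI map, but the same classification can be illustrated directly with lspci and sysfs (illustration only, not the harness code):

  # List ports matching the E810 device IDs used above and print their kernel net device names.
  lspci -Dnn | grep -Ei '8086:(1592|159b)' | while read -r addr _; do
    echo "candidate test port: $addr -> $(ls /sys/bus/pci/devices/$addr/net 2>/dev/null)"
  done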
00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ up == up ]] 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:02.325 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ up == up ]] 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:02.325 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # is_hw=yes 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 
-- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:02.325 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:02.325 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.159 ms 00:17:02.325 00:17:02.325 --- 10.0.0.2 ping statistics --- 00:17:02.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.325 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:02.325 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:02.325 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:17:02.325 00:17:02.325 --- 10.0.0.1 ping statistics --- 00:17:02.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.325 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # return 0 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@505 -- # nvmfpid=1348477 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@506 -- # waitforlisten 1348477 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 1348477 ']' 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:02.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:02.325 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:02.325 [2024-11-02 14:32:54.362682] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
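With cvl_0_0 moved into the cvl_0_0_ns_spdk namespace as the target side, cvl_0_1 left in the root namespace as the initiator side, and connectivity confirmed by the two pings, nvmfappstart launches nvmf_tgt inside the namespace and waits for it to answer on /var/tmp/spdk.sock. A condensed sketch of that setup using the addresses and flags from this run (waitforlisten is approximated here by polling the RPC socket; illustrative only):

  # Network plumbing: one port per side, target port isolated in a network namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

  # Start the target inside the namespace and wait until its RPC socket responds.
  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0xF &
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  until $rpc spdk_get_version >/dev/null 2>&1; do sleep 1; done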
00:17:02.325 [2024-11-02 14:32:54.362761] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:02.584 [2024-11-02 14:32:54.437736] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:02.584 [2024-11-02 14:32:54.533581] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:02.585 [2024-11-02 14:32:54.533640] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:02.585 [2024-11-02 14:32:54.533657] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:02.585 [2024-11-02 14:32:54.533670] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:02.585 [2024-11-02 14:32:54.533682] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:02.585 [2024-11-02 14:32:54.533740] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:02.585 [2024-11-02 14:32:54.533770] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:02.585 [2024-11-02 14:32:54.533825] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:17:02.585 [2024-11-02 14:32:54.533828] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:02.843 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:02.843 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:17:02.843 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:17:02.843 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:02.843 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:02.843 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:02.843 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:02.843 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode16243 00:17:03.101 [2024-11-02 14:32:55.000694] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:17:03.101 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:17:03.101 { 00:17:03.101 "nqn": "nqn.2016-06.io.spdk:cnode16243", 00:17:03.101 "tgt_name": "foobar", 00:17:03.101 "method": "nvmf_create_subsystem", 00:17:03.101 "req_id": 1 00:17:03.101 } 00:17:03.101 Got JSON-RPC error response 00:17:03.101 response: 00:17:03.101 { 00:17:03.101 "code": -32603, 00:17:03.101 "message": "Unable to find target foobar" 00:17:03.101 }' 00:17:03.101 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:17:03.101 { 00:17:03.101 "nqn": "nqn.2016-06.io.spdk:cnode16243", 00:17:03.101 "tgt_name": "foobar", 00:17:03.101 "method": "nvmf_create_subsystem", 00:17:03.101 "req_id": 1 00:17:03.101 } 00:17:03.101 Got JSON-RPC error response 00:17:03.101 
response: 00:17:03.101 { 00:17:03.101 "code": -32603, 00:17:03.101 "message": "Unable to find target foobar" 00:17:03.101 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:17:03.101 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:17:03.101 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode15929 00:17:03.359 [2024-11-02 14:32:55.329763] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15929: invalid serial number 'SPDKISFASTANDAWESOME' 00:17:03.359 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:17:03.359 { 00:17:03.359 "nqn": "nqn.2016-06.io.spdk:cnode15929", 00:17:03.359 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:03.359 "method": "nvmf_create_subsystem", 00:17:03.359 "req_id": 1 00:17:03.359 } 00:17:03.359 Got JSON-RPC error response 00:17:03.359 response: 00:17:03.359 { 00:17:03.359 "code": -32602, 00:17:03.359 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:03.359 }' 00:17:03.359 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:17:03.359 { 00:17:03.359 "nqn": "nqn.2016-06.io.spdk:cnode15929", 00:17:03.359 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:03.359 "method": "nvmf_create_subsystem", 00:17:03.359 "req_id": 1 00:17:03.359 } 00:17:03.359 Got JSON-RPC error response 00:17:03.359 response: 00:17:03.359 { 00:17:03.359 "code": -32602, 00:17:03.359 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:03.359 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:03.359 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:17:03.359 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode20221 00:17:03.618 [2024-11-02 14:32:55.606723] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20221: invalid model number 'SPDK_Controller' 00:17:03.618 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:17:03.618 { 00:17:03.618 "nqn": "nqn.2016-06.io.spdk:cnode20221", 00:17:03.618 "model_number": "SPDK_Controller\u001f", 00:17:03.618 "method": "nvmf_create_subsystem", 00:17:03.618 "req_id": 1 00:17:03.618 } 00:17:03.618 Got JSON-RPC error response 00:17:03.618 response: 00:17:03.618 { 00:17:03.618 "code": -32602, 00:17:03.618 "message": "Invalid MN SPDK_Controller\u001f" 00:17:03.618 }' 00:17:03.618 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:17:03.618 { 00:17:03.618 "nqn": "nqn.2016-06.io.spdk:cnode20221", 00:17:03.618 "model_number": "SPDK_Controller\u001f", 00:17:03.618 "method": "nvmf_create_subsystem", 00:17:03.618 "req_id": 1 00:17:03.618 } 00:17:03.618 Got JSON-RPC error response 00:17:03.618 response: 00:17:03.618 { 00:17:03.618 "code": -32602, 00:17:03.618 "message": "Invalid MN SPDK_Controller\u001f" 00:17:03.618 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:03.618 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:17:03.618 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:17:03.618 14:32:55 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:03.618 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:03.618 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:03.618 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:03.618 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.618 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:17:03.618 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:17:03.618 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:17:03.618 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.618 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.618 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:17:03.619 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:17:03.619 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:17:03.619 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.619 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.619 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:17:03.619 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:17:03.619 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
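The wall of printf %x / echo -e / string+= trace lines that starts here (and continues below until the serial number is complete) is gen_random_s at work: invalid.sh assembles a random string one character at a time, drawing each character's ASCII code from the chars array of codes 32-127 shown above, so that nvmf_create_subsystem can be fed arbitrary printable input of a chosen length. A condensed sketch of the idea, assuming bash's RANDOM is an acceptable stand-in for however the real helper picks each code:

  gen_random_s() {                       # illustration only; the real helper is the one being traced here
    local length=$1 ll code ch string=
    for (( ll = 0; ll < length; ll++ )); do
      code=$(( 32 + RANDOM % 96 ))       # one of the ASCII codes 32..127 from the chars array
      printf -v ch "\\x$(printf '%x' "$code")"   # same printf %x / \xNN trick as the trace
      string+=$ch
    done
    printf '%s\n' "$string"
  }
  gen_random_s 21                        # e.g. the 21-character serial number built here

The point of the lengths is that 21 and (later) 41 characters are one more than the 20- and 40-byte serial-number and model-number fields an NVMe controller reports, which is why the two nvmf_create_subsystem calls that consume these strings fail below with Invalid SN and Invalid MN.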
00:17:03.619 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.619 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.619 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:17:03.619 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:17:03.619 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:17:03.619 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.619 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.619 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:17:03.619 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:17:03.619 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:17:03.619 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.619 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.619 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:17:03.619 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:17:03.619 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:17:03.619 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.619 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.619 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:17:03.619 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:17:03.619 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:17:03.619 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.619 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.619 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:17:03.619 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:17:03.619 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:17:03.619 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.619 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.619 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:17:03.619 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:17:03.619 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:17:03.619 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.619 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.619 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:17:03.619 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x39' 00:17:03.619 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:17:03.619 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.619 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.619 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:17:03.619 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:17:03.619 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:17:03.619 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.619 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.619 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:17:03.619 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:17:03.619 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:17:03.619 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.619 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.619 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:17:03.619 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:17:03.619 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:17:03.619 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.619 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.619 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:17:03.877 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:17:03.877 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:17:03.877 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.877 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.877 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:17:03.877 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:17:03.877 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:17:03.877 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.877 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.877 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:17:03.877 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:17:03.877 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:17:03.877 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.877 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.877 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 
96 00:17:03.877 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:17:03.877 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:17:03.877 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.877 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.877 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:17:03.877 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:17:03.877 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:17:03.877 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.877 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.877 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:17:03.877 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:17:03.877 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:17:03.877 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.877 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.877 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:17:03.877 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:17:03.877 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:17:03.877 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.877 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.877 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:17:03.877 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:17:03.877 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:17:03.877 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.877 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.877 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ " == \- ]] 00:17:03.877 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '"_.'\''(\pOX9Z6uqCl`3U;r' 00:17:03.877 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '"_.'\''(\pOX9Z6uqCl`3U;r' nqn.2016-06.io.spdk:cnode30698 00:17:04.137 [2024-11-02 14:32:55.939820] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30698: invalid serial number '"_.'(\pOX9Z6uqCl`3U;r' 00:17:04.137 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:17:04.137 { 00:17:04.137 "nqn": "nqn.2016-06.io.spdk:cnode30698", 00:17:04.137 "serial_number": "\"_.'\''(\\pOX9Z6uqCl`3U;r", 00:17:04.137 "method": "nvmf_create_subsystem", 00:17:04.137 "req_id": 1 00:17:04.137 } 00:17:04.137 Got JSON-RPC error response 00:17:04.137 response: 
00:17:04.137 { 00:17:04.137 "code": -32602, 00:17:04.137 "message": "Invalid SN \"_.'\''(\\pOX9Z6uqCl`3U;r" 00:17:04.137 }' 00:17:04.137 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:17:04.137 { 00:17:04.137 "nqn": "nqn.2016-06.io.spdk:cnode30698", 00:17:04.137 "serial_number": "\"_.'(\\pOX9Z6uqCl`3U;r", 00:17:04.137 "method": "nvmf_create_subsystem", 00:17:04.137 "req_id": 1 00:17:04.137 } 00:17:04.137 Got JSON-RPC error response 00:17:04.137 response: 00:17:04.137 { 00:17:04.137 "code": -32602, 00:17:04.137 "message": "Invalid SN \"_.'(\\pOX9Z6uqCl`3U;r" 00:17:04.137 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:04.137 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:17:04.137 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:17:04.137 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:04.137 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:04.137 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:04.137 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:04.137 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:04.137 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:17:04.137 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:17:04.137 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:17:04.137 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:04.137 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:04.137 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:17:04.137 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:17:04.137 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:17:04.137 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:04.137 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:04.137 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:17:04.137 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:17:04.137 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:17:04.137 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:04.137 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:04.137 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- 
# printf %x 83 00:17:04.137 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:17:04.137 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:17:04.137 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:04.137 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:04.137 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:17:04.137 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:17:04.137 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:17:04.137 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:04.137 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:04.137 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:17:04.137 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:17:04.137 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:17:04.137 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:04.137 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:04.137 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:17:04.137 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:17:04.137 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:17:04.137 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:04.137 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:04.137 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:17:04.137 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:17:04.137 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:17:04.137 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:04.137 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:04.137 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:17:04.137 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:17:04.137 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:17:04.137 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:04.137 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:04.137 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:17:04.137 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:17:04.137 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:17:04.137 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:04.137 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll < length )) 00:17:04.137 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:17:04.137 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:17:04.137 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:17:04.137 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:04.137 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:04.137 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:17:04.137 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:17:04.137 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:17:04.137 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:04.137 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:04.137 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:17:04.137 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:17:04.137 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:17:04.137 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:04.137 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # 
(( ll++ )) 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 
-- # string+=h 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
echo -e '\x2f' 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:04.138 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:04.139 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
printf %x 55 00:17:04.139 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:17:04.139 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:17:04.139 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:04.139 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:04.139 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:17:04.139 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:17:04.139 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:17:04.139 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:04.139 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:04.139 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:17:04.139 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:17:04.139 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:17:04.139 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:04.139 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:04.139 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:17:04.139 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:17:04.139 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:17:04.139 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:04.139 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:04.139 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ z == \- ]] 00:17:04.139 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'zKQSHu[A%>wAr,zm+0{0I(Nh\AM8Q3/(]%Z/m7aC~' 00:17:04.139 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'zKQSHu[A%>wAr,zm+0{0I(Nh\AM8Q3/(]%Z/m7aC~' nqn.2016-06.io.spdk:cnode18594 00:17:04.397 [2024-11-02 14:32:56.325092] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18594: invalid model number 'zKQSHu[A%>wAr,zm+0{0I(Nh\AM8Q3/(]%Z/m7aC~' 00:17:04.397 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:17:04.397 { 00:17:04.397 "nqn": "nqn.2016-06.io.spdk:cnode18594", 00:17:04.397 "model_number": "zKQSHu[A%>wAr,zm+0{0I(Nh\\AM8Q3/(]%Z/m7aC~", 00:17:04.397 "method": "nvmf_create_subsystem", 00:17:04.397 "req_id": 1 00:17:04.397 } 00:17:04.397 Got JSON-RPC error response 00:17:04.397 response: 00:17:04.397 { 00:17:04.397 "code": -32602, 00:17:04.397 "message": "Invalid MN zKQSHu[A%>wAr,zm+0{0I(Nh\\AM8Q3/(]%Z/m7aC~" 00:17:04.397 }' 00:17:04.397 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:17:04.397 { 00:17:04.397 "nqn": "nqn.2016-06.io.spdk:cnode18594", 00:17:04.397 "model_number": "zKQSHu[A%>wAr,zm+0{0I(Nh\\AM8Q3/(]%Z/m7aC~", 00:17:04.397 "method": "nvmf_create_subsystem", 00:17:04.397 
"req_id": 1 00:17:04.397 } 00:17:04.397 Got JSON-RPC error response 00:17:04.397 response: 00:17:04.397 { 00:17:04.397 "code": -32602, 00:17:04.397 "message": "Invalid MN zKQSHu[A%>wAr,zm+0{0I(Nh\\AM8Q3/(]%Z/m7aC~" 00:17:04.397 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:04.397 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:17:04.655 [2024-11-02 14:32:56.598092] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:04.655 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:17:04.912 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:17:04.912 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:17:04.913 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:17:04.913 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:17:04.913 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:17:05.170 [2024-11-02 14:32:57.147878] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:17:05.170 14:32:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:17:05.170 { 00:17:05.170 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:05.170 "listen_address": { 00:17:05.170 "trtype": "tcp", 00:17:05.171 "traddr": "", 00:17:05.171 "trsvcid": "4421" 00:17:05.171 }, 00:17:05.171 "method": "nvmf_subsystem_remove_listener", 00:17:05.171 "req_id": 1 00:17:05.171 } 00:17:05.171 Got JSON-RPC error response 00:17:05.171 response: 00:17:05.171 { 00:17:05.171 "code": -32602, 00:17:05.171 "message": "Invalid parameters" 00:17:05.171 }' 00:17:05.171 14:32:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:17:05.171 { 00:17:05.171 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:05.171 "listen_address": { 00:17:05.171 "trtype": "tcp", 00:17:05.171 "traddr": "", 00:17:05.171 "trsvcid": "4421" 00:17:05.171 }, 00:17:05.171 "method": "nvmf_subsystem_remove_listener", 00:17:05.171 "req_id": 1 00:17:05.171 } 00:17:05.171 Got JSON-RPC error response 00:17:05.171 response: 00:17:05.171 { 00:17:05.171 "code": -32602, 00:17:05.171 "message": "Invalid parameters" 00:17:05.171 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:17:05.171 14:32:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28134 -i 0 00:17:05.429 [2024-11-02 14:32:57.416762] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28134: invalid cntlid range [0-65519] 00:17:05.429 14:32:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:17:05.429 { 00:17:05.429 "nqn": "nqn.2016-06.io.spdk:cnode28134", 00:17:05.429 "min_cntlid": 0, 00:17:05.429 "method": "nvmf_create_subsystem", 00:17:05.429 "req_id": 1 00:17:05.429 } 00:17:05.429 Got JSON-RPC error response 00:17:05.429 response: 00:17:05.429 { 00:17:05.429 "code": -32602, 00:17:05.429 "message": "Invalid 
cntlid range [0-65519]" 00:17:05.429 }' 00:17:05.429 14:32:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:17:05.429 { 00:17:05.429 "nqn": "nqn.2016-06.io.spdk:cnode28134", 00:17:05.429 "min_cntlid": 0, 00:17:05.429 "method": "nvmf_create_subsystem", 00:17:05.429 "req_id": 1 00:17:05.429 } 00:17:05.429 Got JSON-RPC error response 00:17:05.429 response: 00:17:05.429 { 00:17:05.429 "code": -32602, 00:17:05.429 "message": "Invalid cntlid range [0-65519]" 00:17:05.429 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:05.429 14:32:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3092 -i 65520 00:17:05.687 [2024-11-02 14:32:57.689644] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3092: invalid cntlid range [65520-65519] 00:17:05.687 14:32:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:17:05.687 { 00:17:05.687 "nqn": "nqn.2016-06.io.spdk:cnode3092", 00:17:05.687 "min_cntlid": 65520, 00:17:05.687 "method": "nvmf_create_subsystem", 00:17:05.687 "req_id": 1 00:17:05.687 } 00:17:05.687 Got JSON-RPC error response 00:17:05.687 response: 00:17:05.687 { 00:17:05.687 "code": -32602, 00:17:05.687 "message": "Invalid cntlid range [65520-65519]" 00:17:05.687 }' 00:17:05.687 14:32:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:17:05.687 { 00:17:05.687 "nqn": "nqn.2016-06.io.spdk:cnode3092", 00:17:05.687 "min_cntlid": 65520, 00:17:05.687 "method": "nvmf_create_subsystem", 00:17:05.687 "req_id": 1 00:17:05.687 } 00:17:05.687 Got JSON-RPC error response 00:17:05.687 response: 00:17:05.687 { 00:17:05.687 "code": -32602, 00:17:05.687 "message": "Invalid cntlid range [65520-65519]" 00:17:05.687 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:05.687 14:32:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17167 -I 0 00:17:05.945 [2024-11-02 14:32:57.954508] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17167: invalid cntlid range [1-0] 00:17:05.945 14:32:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:17:05.945 { 00:17:05.945 "nqn": "nqn.2016-06.io.spdk:cnode17167", 00:17:05.945 "max_cntlid": 0, 00:17:05.945 "method": "nvmf_create_subsystem", 00:17:05.945 "req_id": 1 00:17:05.945 } 00:17:05.945 Got JSON-RPC error response 00:17:05.945 response: 00:17:05.945 { 00:17:05.945 "code": -32602, 00:17:05.945 "message": "Invalid cntlid range [1-0]" 00:17:05.945 }' 00:17:05.945 14:32:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:17:05.945 { 00:17:05.945 "nqn": "nqn.2016-06.io.spdk:cnode17167", 00:17:05.945 "max_cntlid": 0, 00:17:05.945 "method": "nvmf_create_subsystem", 00:17:05.945 "req_id": 1 00:17:05.945 } 00:17:05.945 Got JSON-RPC error response 00:17:05.945 response: 00:17:05.945 { 00:17:05.945 "code": -32602, 00:17:05.945 "message": "Invalid cntlid range [1-0]" 00:17:05.945 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:05.945 14:32:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19535 -I 65520 00:17:06.204 
[2024-11-02 14:32:58.227385] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19535: invalid cntlid range [1-65520] 00:17:06.204 14:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:17:06.204 { 00:17:06.204 "nqn": "nqn.2016-06.io.spdk:cnode19535", 00:17:06.204 "max_cntlid": 65520, 00:17:06.204 "method": "nvmf_create_subsystem", 00:17:06.204 "req_id": 1 00:17:06.204 } 00:17:06.204 Got JSON-RPC error response 00:17:06.204 response: 00:17:06.204 { 00:17:06.204 "code": -32602, 00:17:06.204 "message": "Invalid cntlid range [1-65520]" 00:17:06.204 }' 00:17:06.204 14:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:17:06.204 { 00:17:06.204 "nqn": "nqn.2016-06.io.spdk:cnode19535", 00:17:06.204 "max_cntlid": 65520, 00:17:06.204 "method": "nvmf_create_subsystem", 00:17:06.204 "req_id": 1 00:17:06.204 } 00:17:06.204 Got JSON-RPC error response 00:17:06.204 response: 00:17:06.204 { 00:17:06.204 "code": -32602, 00:17:06.204 "message": "Invalid cntlid range [1-65520]" 00:17:06.204 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:06.204 14:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8914 -i 6 -I 5 00:17:06.463 [2024-11-02 14:32:58.500319] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8914: invalid cntlid range [6-5] 00:17:06.722 14:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:17:06.722 { 00:17:06.722 "nqn": "nqn.2016-06.io.spdk:cnode8914", 00:17:06.722 "min_cntlid": 6, 00:17:06.722 "max_cntlid": 5, 00:17:06.722 "method": "nvmf_create_subsystem", 00:17:06.722 "req_id": 1 00:17:06.722 } 00:17:06.722 Got JSON-RPC error response 00:17:06.722 response: 00:17:06.722 { 00:17:06.722 "code": -32602, 00:17:06.723 "message": "Invalid cntlid range [6-5]" 00:17:06.723 }' 00:17:06.723 14:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:17:06.723 { 00:17:06.723 "nqn": "nqn.2016-06.io.spdk:cnode8914", 00:17:06.723 "min_cntlid": 6, 00:17:06.723 "max_cntlid": 5, 00:17:06.723 "method": "nvmf_create_subsystem", 00:17:06.723 "req_id": 1 00:17:06.723 } 00:17:06.723 Got JSON-RPC error response 00:17:06.723 response: 00:17:06.723 { 00:17:06.723 "code": -32602, 00:17:06.723 "message": "Invalid cntlid range [6-5]" 00:17:06.723 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:06.723 14:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:17:06.723 14:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:17:06.723 { 00:17:06.723 "name": "foobar", 00:17:06.723 "method": "nvmf_delete_target", 00:17:06.723 "req_id": 1 00:17:06.723 } 00:17:06.723 Got JSON-RPC error response 00:17:06.723 response: 00:17:06.723 { 00:17:06.723 "code": -32602, 00:17:06.723 "message": "The specified target doesn'\''t exist, cannot delete it." 
00:17:06.723 }' 00:17:06.723 14:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:17:06.723 { 00:17:06.723 "name": "foobar", 00:17:06.723 "method": "nvmf_delete_target", 00:17:06.723 "req_id": 1 00:17:06.723 } 00:17:06.723 Got JSON-RPC error response 00:17:06.723 response: 00:17:06.723 { 00:17:06.723 "code": -32602, 00:17:06.723 "message": "The specified target doesn't exist, cannot delete it." 00:17:06.723 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:17:06.723 14:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:17:06.723 14:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:17:06.723 14:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # nvmfcleanup 00:17:06.723 14:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:17:06.723 14:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:06.723 14:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:17:06.723 14:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:06.723 14:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:06.723 rmmod nvme_tcp 00:17:06.723 rmmod nvme_fabrics 00:17:06.723 rmmod nvme_keyring 00:17:06.723 14:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:06.723 14:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:17:06.723 14:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:17:06.723 14:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@513 -- # '[' -n 1348477 ']' 00:17:06.723 14:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@514 -- # killprocess 1348477 00:17:06.723 14:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 1348477 ']' 00:17:06.723 14:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 1348477 00:17:06.723 14:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:17:06.723 14:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:06.723 14:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1348477 00:17:06.723 14:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:06.723 14:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:06.723 14:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1348477' 00:17:06.723 killing process with pid 1348477 00:17:06.723 14:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 1348477 00:17:06.723 14:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 1348477 00:17:06.981 14:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:17:06.981 14:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:17:06.981 14:32:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:17:06.981 14:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:17:06.981 14:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:17:06.981 14:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@787 -- # iptables-save 00:17:06.981 14:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@787 -- # iptables-restore 00:17:06.981 14:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:06.981 14:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:06.981 14:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:06.981 14:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:06.981 14:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:09.542 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:09.542 00:17:09.542 real 0m8.984s 00:17:09.542 user 0m21.679s 00:17:09.542 sys 0m2.430s 00:17:09.542 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:09.542 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:09.542 ************************************ 00:17:09.542 END TEST nvmf_invalid 00:17:09.542 ************************************ 00:17:09.542 14:33:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:09.542 14:33:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:09.542 14:33:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:09.542 14:33:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:09.542 ************************************ 00:17:09.542 START TEST nvmf_connect_stress 00:17:09.542 ************************************ 00:17:09.542 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:09.542 * Looking for test storage... 
00:17:09.542 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:09.542 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:09.542 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:17:09.542 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:09.542 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:09.542 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:09.542 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:09.542 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:09.542 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:17:09.542 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:17:09.542 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:17:09.542 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:17:09.542 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:17:09.542 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:17:09.542 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:17:09.542 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:09.542 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:17:09.542 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:17:09.542 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:09.542 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:09.542 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:17:09.542 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:17:09.542 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:09.542 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:17:09.542 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:17:09.542 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:17:09.542 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:17:09.542 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:09.542 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:17:09.542 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:17:09.542 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:09.542 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:09.542 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:17:09.542 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:09.542 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:09.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:09.542 --rc genhtml_branch_coverage=1 00:17:09.542 --rc genhtml_function_coverage=1 00:17:09.542 --rc genhtml_legend=1 00:17:09.542 --rc geninfo_all_blocks=1 00:17:09.542 --rc geninfo_unexecuted_blocks=1 00:17:09.542 00:17:09.542 ' 00:17:09.542 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:09.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:09.542 --rc genhtml_branch_coverage=1 00:17:09.542 --rc genhtml_function_coverage=1 00:17:09.542 --rc genhtml_legend=1 00:17:09.542 --rc geninfo_all_blocks=1 00:17:09.542 --rc geninfo_unexecuted_blocks=1 00:17:09.542 00:17:09.542 ' 00:17:09.542 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:09.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:09.542 --rc genhtml_branch_coverage=1 00:17:09.542 --rc genhtml_function_coverage=1 00:17:09.542 --rc genhtml_legend=1 00:17:09.542 --rc geninfo_all_blocks=1 00:17:09.542 --rc geninfo_unexecuted_blocks=1 00:17:09.542 00:17:09.542 ' 00:17:09.542 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:09.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:09.542 --rc genhtml_branch_coverage=1 00:17:09.542 --rc genhtml_function_coverage=1 00:17:09.542 --rc genhtml_legend=1 00:17:09.542 --rc geninfo_all_blocks=1 00:17:09.542 --rc geninfo_unexecuted_blocks=1 00:17:09.542 00:17:09.542 ' 00:17:09.542 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:09.542 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:17:09.542 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:09.542 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:09.542 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:09.542 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:09.542 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:09.542 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:09.542 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:09.542 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:09.542 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:09.542 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:09.543 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:09.543 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:09.543 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:09.543 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:09.543 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:09.543 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:09.543 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:09.543 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:17:09.543 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:09.543 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:09.543 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:09.543 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.543 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.543 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.543 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:17:09.543 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.543 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:17:09.543 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:09.543 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:09.543 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:09.543 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:09.543 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:09.543 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:17:09.543 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:09.543 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:09.543 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:09.543 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:09.543 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:17:09.543 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:17:09.543 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:09.543 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@472 -- # prepare_net_devs 00:17:09.543 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@434 -- # local -g is_hw=no 00:17:09.543 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@436 -- # remove_spdk_ns 00:17:09.543 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:09.543 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:09.543 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:09.543 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:17:09.543 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:17:09.543 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:17:09.543 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:11.471 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:11.471 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:17:11.471 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:11.471 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:11.471 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:11.471 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:11.471 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:11.471 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:17:11.471 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:11.471 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:17:11.471 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:17:11.471 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:17:11.471 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:17:11.471 14:33:03 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:17:11.471 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:17:11.471 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:11.471 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:11.471 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:11.471 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:11.471 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:11.471 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:11.471 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:11.471 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:11.471 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:11.471 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:11.471 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:11.471 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:17:11.471 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:17:11.471 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:17:11.471 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:17:11.471 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:17:11.471 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:17:11.471 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:11.471 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:11.471 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:11.471 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:17:11.471 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:17:11.471 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:11.471 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@365 -- # 
echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:11.472 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:11.472 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:11.472 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # (( 2 == 0 )) 
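[editor's sketch] The trace above resolves each detected e810 PCI function (0000:0a:00.0 and 0000:0a:00.1) to its kernel net device by globbing sysfs and keeping only interfaces that are up, which is how cvl_0_0 and cvl_0_1 are found. The following is a minimal standalone sketch of that lookup, not the real nvmf/common.sh helper; the function name list_pci_net_devs and the operstate check are assumptions standing in for the script's own `[[ up == up ]]` test, while the PCI addresses are taken from the log.

  # Hypothetical standalone helper mirroring the sysfs lookup seen in the trace:
  # for each PCI address, print the net interfaces the kernel bound to it.
  list_pci_net_devs() {
      local pci dev
      for pci in "$@"; do
          # Each entry under /sys/bus/pci/devices/<pci>/net/ is a net device name.
          for dev in "/sys/bus/pci/devices/$pci/net/"*; do
              [[ -e $dev ]] || continue
              # Keep only interfaces that are up (stand-in for the [[ up == up ]] check above).
              [[ $(cat "$dev/operstate" 2>/dev/null) == up ]] || continue
              echo "Found net devices under $pci: ${dev##*/}"
          done
      done
  }
  # PCI addresses as reported in the log (0x8086:0x159b, ice driver).
  list_pci_net_devs 0000:0a:00.0 0000:0a:00.1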
00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # is_hw=yes 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping 
-c 1 10.0.0.2 00:17:11.472 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:11.472 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.340 ms 00:17:11.472 00:17:11.472 --- 10.0.0.2 ping statistics --- 00:17:11.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:11.472 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:11.472 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:11.472 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:17:11.472 00:17:11.472 --- 10.0.0.1 ping statistics --- 00:17:11.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:11.472 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # return 0 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@505 -- # nvmfpid=1351226 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@506 -- # waitforlisten 1351226 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 1351226 ']' 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:11.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
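[editor's sketch] At this point the namespace plumbing has been verified with ping, nvmf_tgt has been launched inside cvl_0_0_ns_spdk, and the harness blocks in waitforlisten until the JSON-RPC socket comes up. Below is a simplified stand-in for that wait, not the repo's waitforlisten helper; the function name wait_for_rpc_sock is hypothetical, and the only inputs assumed are the target PID and the default socket path /var/tmp/spdk.sock shown in the log.

  # Simplified sketch of the wait performed before issuing RPCs (not the real helper):
  # poll until the target PID is still alive and its RPC UNIX socket exists.
  wait_for_rpc_sock() {
      local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
      for ((i = 0; i < 100; i++)); do
          kill -0 "$pid" 2>/dev/null || return 1   # target died while starting
          [[ -S $sock ]] && return 0               # RPC socket is listening
          sleep 0.1
      done
      return 1
  }
  # e.g. wait_for_rpc_sock "$nvmfpid" /var/tmp/spdk.sock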
00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:11.472 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:11.472 [2024-11-02 14:33:03.411121] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:11.472 [2024-11-02 14:33:03.411204] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:11.472 [2024-11-02 14:33:03.480793] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:11.731 [2024-11-02 14:33:03.574906] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:11.731 [2024-11-02 14:33:03.574974] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:11.731 [2024-11-02 14:33:03.574998] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:11.731 [2024-11-02 14:33:03.575019] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:11.731 [2024-11-02 14:33:03.575037] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:11.731 [2024-11-02 14:33:03.575143] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:11.731 [2024-11-02 14:33:03.575210] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:17:11.731 [2024-11-02 14:33:03.575216] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:11.731 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:11.731 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:17:11.731 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:17:11.731 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:11.731 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:11.731 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:11.731 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:11.731 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.731 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:11.731 [2024-11-02 14:33:03.727987] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:11.731 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.731 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:11.731 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.731 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:11.731 
14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.731 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:11.731 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.731 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:11.731 [2024-11-02 14:33:03.763448] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:11.731 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.731 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:11.731 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.731 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:11.731 NULL1 00:17:11.731 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.731 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1351259 00:17:11.731 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:11.731 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:11.731 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:17:11.731 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:17:11.731 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:11.731 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:11.731 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:11.731 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:11.991 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:11.991 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:11.991 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:11.991 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:11.991 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:11.991 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:11.991 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:11.991 14:33:03 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:11.991 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:11.991 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:11.991 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:11.991 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:11.991 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:11.991 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:11.991 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:11.991 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:11.991 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:11.991 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:11.991 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:11.991 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:11.991 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:11.991 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:11.991 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:11.991 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:11.991 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:11.991 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:11.991 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:11.991 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:11.991 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:11.991 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:11.991 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:11.991 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:11.991 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:11.991 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:11.991 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:11.991 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:11.991 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1351259 00:17:11.991 14:33:03 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:11.991 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.991 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:12.251 14:33:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.251 14:33:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1351259 00:17:12.251 14:33:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:12.251 14:33:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.251 14:33:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:12.510 14:33:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.510 14:33:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1351259 00:17:12.510 14:33:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:12.510 14:33:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.510 14:33:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:12.771 14:33:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.771 14:33:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1351259 00:17:12.771 14:33:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:12.771 14:33:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.771 14:33:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:13.339 14:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.339 14:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1351259 00:17:13.339 14:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:13.339 14:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.339 14:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:13.598 14:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.598 14:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1351259 00:17:13.598 14:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:13.598 14:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.598 14:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:13.858 14:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.858 14:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1351259 00:17:13.858 14:33:05 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:13.858 14:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.858 14:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:14.117 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.117 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1351259 00:17:14.117 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:14.117 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.117 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:14.375 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.375 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1351259 00:17:14.375 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:14.375 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.375 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:14.943 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.943 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1351259 00:17:14.943 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:14.943 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.943 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:15.202 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.202 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1351259 00:17:15.202 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:15.202 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.202 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:15.461 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.461 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1351259 00:17:15.461 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:15.461 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.461 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:15.718 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.719 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1351259 00:17:15.719 14:33:07 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:15.719 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.719 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:15.976 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.976 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1351259 00:17:15.976 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:15.976 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.976 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:16.544 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.545 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1351259 00:17:16.545 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:16.545 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.545 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:16.803 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.803 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1351259 00:17:16.803 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:16.803 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.803 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:17.061 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.061 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1351259 00:17:17.061 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:17.061 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.061 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:17.319 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.319 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1351259 00:17:17.319 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:17.319 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.319 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:17.577 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.577 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1351259 00:17:17.577 14:33:09 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:17.577 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.577 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:18.146 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.146 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1351259 00:17:18.146 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:18.146 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.146 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:18.405 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.405 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1351259 00:17:18.405 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:18.405 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.405 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:18.663 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.663 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1351259 00:17:18.663 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:18.663 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.663 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:18.921 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.921 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1351259 00:17:18.921 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:18.921 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.921 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:19.180 14:33:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.180 14:33:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1351259 00:17:19.180 14:33:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:19.180 14:33:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.180 14:33:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:19.749 14:33:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.749 14:33:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1351259 00:17:19.749 14:33:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:19.749 14:33:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.749 14:33:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:20.009 14:33:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.009 14:33:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1351259 00:17:20.009 14:33:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:20.009 14:33:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.009 14:33:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:20.268 14:33:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.268 14:33:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1351259 00:17:20.268 14:33:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:20.268 14:33:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.268 14:33:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:20.526 14:33:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.526 14:33:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1351259 00:17:20.526 14:33:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:20.526 14:33:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.526 14:33:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:20.784 14:33:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.784 14:33:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1351259 00:17:20.784 14:33:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:20.784 14:33:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.784 14:33:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:21.353 14:33:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.353 14:33:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1351259 00:17:21.353 14:33:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:21.353 14:33:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.353 14:33:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:21.613 14:33:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.613 14:33:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1351259 00:17:21.613 14:33:13 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:21.613 14:33:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.613 14:33:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:21.872 14:33:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.872 14:33:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1351259 00:17:21.872 14:33:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:21.872 14:33:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.872 14:33:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:21.872 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:22.131 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.131 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1351259 00:17:22.131 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1351259) - No such process 00:17:22.131 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1351259 00:17:22.131 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:22.131 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:22.131 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:17:22.131 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # nvmfcleanup 00:17:22.131 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:17:22.131 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:22.131 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:17:22.131 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:22.131 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:22.131 rmmod nvme_tcp 00:17:22.131 rmmod nvme_fabrics 00:17:22.131 rmmod nvme_keyring 00:17:22.131 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:22.131 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:17:22.131 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:17:22.131 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@513 -- # '[' -n 1351226 ']' 00:17:22.131 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@514 -- # killprocess 1351226 00:17:22.131 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 1351226 ']' 00:17:22.131 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 1351226 00:17:22.131 14:33:14 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:17:22.131 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:22.390 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1351226 00:17:22.390 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:22.390 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:22.390 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1351226' 00:17:22.390 killing process with pid 1351226 00:17:22.390 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 1351226 00:17:22.390 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 1351226 00:17:22.651 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:17:22.651 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:17:22.651 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:17:22.651 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:17:22.651 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@787 -- # iptables-save 00:17:22.651 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:17:22.651 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@787 -- # iptables-restore 00:17:22.651 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:22.651 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:22.651 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:22.651 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:22.651 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:24.556 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:24.556 00:17:24.556 real 0m15.462s 00:17:24.556 user 0m38.646s 00:17:24.556 sys 0m5.868s 00:17:24.556 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:24.556 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:24.556 ************************************ 00:17:24.556 END TEST nvmf_connect_stress 00:17:24.556 ************************************ 00:17:24.556 14:33:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:24.556 14:33:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:24.556 14:33:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:24.556 14:33:16 
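The loop that dominates the connect_stress output above is a watchdog: while the background stress process (pid 1351259 in this run) is still alive, the harness keeps probing it with kill -0 and issuing an RPC against the target, and once kill -0 reports "No such process" it waits for the job, removes its rpc.txt scratch file, and tears the target down. A minimal sketch of that polling pattern follows; the pid and the wait/rm steps are taken from the log, while the rpc.py socket path and the specific RPC (nvmf_get_subsystems) are stand-ins for whatever the harness's rpc_cmd wrapper actually issues.

# Liveness-poll sketch; only the pid and cleanup steps are from this run, the rest is illustrative.
STRESS_PID=1351259
RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"  # socket path assumed

while kill -0 "$STRESS_PID" 2>/dev/null; do   # kill -0 only tests existence, it sends no signal
    $RPC nvmf_get_subsystems > /dev/null      # keep the target busy with RPCs while the stress job runs
    sleep 1
done

wait "$STRESS_PID" 2>/dev/null || true        # reap the finished job ("No such process" in the log)
rm -f rpc.txt                                 # scratch file the script removes at connect_stress.sh line 39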
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:24.556 ************************************ 00:17:24.556 START TEST nvmf_fused_ordering 00:17:24.556 ************************************ 00:17:24.556 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:24.814 * Looking for test storage... 00:17:24.814 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:24.814 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:24.814 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # lcov --version 00:17:24.814 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:24.814 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:24.814 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:24.814 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:24.814 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:24.814 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:17:24.814 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:17:24.814 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:17:24.814 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:17:24.814 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:17:24.814 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:17:24.814 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:17:24.814 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:24.814 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:17:24.814 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:17:24.814 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:24.814 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:24.814 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:17:24.814 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:17:24.814 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:24.814 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:17:24.814 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:17:24.814 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:17:24.814 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:17:24.814 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:24.814 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:17:24.814 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:17:24.814 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:24.814 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:24.814 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:17:24.814 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:24.814 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:24.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:24.814 --rc genhtml_branch_coverage=1 00:17:24.814 --rc genhtml_function_coverage=1 00:17:24.814 --rc genhtml_legend=1 00:17:24.814 --rc geninfo_all_blocks=1 00:17:24.814 --rc geninfo_unexecuted_blocks=1 00:17:24.814 00:17:24.814 ' 00:17:24.814 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:24.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:24.814 --rc genhtml_branch_coverage=1 00:17:24.814 --rc genhtml_function_coverage=1 00:17:24.814 --rc genhtml_legend=1 00:17:24.814 --rc geninfo_all_blocks=1 00:17:24.814 --rc geninfo_unexecuted_blocks=1 00:17:24.814 00:17:24.814 ' 00:17:24.814 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:24.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:24.814 --rc genhtml_branch_coverage=1 00:17:24.814 --rc genhtml_function_coverage=1 00:17:24.814 --rc genhtml_legend=1 00:17:24.814 --rc geninfo_all_blocks=1 00:17:24.814 --rc geninfo_unexecuted_blocks=1 00:17:24.814 00:17:24.814 ' 00:17:24.814 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:24.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:24.814 --rc genhtml_branch_coverage=1 00:17:24.814 --rc genhtml_function_coverage=1 00:17:24.814 --rc genhtml_legend=1 00:17:24.814 --rc geninfo_all_blocks=1 00:17:24.814 --rc geninfo_unexecuted_blocks=1 00:17:24.814 00:17:24.815 ' 00:17:24.815 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:24.815 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:17:24.815 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:24.815 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:24.815 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:24.815 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:24.815 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:24.815 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:24.815 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:24.815 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:24.815 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:24.815 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:24.815 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:24.815 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:24.815 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:24.815 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:24.815 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:24.815 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:24.815 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:24.815 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:17:24.815 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:24.815 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:24.815 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:24.815 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.815 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.815 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.815 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:17:24.815 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.815 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:17:24.815 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:24.815 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:24.815 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:24.815 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:24.815 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:24.815 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:17:24.815 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:24.815 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:24.815 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:24.815 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:24.815 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:17:24.815 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:17:24.815 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:24.815 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@472 -- # prepare_net_devs 00:17:24.815 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@434 -- # local -g is_hw=no 00:17:24.815 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@436 -- # remove_spdk_ns 00:17:24.815 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:24.815 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:24.815 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:24.815 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:17:24.815 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:17:24.815 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:17:24.815 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:26.723 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:26.723 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:17:26.723 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:26.723 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:26.723 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:26.723 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:26.723 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:26.723 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:17:26.723 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:26.723 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:17:26.723 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:17:26.723 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:17:26.723 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:17:26.723 14:33:18 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:17:26.723 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:17:26.723 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:26.723 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:26.723 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:26.723 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:26.723 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:26.723 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:26.723 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:26.723 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:26.724 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@365 -- # 
echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:26.724 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ up == up ]] 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:26.724 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ up == up ]] 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:26.724 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # (( 2 == 0 )) 
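Both ports of the Intel E810 adapter (device ID 0x159b) are detected here, and for each PCI function the harness lists the kernel net devices that sit under it in sysfs, which is how it arrives at cvl_0_0 and cvl_0_1. A stand-alone sketch of that lookup, with the PCI addresses taken from this log and the loop body otherwise illustrative:

# Resolve the net device name(s) behind each NVMe-capable NIC function.
for pci in 0000:0a:00.0 0000:0a:00.1; do
    vendor=$(cat "/sys/bus/pci/devices/$pci/vendor")     # 0x8086 in this run
    device=$(cat "/sys/bus/pci/devices/$pci/device")     # 0x159b (E810) in this run
    echo "Found $pci ($vendor - $device)"
    for netdir in "/sys/bus/pci/devices/$pci/net/"*; do  # same glob the common.sh helper expands
        [ -e "$netdir" ] && echo "  net device: ${netdir##*/}"   # cvl_0_0 / cvl_0_1 here
    done
done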
00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # is_hw=yes 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping 
-c 1 10.0.0.2 00:17:26.724 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:26.724 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:17:26.724 00:17:26.724 --- 10.0.0.2 ping statistics --- 00:17:26.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:26.724 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:26.724 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:26.724 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:17:26.724 00:17:26.724 --- 10.0.0.1 ping statistics --- 00:17:26.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:26.724 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # return 0 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@505 -- # nvmfpid=1355028 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@506 -- # waitforlisten 1355028 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 1355028 ']' 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:26.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
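The nvmf_tcp_init block above builds the point-to-point topology for the test: the target-side port cvl_0_0 is moved into a fresh network namespace, addressed as 10.0.0.2/24, the initiator keeps cvl_0_1 as 10.0.0.1/24, port 4420 is opened with an iptables ACCEPT rule, reachability is ping-checked in both directions, and nvmf_tgt is then launched inside the namespace. Condensed into the underlying commands (all of them appear in the log; the grouping, comments, and the explicit backgrounding of nvmf_tgt are editorial):

# Target NIC goes into its own namespace so target and initiator use separate network stacks.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Initiator 10.0.0.1 on cvl_0_1, target 10.0.0.2 on cvl_0_0 inside the namespace.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Let NVMe/TCP traffic reach port 4420 on the initiator-facing interface.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Reachability check both ways, then start the target inside the namespace
# (the harness backgrounds it, records its pid, and waits for /var/tmp/spdk.sock).
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &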
00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:26.724 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:26.984 [2024-11-02 14:33:18.785108] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:26.984 [2024-11-02 14:33:18.785177] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:26.984 [2024-11-02 14:33:18.848408] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:26.984 [2024-11-02 14:33:18.932793] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:26.984 [2024-11-02 14:33:18.932861] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:26.984 [2024-11-02 14:33:18.932890] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:26.984 [2024-11-02 14:33:18.932901] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:26.984 [2024-11-02 14:33:18.932911] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:26.984 [2024-11-02 14:33:18.932948] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:27.244 14:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:27.244 14:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:17:27.244 14:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:17:27.244 14:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:27.244 14:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:27.244 14:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:27.244 14:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:27.244 14:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.244 14:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:27.244 [2024-11-02 14:33:19.084213] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:27.244 14:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.244 14:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:27.244 14:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.244 14:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:27.244 14:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.244 14:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:27.244 14:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.244 14:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:27.244 [2024-11-02 14:33:19.100461] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:27.244 14:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.244 14:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:27.244 14:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.244 14:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:27.244 NULL1 00:17:27.244 14:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.244 14:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:17:27.244 14:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.244 14:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:27.244 14:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.244 14:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:17:27.244 14:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.244 14:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:27.244 14:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.244 14:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:17:27.244 [2024-11-02 14:33:19.147135] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
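Before the fused_ordering initiator starts, the target is configured over RPC: a TCP transport (with the harness's -o -u 8192 options), subsystem nqn.2016-06.io.spdk:cnode1 (allow-any-host, serial SPDK00000000000001, -m 10), a listener on 10.0.0.2:4420, and a 1000 MB / 512-byte-block null bdev attached as namespace 1. The same sequence expressed as direct rpc.py calls, a sketch that assumes the default /var/tmp/spdk.sock socket in place of the harness's rpc_cmd wrapper:

RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"  # socket path assumed

$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512          # 1000 MB, 512-byte blocks -> "size: 1GB" reported below
$RPC bdev_wait_for_examine
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The fused_ordering initiator then reaches this subsystem with the transport ID string shown above: trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1.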
00:17:27.244 [2024-11-02 14:33:19.147176] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1355048 ] 00:17:27.814 Attached to nqn.2016-06.io.spdk:cnode1 00:17:27.814 Namespace ID: 1 size: 1GB 00:17:27.814 fused_ordering(0) 00:17:27.814 fused_ordering(1) 00:17:27.814 fused_ordering(2) 00:17:27.814 fused_ordering(3) 00:17:27.814 fused_ordering(4) 00:17:27.814 fused_ordering(5) 00:17:27.814 fused_ordering(6) 00:17:27.814 fused_ordering(7) 00:17:27.814 fused_ordering(8) 00:17:27.814 fused_ordering(9) 00:17:27.814 fused_ordering(10) 00:17:27.814 fused_ordering(11) 00:17:27.814 fused_ordering(12) 00:17:27.814 fused_ordering(13) 00:17:27.814 fused_ordering(14) 00:17:27.814 fused_ordering(15) 00:17:27.814 fused_ordering(16) 00:17:27.814 fused_ordering(17) 00:17:27.814 fused_ordering(18) 00:17:27.814 fused_ordering(19) 00:17:27.814 fused_ordering(20) 00:17:27.814 fused_ordering(21) 00:17:27.814 fused_ordering(22) 00:17:27.814 fused_ordering(23) 00:17:27.814 fused_ordering(24) 00:17:27.814 fused_ordering(25) 00:17:27.814 fused_ordering(26) 00:17:27.814 fused_ordering(27) 00:17:27.814 fused_ordering(28) 00:17:27.814 fused_ordering(29) 00:17:27.814 fused_ordering(30) 00:17:27.814 fused_ordering(31) 00:17:27.814 fused_ordering(32) 00:17:27.814 fused_ordering(33) 00:17:27.814 fused_ordering(34) 00:17:27.814 fused_ordering(35) 00:17:27.814 fused_ordering(36) 00:17:27.814 fused_ordering(37) 00:17:27.814 fused_ordering(38) 00:17:27.814 fused_ordering(39) 00:17:27.814 fused_ordering(40) 00:17:27.814 fused_ordering(41) 00:17:27.814 fused_ordering(42) 00:17:27.814 fused_ordering(43) 00:17:27.814 fused_ordering(44) 00:17:27.814 fused_ordering(45) 00:17:27.814 fused_ordering(46) 00:17:27.814 fused_ordering(47) 00:17:27.814 fused_ordering(48) 00:17:27.814 fused_ordering(49) 00:17:27.814 fused_ordering(50) 00:17:27.814 fused_ordering(51) 00:17:27.814 fused_ordering(52) 00:17:27.814 fused_ordering(53) 00:17:27.814 fused_ordering(54) 00:17:27.814 fused_ordering(55) 00:17:27.814 fused_ordering(56) 00:17:27.814 fused_ordering(57) 00:17:27.814 fused_ordering(58) 00:17:27.814 fused_ordering(59) 00:17:27.814 fused_ordering(60) 00:17:27.814 fused_ordering(61) 00:17:27.814 fused_ordering(62) 00:17:27.814 fused_ordering(63) 00:17:27.814 fused_ordering(64) 00:17:27.814 fused_ordering(65) 00:17:27.814 fused_ordering(66) 00:17:27.814 fused_ordering(67) 00:17:27.814 fused_ordering(68) 00:17:27.814 fused_ordering(69) 00:17:27.814 fused_ordering(70) 00:17:27.814 fused_ordering(71) 00:17:27.814 fused_ordering(72) 00:17:27.814 fused_ordering(73) 00:17:27.814 fused_ordering(74) 00:17:27.814 fused_ordering(75) 00:17:27.814 fused_ordering(76) 00:17:27.814 fused_ordering(77) 00:17:27.814 fused_ordering(78) 00:17:27.814 fused_ordering(79) 00:17:27.814 fused_ordering(80) 00:17:27.814 fused_ordering(81) 00:17:27.814 fused_ordering(82) 00:17:27.814 fused_ordering(83) 00:17:27.814 fused_ordering(84) 00:17:27.814 fused_ordering(85) 00:17:27.814 fused_ordering(86) 00:17:27.814 fused_ordering(87) 00:17:27.814 fused_ordering(88) 00:17:27.814 fused_ordering(89) 00:17:27.814 fused_ordering(90) 00:17:27.814 fused_ordering(91) 00:17:27.814 fused_ordering(92) 00:17:27.814 fused_ordering(93) 00:17:27.814 fused_ordering(94) 00:17:27.814 fused_ordering(95) 00:17:27.814 fused_ordering(96) 00:17:27.814 fused_ordering(97) 00:17:27.814 fused_ordering(98) 
00:17:27.814 fused_ordering(99) [fused_ordering(100) through fused_ordering(958) elided: repetitive per-iteration counter output, logged between 00:17:27.814 and 00:17:30.152]
00:17:30.152 fused_ordering(959) 00:17:30.152 fused_ordering(960) 00:17:30.152 fused_ordering(961) 00:17:30.152 fused_ordering(962) 00:17:30.152 fused_ordering(963) 00:17:30.152 fused_ordering(964) 00:17:30.152 fused_ordering(965) 00:17:30.152 fused_ordering(966) 00:17:30.152 fused_ordering(967) 00:17:30.152 fused_ordering(968) 00:17:30.152 fused_ordering(969) 00:17:30.152 fused_ordering(970) 00:17:30.152 fused_ordering(971) 00:17:30.152 fused_ordering(972) 00:17:30.152 fused_ordering(973) 00:17:30.152 fused_ordering(974) 00:17:30.152 fused_ordering(975) 00:17:30.152 fused_ordering(976) 00:17:30.152 fused_ordering(977) 00:17:30.152 fused_ordering(978) 00:17:30.152 fused_ordering(979) 00:17:30.152 fused_ordering(980) 00:17:30.152 fused_ordering(981) 00:17:30.152 fused_ordering(982) 00:17:30.152 fused_ordering(983) 00:17:30.152 fused_ordering(984) 00:17:30.152 fused_ordering(985) 00:17:30.152 fused_ordering(986) 00:17:30.152 fused_ordering(987) 00:17:30.152 fused_ordering(988) 00:17:30.152 fused_ordering(989) 00:17:30.153 fused_ordering(990) 00:17:30.153 fused_ordering(991) 00:17:30.153 fused_ordering(992) 00:17:30.153 fused_ordering(993) 00:17:30.153 fused_ordering(994) 00:17:30.153 fused_ordering(995) 00:17:30.153 fused_ordering(996) 00:17:30.153 fused_ordering(997) 00:17:30.153 fused_ordering(998) 00:17:30.153 fused_ordering(999) 00:17:30.153 fused_ordering(1000) 00:17:30.153 fused_ordering(1001) 00:17:30.153 fused_ordering(1002) 00:17:30.153 fused_ordering(1003) 00:17:30.153 fused_ordering(1004) 00:17:30.153 fused_ordering(1005) 00:17:30.153 fused_ordering(1006) 00:17:30.153 fused_ordering(1007) 00:17:30.153 fused_ordering(1008) 00:17:30.153 fused_ordering(1009) 00:17:30.153 fused_ordering(1010) 00:17:30.153 fused_ordering(1011) 00:17:30.153 fused_ordering(1012) 00:17:30.153 fused_ordering(1013) 00:17:30.153 fused_ordering(1014) 00:17:30.153 fused_ordering(1015) 00:17:30.153 fused_ordering(1016) 00:17:30.153 fused_ordering(1017) 00:17:30.153 fused_ordering(1018) 00:17:30.153 fused_ordering(1019) 00:17:30.153 fused_ordering(1020) 00:17:30.153 fused_ordering(1021) 00:17:30.153 fused_ordering(1022) 00:17:30.153 fused_ordering(1023) 00:17:30.153 14:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:17:30.153 14:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:17:30.153 14:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # nvmfcleanup 00:17:30.153 14:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:17:30.153 14:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:30.153 14:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:17:30.153 14:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:30.153 14:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:30.153 rmmod nvme_tcp 00:17:30.153 rmmod nvme_fabrics 00:17:30.153 rmmod nvme_keyring 00:17:30.153 14:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:30.153 14:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:17:30.153 14:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:17:30.153 14:33:22 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@513 -- # '[' -n 1355028 ']' 00:17:30.153 14:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@514 -- # killprocess 1355028 00:17:30.153 14:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 1355028 ']' 00:17:30.153 14:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 1355028 00:17:30.153 14:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:17:30.153 14:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:30.153 14:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1355028 00:17:30.413 14:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:30.413 14:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:30.413 14:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1355028' 00:17:30.413 killing process with pid 1355028 00:17:30.413 14:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 1355028 00:17:30.413 14:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 1355028 00:17:30.413 14:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:17:30.413 14:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:17:30.413 14:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:17:30.413 14:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:17:30.413 14:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@787 -- # iptables-save 00:17:30.413 14:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:17:30.413 14:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@787 -- # iptables-restore 00:17:30.674 14:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:30.674 14:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:30.674 14:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:30.674 14:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:30.674 14:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:32.580 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:32.581 00:17:32.581 real 0m7.938s 00:17:32.581 user 0m5.541s 00:17:32.581 sys 0m3.663s 00:17:32.581 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:32.581 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:32.581 ************************************ 00:17:32.581 END TEST nvmf_fused_ordering 00:17:32.581 
************************************ 00:17:32.581 14:33:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:17:32.581 14:33:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:32.581 14:33:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:32.581 14:33:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:32.581 ************************************ 00:17:32.581 START TEST nvmf_ns_masking 00:17:32.581 ************************************ 00:17:32.581 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:17:32.581 * Looking for test storage... 00:17:32.581 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:32.581 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:32.581 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # lcov --version 00:17:32.581 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:32.839 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:32.839 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:32.839 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:32.839 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:32.839 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:17:32.839 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:17:32.839 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:17:32.839 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:17:32.839 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:17:32.839 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:17:32.839 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:17:32.839 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:32.839 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:17:32.839 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:17:32.840 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:32.840 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:32.840 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:17:32.840 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:17:32.840 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:32.840 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:17:32.840 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:17:32.840 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:17:32.840 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:17:32.840 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:32.840 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:17:32.840 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:17:32.840 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:32.840 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:32.840 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:17:32.840 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:32.840 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:32.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:32.840 --rc genhtml_branch_coverage=1 00:17:32.840 --rc genhtml_function_coverage=1 00:17:32.840 --rc genhtml_legend=1 00:17:32.840 --rc geninfo_all_blocks=1 00:17:32.840 --rc geninfo_unexecuted_blocks=1 00:17:32.840 00:17:32.840 ' 00:17:32.840 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:32.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:32.840 --rc genhtml_branch_coverage=1 00:17:32.840 --rc genhtml_function_coverage=1 00:17:32.840 --rc genhtml_legend=1 00:17:32.840 --rc geninfo_all_blocks=1 00:17:32.840 --rc geninfo_unexecuted_blocks=1 00:17:32.840 00:17:32.840 ' 00:17:32.840 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:32.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:32.840 --rc genhtml_branch_coverage=1 00:17:32.840 --rc genhtml_function_coverage=1 00:17:32.840 --rc genhtml_legend=1 00:17:32.840 --rc geninfo_all_blocks=1 00:17:32.840 --rc geninfo_unexecuted_blocks=1 00:17:32.840 00:17:32.840 ' 00:17:32.840 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:32.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:32.840 --rc genhtml_branch_coverage=1 00:17:32.840 --rc genhtml_function_coverage=1 00:17:32.840 --rc genhtml_legend=1 00:17:32.840 --rc geninfo_all_blocks=1 00:17:32.840 --rc geninfo_unexecuted_blocks=1 00:17:32.840 00:17:32.840 ' 00:17:32.840 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:32.840 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:17:32.840 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:32.840 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:32.840 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:32.840 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:32.840 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:32.840 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:32.840 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:32.840 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:32.840 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:32.840 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:32.840 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:32.840 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:32.840 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:32.840 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:32.840 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:32.840 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:32.840 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:32.840 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:17:32.840 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:32.840 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:32.840 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:32.840 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.840 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.840 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.840 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:17:32.840 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.840 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:17:32.840 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:32.840 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:32.840 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:32.840 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:32.840 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:32.840 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:32.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:32.840 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:32.840 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:32.840 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:32.840 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:32.840 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:17:32.840 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:17:32.840 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:17:32.840 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=dd4f22b2-cae7-41a1-9d14-2bda52c28186 00:17:32.840 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:17:32.840 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=8f971645-f285-4eaa-91a1-894d456c4df4 00:17:32.840 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:17:32.840 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:17:32.840 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:17:32.840 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:17:32.840 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=ac3338ad-fa0f-41e8-af9c-4df483fb23f4 00:17:32.840 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:17:32.840 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:17:32.840 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:32.841 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@472 -- # prepare_net_devs 00:17:32.841 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@434 -- # local -g is_hw=no 00:17:32.841 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@436 -- # remove_spdk_ns 00:17:32.841 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:32.841 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:32.841 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:32.841 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:17:32.841 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:17:32.841 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:17:32.841 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:34.746 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:34.746 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:17:34.746 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:34.746 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:34.746 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:34.746 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:34.746 14:33:26 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:34.746 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:17:34.746 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:34.746 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:17:34.746 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:17:34.746 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:17:34.746 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:17:34.746 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:17:34.746 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:17:34.746 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:34.746 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:34.746 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:34.746 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:34.746 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:34.746 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:34.746 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:34.746 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:34.746 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:34.746 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:34.746 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:34.746 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:17:34.746 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:17:34.746 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:17:34.746 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:17:34.746 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:17:34.746 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:17:34.746 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:34.746 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:34.746 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:34.746 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:17:34.746 14:33:26 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:17:34.746 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:34.746 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:34.746 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:17:34.746 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:34.746 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:34.746 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:34.746 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:17:34.746 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:17:34.746 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:34.746 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:34.746 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:17:34.746 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:17:34.746 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:17:34.746 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:17:34.746 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:34.746 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:34.746 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:17:34.746 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:34.747 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ up == up ]] 00:17:34.747 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:34.747 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:34.747 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:34.747 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:34.747 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:34.747 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:34.747 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:34.747 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:17:34.747 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:34.747 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ up == up ]] 00:17:34.747 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:34.747 
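The interface-discovery step traced just above reduces to a sysfs glob: for each candidate PCI function the common.sh helpers expand /sys/bus/pci/devices/$pci/net/* and strip the directory prefix to obtain the interface names (cvl_0_0 and cvl_0_1 on this runner). Below is a minimal standalone sketch of that check, not part of the test scripts themselves; the two PCI addresses are taken from the "Found 0000:0a:00.x" lines in this log, and everything else is illustrative.

#!/usr/bin/env bash
# Sketch only: list the net interfaces sitting under the two PCI functions
# reported in this log. Assumes the standard sysfs layout the harness itself
# relies on (/sys/bus/pci/devices/<addr>/net/<ifname>).
pci_devs=("0000:0a:00.0" "0000:0a:00.1")   # addresses taken from the log above
for pci in "${pci_devs[@]}"; do
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
  if [[ ! -e ${pci_net_devs[0]} ]]; then
    echo "No net devices under $pci"
    continue
  fi
  pci_net_devs=("${pci_net_devs[@]##*/}")   # keep interface names only
  echo "Found net devices under $pci: ${pci_net_devs[*]}"
done

On this runner each of the two PCI functions exposes exactly one interface, which is why the steps that follow can assign cvl_0_0 as NVMF_TARGET_INTERFACE and cvl_0_1 as NVMF_INITIATOR_INTERFACE.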
14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:34.747 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:34.747 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:34.747 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:34.747 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:17:34.747 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # is_hw=yes 00:17:34.747 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:17:34.747 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:17:34.747 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:17:34.747 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:34.747 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:34.747 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:34.747 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:34.747 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:34.747 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:34.747 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:34.747 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:34.747 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:34.747 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:34.747 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:34.747 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:34.747 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:34.747 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:34.747 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:34.747 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:34.747 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:35.007 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:35.007 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:35.007 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:35.007 14:33:26 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:35.007 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:35.007 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:35.007 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:35.007 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.286 ms 00:17:35.007 00:17:35.007 --- 10.0.0.2 ping statistics --- 00:17:35.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.007 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:17:35.007 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:35.007 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:35.007 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:17:35.007 00:17:35.007 --- 10.0.0.1 ping statistics --- 00:17:35.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.007 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:17:35.007 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:35.007 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # return 0 00:17:35.007 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:17:35.007 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:35.007 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:17:35.007 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:17:35.007 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:35.007 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:17:35.007 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:17:35.007 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:17:35.007 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:17:35.007 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:35.007 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:35.007 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@505 -- # nvmfpid=1357385 00:17:35.007 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:35.007 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@506 -- # waitforlisten 1357385 00:17:35.007 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 1357385 ']' 00:17:35.007 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:35.007 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:17:35.007 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:35.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:35.007 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:35.007 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:35.007 [2024-11-02 14:33:26.948018] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:35.007 [2024-11-02 14:33:26.948122] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:35.007 [2024-11-02 14:33:27.014348] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.266 [2024-11-02 14:33:27.105022] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:35.266 [2024-11-02 14:33:27.105084] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:35.266 [2024-11-02 14:33:27.105113] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:35.266 [2024-11-02 14:33:27.105125] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:35.266 [2024-11-02 14:33:27.105134] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:35.266 [2024-11-02 14:33:27.105161] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:35.266 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:35.266 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:17:35.266 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:17:35.266 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:35.266 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:35.266 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:35.266 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:35.525 [2024-11-02 14:33:27.552651] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:35.525 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:17:35.525 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:17:35.525 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:36.091 Malloc1 00:17:36.091 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 
64 512 -b Malloc2 00:17:36.348 Malloc2 00:17:36.349 14:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:36.606 14:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:17:36.864 14:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:37.123 [2024-11-02 14:33:29.149782] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:37.123 14:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:17:37.123 14:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ac3338ad-fa0f-41e8-af9c-4df483fb23f4 -a 10.0.0.2 -s 4420 -i 4 00:17:37.381 14:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:17:37.381 14:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:17:37.381 14:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:37.381 14:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:37.381 14:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:17:39.921 14:33:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:39.921 14:33:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:39.921 14:33:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:39.921 14:33:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:39.921 14:33:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:39.921 14:33:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:17:39.921 14:33:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:39.921 14:33:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:39.921 14:33:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:39.921 14:33:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:39.921 14:33:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:17:39.921 14:33:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:39.921 14:33:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:39.921 [ 0]:0x1 00:17:39.921 14:33:31 
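The "[ 0]:0x1" line is the first visibility check: ns_masking.sh lists the active namespaces on the new controller and, as the next lines show, reads the NGUID with nvme id-ns, treating an all-zero NGUID as "not visible to this host". An illustrative stand-in for that probe; the function name and argument handling here are not the script's own, only the nvme and jq calls are the ones in the trace:

# Succeeds if namespace $2 (e.g. 0x1) is active and visible on controller $1 (e.g. nvme0).
ns_visible() {
    local ctrl=$1 nsid=$2 nguid
    nvme list-ns "/dev/$ctrl" | grep -q "$nsid" || return 1
    nguid=$(nvme id-ns "/dev/$ctrl" -n "$nsid" -o json | jq -r .nguid)
    # Identify data for an inactive namespace comes back zero-filled, so a
    # non-zero NGUID means the namespace is attached and visible to this host.
    [[ $nguid != "00000000000000000000000000000000" ]]
}

ns_visible nvme0 0x1 && echo "nsid 1 visible to this host"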
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:39.921 14:33:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:39.921 14:33:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9a9830f2e26742f78429ea954c47dcab 00:17:39.921 14:33:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9a9830f2e26742f78429ea954c47dcab != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:39.921 14:33:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:17:39.921 14:33:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:17:39.921 14:33:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:39.921 14:33:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:39.921 [ 0]:0x1 00:17:39.921 14:33:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:39.921 14:33:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:39.921 14:33:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9a9830f2e26742f78429ea954c47dcab 00:17:39.921 14:33:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9a9830f2e26742f78429ea954c47dcab != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:39.921 14:33:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:17:39.921 14:33:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:39.921 14:33:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:39.921 [ 1]:0x2 00:17:39.921 14:33:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:39.921 14:33:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:39.921 14:33:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=73ad2d333bdb4e998c7732525e6bde9e 00:17:39.921 14:33:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 73ad2d333bdb4e998c7732525e6bde9e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:39.921 14:33:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:17:39.921 14:33:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:40.180 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:40.180 14:33:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:40.438 14:33:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:17:40.696 14:33:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@83 -- # connect 1 00:17:40.696 14:33:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ac3338ad-fa0f-41e8-af9c-4df483fb23f4 -a 10.0.0.2 -s 4420 -i 4 00:17:40.955 14:33:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:17:40.955 14:33:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:17:40.955 14:33:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:40.955 14:33:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:17:40.955 14:33:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:17:40.955 14:33:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:17:42.934 14:33:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:42.934 14:33:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:42.934 14:33:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:42.934 14:33:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:42.934 14:33:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:42.934 14:33:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:17:42.934 14:33:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:42.934 14:33:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:42.934 14:33:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:42.934 14:33:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:42.934 14:33:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:17:42.934 14:33:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:42.934 14:33:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:17:42.934 14:33:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:17:42.934 14:33:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:42.934 14:33:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:17:42.934 14:33:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:42.934 14:33:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:17:42.934 14:33:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:42.934 14:33:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 
00:17:42.934 14:33:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:42.934 14:33:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:42.934 14:33:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:42.934 14:33:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:42.934 14:33:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:42.934 14:33:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:42.934 14:33:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:42.934 14:33:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:42.934 14:33:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:17:42.934 14:33:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:42.934 14:33:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:42.934 [ 0]:0x2 00:17:42.934 14:33:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:42.934 14:33:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:43.192 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=73ad2d333bdb4e998c7732525e6bde9e 00:17:43.192 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 73ad2d333bdb4e998c7732525e6bde9e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:43.192 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:43.450 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:17:43.450 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:43.450 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:43.450 [ 0]:0x1 00:17:43.450 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:43.450 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:43.450 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9a9830f2e26742f78429ea954c47dcab 00:17:43.450 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9a9830f2e26742f78429ea954c47dcab != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:43.450 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:17:43.450 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:43.450 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:43.450 [ 1]:0x2 00:17:43.450 
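Both namespaces now show up because ns_masking.sh@88 granted host1 access to the masked one. Condensed, the per-host masking flow being exercised uses the same RPCs the trace calls; the rpc.py path and NQNs below are this run's values:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SUB=nqn.2016-06.io.spdk:cnode1
HOST=nqn.2016-06.io.spdk:host1
# Attach the namespace with visibility disabled: no host can see it until it is granted.
"$RPC" nvmf_subsystem_add_ns "$SUB" Malloc1 -n 1 --no-auto-visible
# Grant, and later revoke, visibility for a single host NQN.
"$RPC" nvmf_ns_add_host    "$SUB" 1 "$HOST"
"$RPC" nvmf_ns_remove_host "$SUB" 1 "$HOST"

Note that the visibility checks before and after these RPCs run over the connection opened at @83, so the namespace appearing and disappearing is the target updating the live controller, not a side effect of reconnecting.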
14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:43.450 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:43.450 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=73ad2d333bdb4e998c7732525e6bde9e 00:17:43.450 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 73ad2d333bdb4e998c7732525e6bde9e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:43.450 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:43.710 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:17:43.710 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:43.710 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:17:43.710 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:17:43.710 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:43.710 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:17:43.710 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:43.710 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:17:43.710 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:43.710 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:43.710 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:43.710 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:43.968 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:43.968 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:43.968 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:43.968 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:43.968 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:43.968 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:43.968 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:17:43.968 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:43.968 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:43.968 [ 0]:0x2 00:17:43.968 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns 
/dev/nvme0 -n 0x2 -o json 00:17:43.968 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:43.968 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=73ad2d333bdb4e998c7732525e6bde9e 00:17:43.968 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 73ad2d333bdb4e998c7732525e6bde9e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:43.968 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:17:43.968 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:43.968 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:43.968 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:44.228 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:17:44.228 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ac3338ad-fa0f-41e8-af9c-4df483fb23f4 -a 10.0.0.2 -s 4420 -i 4 00:17:44.488 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:44.488 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:17:44.488 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:44.488 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:17:44.488 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:17:44.488 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:17:46.396 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:46.396 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:46.396 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:46.396 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:17:46.396 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:46.396 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:17:46.396 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:46.396 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:46.655 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:46.655 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:46.655 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:17:46.655 
14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:46.655 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:46.655 [ 0]:0x1 00:17:46.655 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:46.655 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:46.655 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9a9830f2e26742f78429ea954c47dcab 00:17:46.655 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9a9830f2e26742f78429ea954c47dcab != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:46.655 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:17:46.655 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:46.655 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:46.655 [ 1]:0x2 00:17:46.655 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:46.655 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:46.655 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=73ad2d333bdb4e998c7732525e6bde9e 00:17:46.655 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 73ad2d333bdb4e998c7732525e6bde9e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:46.655 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:47.221 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:17:47.221 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:47.221 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:17:47.221 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:17:47.221 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:47.221 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:17:47.221 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:47.221 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:17:47.221 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:47.221 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:47.221 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:47.221 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:47.221 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:47.221 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:47.221 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:47.221 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:47.221 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:47.221 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:47.221 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:17:47.221 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:47.221 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:47.221 [ 0]:0x2 00:17:47.221 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:47.221 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:47.221 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=73ad2d333bdb4e998c7732525e6bde9e 00:17:47.221 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 73ad2d333bdb4e998c7732525e6bde9e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:47.221 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:47.221 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:47.221 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:47.221 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:47.221 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:47.221 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:47.221 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:47.221 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:47.221 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:47.221 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:47.221 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:47.221 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:47.481 [2024-11-02 14:33:39.417451] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:17:47.481 request: 00:17:47.481 { 00:17:47.481 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:47.481 "nsid": 2, 00:17:47.481 "host": "nqn.2016-06.io.spdk:host1", 00:17:47.481 "method": "nvmf_ns_remove_host", 00:17:47.481 "req_id": 1 00:17:47.481 } 00:17:47.481 Got JSON-RPC error response 00:17:47.481 response: 00:17:47.481 { 00:17:47.481 "code": -32602, 00:17:47.481 "message": "Invalid parameters" 00:17:47.481 } 00:17:47.481 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:47.481 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:47.481 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:47.481 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:47.481 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:17:47.481 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:47.481 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:17:47.481 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:17:47.481 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:47.481 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:17:47.481 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:47.481 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:17:47.481 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:47.481 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:47.481 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:47.481 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:47.481 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:47.481 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:47.481 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:47.481 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:47.481 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:47.481 14:33:39 
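The request/response pair above is the negative case: nvmf_ns_remove_host on namespace 2 is rejected with JSON-RPC error -32602 because that namespace was attached earlier without --no-auto-visible, so its visibility is not host-managed. The harness wraps such calls in its NOT helper; a plain-bash equivalent of the same expected-failure check, reusing the rpc.py path and arguments from the trace, would be:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
if "$RPC" nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1; then
    echo "unexpected success: nsid 2 is not host-managed" >&2
    exit 1
fi
echo "rpc failed as expected"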
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:47.481 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:17:47.481 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:47.481 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:47.748 [ 0]:0x2 00:17:47.748 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:47.748 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:47.748 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=73ad2d333bdb4e998c7732525e6bde9e 00:17:47.748 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 73ad2d333bdb4e998c7732525e6bde9e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:47.748 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:17:47.748 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:47.748 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:47.748 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1359005 00:17:47.748 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:17:47.748 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:17:47.749 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1359005 /var/tmp/host.sock 00:17:47.749 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 1359005 ']' 00:17:47.749 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:17:47.749 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:47.749 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:47.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:47.749 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:47.749 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:47.749 [2024-11-02 14:33:39.771541] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:17:47.749 [2024-11-02 14:33:39.771628] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1359005 ] 00:17:48.009 [2024-11-02 14:33:39.836252] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:48.009 [2024-11-02 14:33:39.930032] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:48.268 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:48.268 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:17:48.268 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:48.526 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:48.784 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid dd4f22b2-cae7-41a1-9d14-2bda52c28186 00:17:48.784 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@783 -- # tr -d - 00:17:48.784 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g DD4F22B2CAE741A19D142BDA52C28186 -i 00:17:49.041 14:33:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 8f971645-f285-4eaa-91a1-894d456c4df4 00:17:49.041 14:33:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@783 -- # tr -d - 00:17:49.041 14:33:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 8F971645F2854EAA91A1894D456C4DF4 -i 00:17:49.299 14:33:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:49.557 14:33:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:17:50.125 14:33:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:50.125 14:33:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:50.384 nvme0n1 00:17:50.384 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:50.384 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:50.951 nvme1n2 00:17:50.951 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:17:50.951 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:17:50.951 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:17:50.951 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:17:50.951 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:17:50.951 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:17:50.951 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:17:50.951 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:17:50.951 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:17:51.517 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ dd4f22b2-cae7-41a1-9d14-2bda52c28186 == \d\d\4\f\2\2\b\2\-\c\a\e\7\-\4\1\a\1\-\9\d\1\4\-\2\b\d\a\5\2\c\2\8\1\8\6 ]] 00:17:51.517 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:17:51.517 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:17:51.517 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:17:51.517 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 8f971645-f285-4eaa-91a1-894d456c4df4 == \8\f\9\7\1\6\4\5\-\f\2\8\5\-\4\e\a\a\-\9\1\a\1\-\8\9\4\d\4\5\6\c\4\d\f\4 ]] 00:17:51.517 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 1359005 00:17:51.517 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 1359005 ']' 00:17:51.517 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 1359005 00:17:51.517 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:17:51.774 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:51.774 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1359005 00:17:51.774 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:51.774 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:51.774 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1359005' 00:17:51.774 
killing process with pid 1359005 00:17:51.775 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 1359005 00:17:51.775 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 1359005 00:17:52.033 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:52.291 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:17:52.291 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:17:52.291 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # nvmfcleanup 00:17:52.291 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:17:52.291 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:52.291 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:17:52.291 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:52.291 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:52.291 rmmod nvme_tcp 00:17:52.291 rmmod nvme_fabrics 00:17:52.291 rmmod nvme_keyring 00:17:52.550 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:52.550 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:17:52.550 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:17:52.550 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@513 -- # '[' -n 1357385 ']' 00:17:52.550 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@514 -- # killprocess 1357385 00:17:52.550 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 1357385 ']' 00:17:52.550 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 1357385 00:17:52.550 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:17:52.550 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:52.550 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1357385 00:17:52.550 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:52.550 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:52.550 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1357385' 00:17:52.550 killing process with pid 1357385 00:17:52.550 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 1357385 00:17:52.550 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 1357385 00:17:52.808 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:17:52.808 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:17:52.808 14:33:44 
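With the checks done, the harness tears the rig down: it stops the spdk_tgt that served /var/tmp/host.sock, deletes the subsystem, unloads the kernel initiator modules (the rmmod lines above are modprobe -r doing that), and stops the nvmf_tgt; the network namespace and the tagged iptables rule are removed in the lines that follow. A simplified stand-in for the killprocess steps, with placeholder PID variables instead of the harness's bookkeeping:

stop_pid() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0                  # already gone
    kill "$pid"                                             # SIGTERM; SPDK apps shut down cleanly on it
    while kill -0 "$pid" 2>/dev/null; do sleep 0.2; done    # poll instead of `wait` in case $pid is not our child
}

stop_pid "$hostpid"                     # spdk_tgt behind /var/tmp/host.sock
modprobe -v -r nvme-tcp nvme-fabrics    # detach the kernel initiator
stop_pid "$nvmfpid"                     # nvmf_tgt inside the test namespace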
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:17:52.808 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:17:52.808 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # iptables-save 00:17:52.808 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:17:52.808 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # iptables-restore 00:17:52.808 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:52.809 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:52.809 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:52.809 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:52.809 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:54.715 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:54.715 00:17:54.715 real 0m22.184s 00:17:54.715 user 0m29.480s 00:17:54.715 sys 0m4.194s 00:17:54.715 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:54.715 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:54.715 ************************************ 00:17:54.715 END TEST nvmf_ns_masking 00:17:54.715 ************************************ 00:17:54.974 14:33:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:17:54.974 14:33:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:54.974 14:33:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:54.974 14:33:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:54.974 14:33:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:54.974 ************************************ 00:17:54.974 START TEST nvmf_nvme_cli 00:17:54.974 ************************************ 00:17:54.974 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:54.974 * Looking for test storage... 
00:17:54.974 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:54.974 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:54.974 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # lcov --version 00:17:54.974 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:54.974 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:54.974 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:54.974 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:54.974 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:54.974 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:17:54.974 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:17:54.974 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:17:54.974 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:17:54.974 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:17:54.974 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:17:54.974 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:17:54.974 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:54.974 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:17:54.974 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:17:54.974 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:54.974 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:54.974 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:17:54.974 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:17:54.975 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:54.975 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:17:54.975 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:17:54.975 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:17:54.975 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:17:54.975 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:54.975 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:17:54.975 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:17:54.975 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:54.975 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:54.975 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:17:54.975 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:54.975 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:54.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.975 --rc genhtml_branch_coverage=1 00:17:54.975 --rc genhtml_function_coverage=1 00:17:54.975 --rc genhtml_legend=1 00:17:54.975 --rc geninfo_all_blocks=1 00:17:54.975 --rc geninfo_unexecuted_blocks=1 00:17:54.975 00:17:54.975 ' 00:17:54.975 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:54.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.975 --rc genhtml_branch_coverage=1 00:17:54.975 --rc genhtml_function_coverage=1 00:17:54.975 --rc genhtml_legend=1 00:17:54.975 --rc geninfo_all_blocks=1 00:17:54.975 --rc geninfo_unexecuted_blocks=1 00:17:54.975 00:17:54.975 ' 00:17:54.975 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:54.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.975 --rc genhtml_branch_coverage=1 00:17:54.975 --rc genhtml_function_coverage=1 00:17:54.975 --rc genhtml_legend=1 00:17:54.975 --rc geninfo_all_blocks=1 00:17:54.975 --rc geninfo_unexecuted_blocks=1 00:17:54.975 00:17:54.975 ' 00:17:54.975 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:54.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.975 --rc genhtml_branch_coverage=1 00:17:54.975 --rc genhtml_function_coverage=1 00:17:54.975 --rc genhtml_legend=1 00:17:54.975 --rc geninfo_all_blocks=1 00:17:54.975 --rc geninfo_unexecuted_blocks=1 00:17:54.975 00:17:54.975 ' 00:17:54.975 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:54.975 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
00:17:54.975 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:54.975 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:54.975 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:54.975 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:54.975 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:54.975 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:54.975 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:54.975 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:54.975 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:54.975 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:54.975 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:54.975 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:54.975 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:54.975 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:54.975 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:54.975 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:54.975 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:54.975 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:17:54.975 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:54.975 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:54.975 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:54.975 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.975 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.975 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.975 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:17:54.975 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.975 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:17:54.975 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:54.975 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:54.975 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:54.975 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:54.975 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:54.975 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:54.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:54.975 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:54.975 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:54.975 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:54.975 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:54.975 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:54.975 14:33:46 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:17:54.975 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:17:54.975 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:17:54.975 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:54.975 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@472 -- # prepare_net_devs 00:17:54.975 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@434 -- # local -g is_hw=no 00:17:54.975 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@436 -- # remove_spdk_ns 00:17:54.975 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:54.975 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:54.975 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:54.975 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:17:54.975 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:17:54.975 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:17:54.975 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:57.508 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:57.508 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:17:57.508 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:57.508 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:57.508 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:57.508 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:57.508 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:57.508 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:17:57.508 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:57.508 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:17:57.508 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:17:57.508 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:17:57.508 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:17:57.508 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:17:57.508 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:17:57.508 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:57.508 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:57.508 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:57.508 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:57.508 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:57.508 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:57.508 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:57.508 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:57.508 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:57.508 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:57.508 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:57.508 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:17:57.508 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:17:57.508 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:17:57.508 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:17:57.508 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:17:57.508 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:17:57.508 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:57.508 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:57.508 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:57.508 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:17:57.508 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:17:57.508 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:57.508 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:57.508 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:17:57.508 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:57.508 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:57.508 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:57.508 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:17:57.508 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:17:57.508 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:57.508 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:57.508 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:17:57.508 14:33:48 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:17:57.508 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:17:57.508 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:17:57.508 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:57.508 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:57.508 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:17:57.508 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:57.508 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ up == up ]] 00:17:57.508 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:57.508 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:57.508 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:57.508 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:57.508 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:57.508 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:57.508 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:57.508 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:17:57.508 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:57.508 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ up == up ]] 00:17:57.508 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:57.508 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:57.508 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:57.508 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:57.508 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:57.508 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:17:57.508 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # is_hw=yes 00:17:57.508 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:17:57.508 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:17:57.508 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:17:57.508 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:57.508 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:57.508 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:57.508 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:57.508 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:57.508 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:57.508 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:57.508 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:57.508 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:57.508 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:57.508 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:57.508 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:57.508 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:57.508 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:57.508 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:57.509 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:57.509 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:57.509 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:57.509 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:57.509 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:57.509 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:57.509 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:57.509 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:57.509 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:57.509 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:17:57.509 00:17:57.509 --- 10.0.0.2 ping statistics --- 00:17:57.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.509 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:17:57.509 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:57.509 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:57.509 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.063 ms 00:17:57.509 00:17:57.509 --- 10.0.0.1 ping statistics --- 00:17:57.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.509 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:17:57.509 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:57.509 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # return 0 00:17:57.509 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:17:57.509 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:57.509 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:17:57.509 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:17:57.509 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:57.509 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:17:57.509 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:17:57.509 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:17:57.509 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:17:57.509 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:57.509 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:57.509 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@505 -- # nvmfpid=1361558 00:17:57.509 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:57.509 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@506 -- # waitforlisten 1361558 00:17:57.509 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 1361558 ']' 00:17:57.509 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:57.509 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:57.509 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:57.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:57.509 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:57.509 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:57.509 [2024-11-02 14:33:49.324002] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:17:57.509 [2024-11-02 14:33:49.324082] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:57.509 [2024-11-02 14:33:49.394951] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:57.509 [2024-11-02 14:33:49.490671] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:57.509 [2024-11-02 14:33:49.490728] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:57.509 [2024-11-02 14:33:49.490744] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:57.509 [2024-11-02 14:33:49.490757] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:57.509 [2024-11-02 14:33:49.490769] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:57.509 [2024-11-02 14:33:49.490845] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:57.509 [2024-11-02 14:33:49.490904] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:57.509 [2024-11-02 14:33:49.490939] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:17:57.509 [2024-11-02 14:33:49.490942] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:57.772 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:57.772 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:17:57.772 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:17:57.772 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:57.772 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:57.772 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:57.772 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:57.772 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.772 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:57.772 [2024-11-02 14:33:49.646623] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:57.772 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.772 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:57.772 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.772 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:57.772 Malloc0 00:17:57.772 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.772 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:57.772 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
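Taken together, the nvmf_tcp_init and nvmfappstart steps above amount to: move one physical port into a private network namespace as the target side, keep its sibling port in the default namespace as the initiator side, then run nvmf_tgt inside that namespace and configure it over rpc.py. A hedged sketch of the equivalent manual commands follows; the interface names, addresses and paths are the ones from this run, and the sleep stands in for the harness's waitforlisten helper.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC=$SPDK/scripts/rpc.py
TGT_IF=cvl_0_0                         # the two ice ports discovered above
INI_IF=cvl_0_1
TGT_NS=cvl_0_0_ns_spdk

# Target port moves into its own namespace with 10.0.0.2; the initiator side keeps 10.0.0.1
ip -4 addr flush $TGT_IF; ip -4 addr flush $INI_IF
ip netns add $TGT_NS
ip link set $TGT_IF netns $TGT_NS
ip addr add 10.0.0.1/24 dev $INI_IF
ip netns exec $TGT_NS ip addr add 10.0.0.2/24 dev $TGT_IF
ip link set $INI_IF up
ip netns exec $TGT_NS ip link set $TGT_IF up
ip netns exec $TGT_NS ip link set lo up
iptables -I INPUT 1 -i $INI_IF -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port, as the ipts helper does
modprobe nvme-tcp

# Start the target inside the namespace, then configure it over the default RPC socket
ip netns exec $TGT_NS $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
sleep 2                                # the harness polls with waitforlisten instead
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC bdev_malloc_create 64 512 -b Malloc1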
00:17:57.772 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:57.772 Malloc1 00:17:57.772 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.772 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:17:57.772 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.772 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:57.772 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.772 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:57.772 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.772 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:57.772 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.772 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:57.772 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.772 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:57.772 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.772 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:57.772 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.772 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:57.772 [2024-11-02 14:33:49.727885] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:57.772 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.772 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:57.772 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.772 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:57.772 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.772 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:17:58.029 00:17:58.029 Discovery Log Number of Records 2, Generation counter 2 00:17:58.029 =====Discovery Log Entry 0====== 00:17:58.029 trtype: tcp 00:17:58.029 adrfam: ipv4 00:17:58.029 subtype: current discovery subsystem 00:17:58.029 treq: not required 00:17:58.029 portid: 0 00:17:58.029 trsvcid: 4420 00:17:58.029 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:17:58.029 traddr: 10.0.0.2 00:17:58.029 eflags: explicit discovery connections, duplicate discovery information 00:17:58.029 sectype: none 00:17:58.029 =====Discovery Log Entry 1====== 00:17:58.029 trtype: tcp 00:17:58.029 adrfam: ipv4 00:17:58.029 subtype: nvme subsystem 00:17:58.029 treq: not required 00:17:58.029 portid: 0 00:17:58.029 trsvcid: 4420 00:17:58.029 subnqn: nqn.2016-06.io.spdk:cnode1 00:17:58.029 traddr: 10.0.0.2 00:17:58.029 eflags: none 00:17:58.029 sectype: none 00:17:58.029 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:17:58.029 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:17:58.029 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@546 -- # local dev _ 00:17:58.029 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:17:58.029 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@545 -- # nvme list 00:17:58.029 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ Node == /dev/nvme* ]] 00:17:58.029 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:17:58.029 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ --------------------- == /dev/nvme* ]] 00:17:58.029 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:17:58.029 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:17:58.029 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:58.596 14:33:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:58.596 14:33:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:17:58.596 14:33:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:58.596 14:33:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:17:58.596 14:33:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:17:58.596 14:33:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:18:01.123 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:01.123 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:01.123 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:01.123 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:18:01.123 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:01.123 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:18:01.123 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:18:01.123 14:33:52 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@546 -- # local dev _ 00:18:01.123 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:01.123 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@545 -- # nvme list 00:18:01.123 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ Node == /dev/nvme* ]] 00:18:01.123 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:01.123 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ --------------------- == /dev/nvme* ]] 00:18:01.123 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:01.123 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:01.123 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # echo /dev/nvme0n1 00:18:01.123 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:01.123 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:01.123 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # echo /dev/nvme0n2 00:18:01.123 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:01.123 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:18:01.123 /dev/nvme0n2 ]] 00:18:01.123 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:18:01.123 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:18:01.123 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@546 -- # local dev _ 00:18:01.123 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:01.123 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@545 -- # nvme list 00:18:01.123 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ Node == /dev/nvme* ]] 00:18:01.123 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:01.123 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ --------------------- == /dev/nvme* ]] 00:18:01.123 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:01.123 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:01.123 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # echo /dev/nvme0n1 00:18:01.123 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:01.123 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:01.123 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # echo /dev/nvme0n2 00:18:01.123 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:01.123 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:18:01.123 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:01.123 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:01.123 14:33:52 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:01.123 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:18:01.123 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:01.123 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:01.123 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:01.123 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:01.123 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:18:01.123 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:18:01.123 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:01.123 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.123 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:01.123 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.123 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:18:01.123 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:18:01.123 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # nvmfcleanup 00:18:01.123 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:18:01.123 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:01.123 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:18:01.123 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:01.123 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:01.123 rmmod nvme_tcp 00:18:01.123 rmmod nvme_fabrics 00:18:01.123 rmmod nvme_keyring 00:18:01.123 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:01.123 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:18:01.123 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:18:01.123 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@513 -- # '[' -n 1361558 ']' 00:18:01.123 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@514 -- # killprocess 1361558 00:18:01.123 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 1361558 ']' 00:18:01.123 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 1361558 00:18:01.123 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:18:01.123 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:01.123 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 
1361558 00:18:01.123 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:01.123 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:01.123 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1361558' 00:18:01.123 killing process with pid 1361558 00:18:01.123 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 1361558 00:18:01.123 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 1361558 00:18:01.123 14:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:18:01.123 14:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:18:01.123 14:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:18:01.123 14:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:18:01.123 14:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@787 -- # iptables-save 00:18:01.123 14:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@787 -- # iptables-restore 00:18:01.123 14:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:18:01.123 14:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:01.123 14:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:01.123 14:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:01.123 14:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:01.123 14:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:03.660 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:03.660 00:18:03.660 real 0m8.341s 00:18:03.660 user 0m14.882s 00:18:03.660 sys 0m2.270s 00:18:03.660 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:03.660 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:03.660 ************************************ 00:18:03.660 END TEST nvmf_nvme_cli 00:18:03.660 ************************************ 00:18:03.660 14:33:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:18:03.660 14:33:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:18:03.660 14:33:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:03.660 14:33:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:03.660 14:33:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:03.660 ************************************ 00:18:03.660 START TEST nvmf_vfio_user 00:18:03.660 ************************************ 00:18:03.660 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:18:03.660 * Looking for test storage... 00:18:03.660 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:03.660 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:03.660 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # lcov --version 00:18:03.660 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:03.660 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:03.660 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:03.660 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:03.660 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:03.660 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:18:03.660 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:18:03.660 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:18:03.660 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:18:03.660 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:18:03.660 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:18:03.660 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:18:03.660 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:03.660 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:18:03.660 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:18:03.660 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:03.660 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:03.661 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:18:03.661 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:18:03.661 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:03.661 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:18:03.661 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:18:03.661 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:18:03.661 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:18:03.661 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:03.661 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:18:03.661 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:18:03.661 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:03.661 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:03.661 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:18:03.661 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:03.661 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:03.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:03.661 --rc genhtml_branch_coverage=1 00:18:03.661 --rc genhtml_function_coverage=1 00:18:03.661 --rc genhtml_legend=1 00:18:03.661 --rc geninfo_all_blocks=1 00:18:03.661 --rc geninfo_unexecuted_blocks=1 00:18:03.661 00:18:03.661 ' 00:18:03.661 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:03.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:03.661 --rc genhtml_branch_coverage=1 00:18:03.661 --rc genhtml_function_coverage=1 00:18:03.661 --rc genhtml_legend=1 00:18:03.661 --rc geninfo_all_blocks=1 00:18:03.661 --rc geninfo_unexecuted_blocks=1 00:18:03.661 00:18:03.661 ' 00:18:03.661 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:03.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:03.661 --rc genhtml_branch_coverage=1 00:18:03.661 --rc genhtml_function_coverage=1 00:18:03.661 --rc genhtml_legend=1 00:18:03.661 --rc geninfo_all_blocks=1 00:18:03.661 --rc geninfo_unexecuted_blocks=1 00:18:03.661 00:18:03.661 ' 00:18:03.661 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:03.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:03.661 --rc genhtml_branch_coverage=1 00:18:03.661 --rc genhtml_function_coverage=1 00:18:03.661 --rc genhtml_legend=1 00:18:03.661 --rc geninfo_all_blocks=1 00:18:03.661 --rc geninfo_unexecuted_blocks=1 00:18:03.661 00:18:03.661 ' 00:18:03.661 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:03.661 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:18:03.661 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:03.661 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:03.661 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:03.661 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:03.661 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:03.661 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:03.661 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:03.661 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:03.661 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:03.661 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:03.661 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:03.661 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:03.661 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:03.661 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:03.661 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:03.661 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:03.661 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:03.661 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:18:03.661 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:03.661 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:03.661 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:03.661 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.661 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.661 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.661 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:18:03.661 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.661 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:18:03.661 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:03.661 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:03.661 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:03.661 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:03.661 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:03.661 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:03.661 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:03.661 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:03.661 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:03.661 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:03.661 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:03.661 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
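For the nvmf_nvme_cli run that finished above, the core of the test was plain nvme-cli driven from the initiator side once nqn.2016-06.io.spdk:cnode1 exposed Malloc0 and Malloc1 on 10.0.0.2:4420: discover, connect, count block devices by serial, disconnect, delete the subsystem. A condensed sketch of that sequence; hostnqn and hostid are generated per run by nvme gen-hostnqn, as in the trace.

HOSTNQN=$(nvme gen-hostnqn)            # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
HOSTID=${HOSTNQN##*uuid:}              # the harness reuses the same UUID as the host ID

nvme discover --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -a 10.0.0.2 -s 4420
nvme connect  --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -a 10.0.0.2 -s 4420 \
              -n nqn.2016-06.io.spdk:cnode1

# Both namespaces (Malloc0 and Malloc1) should show up carrying the subsystem serial
lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME     # expected: 2

nvme disconnect -n nqn.2016-06.io.spdk:cnode1
$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1      # RPC as defined in the sketch above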
00:18:03.661 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:18:03.661 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:03.661 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:18:03.661 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:18:03.661 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:18:03.661 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:18:03.661 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:18:03.661 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:18:03.661 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1362442 00:18:03.661 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:18:03.661 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1362442' 00:18:03.661 Process pid: 1362442 00:18:03.661 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:03.661 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1362442 00:18:03.661 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 1362442 ']' 00:18:03.661 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:03.661 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:03.662 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:03.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:03.662 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:03.662 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:03.662 [2024-11-02 14:33:55.401372] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:18:03.662 [2024-11-02 14:33:55.401457] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:03.662 [2024-11-02 14:33:55.459460] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:03.662 [2024-11-02 14:33:55.547991] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:03.662 [2024-11-02 14:33:55.548070] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:03.662 [2024-11-02 14:33:55.548098] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:03.662 [2024-11-02 14:33:55.548109] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:03.662 [2024-11-02 14:33:55.548119] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:03.662 [2024-11-02 14:33:55.548205] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:03.662 [2024-11-02 14:33:55.548336] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:18:03.662 [2024-11-02 14:33:55.548365] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:18:03.662 [2024-11-02 14:33:55.548368] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:03.662 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:03.662 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:18:03.662 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:18:05.035 14:33:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:18:05.035 14:33:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:18:05.035 14:33:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:18:05.035 14:33:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:05.035 14:33:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:18:05.035 14:33:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:05.293 Malloc1 00:18:05.551 14:33:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:18:05.809 14:33:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:18:06.066 14:33:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:18:06.324 14:33:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:06.324 14:33:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:18:06.324 14:33:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:06.582 Malloc2 00:18:06.582 14:33:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
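For reference, the setup_nvmf_vfio_user flow traced above (and completed for the second device just below with the matching add_ns and add_listener calls) condenses to a handful of RPCs against the running nvmf_tgt; a minimal sketch, assuming rpc.py is invoked from the root of an SPDK checkout:

    rpc=scripts/rpc.py                      # path assumed relative to the SPDK tree
    $rpc nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    for i in 1 2; do
        mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
        $rpc bdev_malloc_create 64 512 -b Malloc$i            # 64 MB malloc bdev, 512-byte blocks
        $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
            -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
    done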
00:18:06.840 14:33:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:18:07.097 14:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:18:07.356 14:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:18:07.356 14:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:18:07.356 14:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:07.356 14:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:18:07.356 14:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:18:07.356 14:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:18:07.356 [2024-11-02 14:33:59.287871] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:18:07.356 [2024-11-02 14:33:59.287905] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1362867 ] 00:18:07.356 [2024-11-02 14:33:59.318476] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:18:07.356 [2024-11-02 14:33:59.327657] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:07.356 [2024-11-02 14:33:59.327685] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fab793cf000 00:18:07.356 [2024-11-02 14:33:59.328657] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:07.356 [2024-11-02 14:33:59.329651] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:07.356 [2024-11-02 14:33:59.330656] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:07.356 [2024-11-02 14:33:59.331660] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:07.356 [2024-11-02 14:33:59.332667] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:07.356 [2024-11-02 14:33:59.333675] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:07.356 [2024-11-02 14:33:59.334681] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:18:07.356 [2024-11-02 14:33:59.335683] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:07.356 [2024-11-02 14:33:59.336691] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:07.356 [2024-11-02 14:33:59.336711] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fab780c7000 00:18:07.356 [2024-11-02 14:33:59.337838] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:07.356 [2024-11-02 14:33:59.352928] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:18:07.356 [2024-11-02 14:33:59.352975] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:18:07.356 [2024-11-02 14:33:59.357821] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:18:07.356 [2024-11-02 14:33:59.357874] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:18:07.356 [2024-11-02 14:33:59.357966] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:18:07.356 [2024-11-02 14:33:59.357998] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:18:07.356 [2024-11-02 14:33:59.358009] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:18:07.356 [2024-11-02 14:33:59.358817] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:18:07.356 [2024-11-02 14:33:59.358837] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:18:07.356 [2024-11-02 14:33:59.358849] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:18:07.356 [2024-11-02 14:33:59.359816] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:18:07.356 [2024-11-02 14:33:59.359834] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:18:07.356 [2024-11-02 14:33:59.359847] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:18:07.356 [2024-11-02 14:33:59.363268] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:18:07.356 [2024-11-02 14:33:59.363288] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:07.356 [2024-11-02 14:33:59.363837] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:18:07.356 [2024-11-02 
14:33:59.363855] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:18:07.356 [2024-11-02 14:33:59.363863] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:18:07.356 [2024-11-02 14:33:59.363879] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:07.356 [2024-11-02 14:33:59.363989] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:18:07.356 [2024-11-02 14:33:59.363996] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:07.356 [2024-11-02 14:33:59.364005] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:18:07.356 [2024-11-02 14:33:59.364848] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:18:07.356 [2024-11-02 14:33:59.365849] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:18:07.356 [2024-11-02 14:33:59.366853] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:18:07.356 [2024-11-02 14:33:59.367850] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:07.356 [2024-11-02 14:33:59.367978] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:07.356 [2024-11-02 14:33:59.368866] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:18:07.357 [2024-11-02 14:33:59.368883] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:07.357 [2024-11-02 14:33:59.368892] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:18:07.357 [2024-11-02 14:33:59.368915] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:18:07.357 [2024-11-02 14:33:59.368933] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:18:07.357 [2024-11-02 14:33:59.368962] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:07.357 [2024-11-02 14:33:59.368972] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:07.357 [2024-11-02 14:33:59.368978] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:07.357 [2024-11-02 14:33:59.368998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:07.357 [2024-11-02 14:33:59.369062] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:18:07.357 [2024-11-02 14:33:59.369080] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:18:07.357 [2024-11-02 14:33:59.369088] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:18:07.357 [2024-11-02 14:33:59.369095] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:18:07.357 [2024-11-02 14:33:59.369102] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:07.357 [2024-11-02 14:33:59.369109] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:18:07.357 [2024-11-02 14:33:59.369116] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:18:07.357 [2024-11-02 14:33:59.369124] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:18:07.357 [2024-11-02 14:33:59.369141] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:18:07.357 [2024-11-02 14:33:59.369157] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:07.357 [2024-11-02 14:33:59.369178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:07.357 [2024-11-02 14:33:59.369194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:07.357 [2024-11-02 14:33:59.369206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:07.357 [2024-11-02 14:33:59.369218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:07.357 [2024-11-02 14:33:59.369229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:07.357 [2024-11-02 14:33:59.369238] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:18:07.357 [2024-11-02 14:33:59.369253] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:07.357 [2024-11-02 14:33:59.369290] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:07.357 [2024-11-02 14:33:59.369303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:07.357 [2024-11-02 14:33:59.369314] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:18:07.357 [2024-11-02 14:33:59.369322] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:07.357 [2024-11-02 14:33:59.369333] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:18:07.357 [2024-11-02 14:33:59.369349] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:18:07.357 [2024-11-02 14:33:59.369364] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:07.357 [2024-11-02 14:33:59.369383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:07.357 [2024-11-02 14:33:59.369450] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:18:07.357 [2024-11-02 14:33:59.369465] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:18:07.357 [2024-11-02 14:33:59.369478] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:07.357 [2024-11-02 14:33:59.369486] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:18:07.357 [2024-11-02 14:33:59.369492] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:07.357 [2024-11-02 14:33:59.369502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:07.357 [2024-11-02 14:33:59.369523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:07.357 [2024-11-02 14:33:59.369540] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:18:07.357 [2024-11-02 14:33:59.369559] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:18:07.357 [2024-11-02 14:33:59.369590] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:18:07.357 [2024-11-02 14:33:59.369603] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:07.357 [2024-11-02 14:33:59.369611] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:07.357 [2024-11-02 14:33:59.369616] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:07.357 [2024-11-02 14:33:59.369625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:07.357 [2024-11-02 14:33:59.369661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:18:07.357 [2024-11-02 14:33:59.369683] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:07.357 [2024-11-02 14:33:59.369697] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:07.357 [2024-11-02 14:33:59.369709] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:07.357 [2024-11-02 14:33:59.369716] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:07.357 [2024-11-02 14:33:59.369722] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:07.357 [2024-11-02 14:33:59.369731] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:07.357 [2024-11-02 14:33:59.369745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:18:07.357 [2024-11-02 14:33:59.369759] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:07.357 [2024-11-02 14:33:59.369770] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:18:07.357 [2024-11-02 14:33:59.369783] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:18:07.357 [2024-11-02 14:33:59.369795] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:18:07.357 [2024-11-02 14:33:59.369803] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:07.357 [2024-11-02 14:33:59.369811] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:18:07.357 [2024-11-02 14:33:59.369820] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:18:07.357 [2024-11-02 14:33:59.369827] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:18:07.357 [2024-11-02 14:33:59.369835] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:18:07.357 [2024-11-02 14:33:59.369860] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:07.357 [2024-11-02 14:33:59.369875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:07.357 [2024-11-02 14:33:59.369896] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:07.357 [2024-11-02 14:33:59.369908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:07.357 [2024-11-02 14:33:59.369924] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:07.357 [2024-11-02 14:33:59.369935] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:18:07.357 [2024-11-02 14:33:59.369950] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:07.357 [2024-11-02 14:33:59.369961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:07.357 [2024-11-02 14:33:59.369982] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:07.357 [2024-11-02 14:33:59.369992] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:07.357 [2024-11-02 14:33:59.369998] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:07.357 [2024-11-02 14:33:59.370004] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:07.357 [2024-11-02 14:33:59.370009] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:18:07.357 [2024-11-02 14:33:59.370018] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:07.357 [2024-11-02 14:33:59.370029] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:18:07.357 [2024-11-02 14:33:59.370037] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:18:07.357 [2024-11-02 14:33:59.370042] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:07.357 [2024-11-02 14:33:59.370051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:07.358 [2024-11-02 14:33:59.370061] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:07.358 [2024-11-02 14:33:59.370068] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:07.358 [2024-11-02 14:33:59.370074] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:07.358 [2024-11-02 14:33:59.370082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:07.358 [2024-11-02 14:33:59.370093] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:07.358 [2024-11-02 14:33:59.370101] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:07.358 [2024-11-02 14:33:59.370106] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:07.358 [2024-11-02 14:33:59.370115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:07.358 [2024-11-02 14:33:59.370125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:07.358 [2024-11-02 14:33:59.370144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:18:07.358 [2024-11-02 14:33:59.370162] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:18:07.358 [2024-11-02 14:33:59.370173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:18:07.358 ===================================================== 00:18:07.358 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:07.358 ===================================================== 00:18:07.358 Controller Capabilities/Features 00:18:07.358 ================================ 00:18:07.358 Vendor ID: 4e58 00:18:07.358 Subsystem Vendor ID: 4e58 00:18:07.358 Serial Number: SPDK1 00:18:07.358 Model Number: SPDK bdev Controller 00:18:07.358 Firmware Version: 24.09.1 00:18:07.358 Recommended Arb Burst: 6 00:18:07.358 IEEE OUI Identifier: 8d 6b 50 00:18:07.358 Multi-path I/O 00:18:07.358 May have multiple subsystem ports: Yes 00:18:07.358 May have multiple controllers: Yes 00:18:07.358 Associated with SR-IOV VF: No 00:18:07.358 Max Data Transfer Size: 131072 00:18:07.358 Max Number of Namespaces: 32 00:18:07.358 Max Number of I/O Queues: 127 00:18:07.358 NVMe Specification Version (VS): 1.3 00:18:07.358 NVMe Specification Version (Identify): 1.3 00:18:07.358 Maximum Queue Entries: 256 00:18:07.358 Contiguous Queues Required: Yes 00:18:07.358 Arbitration Mechanisms Supported 00:18:07.358 Weighted Round Robin: Not Supported 00:18:07.358 Vendor Specific: Not Supported 00:18:07.358 Reset Timeout: 15000 ms 00:18:07.358 Doorbell Stride: 4 bytes 00:18:07.358 NVM Subsystem Reset: Not Supported 00:18:07.358 Command Sets Supported 00:18:07.358 NVM Command Set: Supported 00:18:07.358 Boot Partition: Not Supported 00:18:07.358 Memory Page Size Minimum: 4096 bytes 00:18:07.358 Memory Page Size Maximum: 4096 bytes 00:18:07.358 Persistent Memory Region: Not Supported 00:18:07.358 Optional Asynchronous Events Supported 00:18:07.358 Namespace Attribute Notices: Supported 00:18:07.358 Firmware Activation Notices: Not Supported 00:18:07.358 ANA Change Notices: Not Supported 00:18:07.358 PLE Aggregate Log Change Notices: Not Supported 00:18:07.358 LBA Status Info Alert Notices: Not Supported 00:18:07.358 EGE Aggregate Log Change Notices: Not Supported 00:18:07.358 Normal NVM Subsystem Shutdown event: Not Supported 00:18:07.358 Zone Descriptor Change Notices: Not Supported 00:18:07.358 Discovery Log Change Notices: Not Supported 00:18:07.358 Controller Attributes 00:18:07.358 128-bit Host Identifier: Supported 00:18:07.358 Non-Operational Permissive Mode: Not Supported 00:18:07.358 NVM Sets: Not Supported 00:18:07.358 Read Recovery Levels: Not Supported 00:18:07.358 Endurance Groups: Not Supported 00:18:07.358 Predictable Latency Mode: Not Supported 00:18:07.358 Traffic Based Keep ALive: Not Supported 00:18:07.358 Namespace Granularity: Not Supported 00:18:07.358 SQ Associations: Not Supported 00:18:07.358 UUID List: Not Supported 00:18:07.358 Multi-Domain Subsystem: Not Supported 00:18:07.358 Fixed Capacity Management: Not Supported 00:18:07.358 Variable Capacity Management: Not Supported 00:18:07.358 Delete Endurance Group: Not Supported 00:18:07.358 Delete NVM Set: Not Supported 00:18:07.358 Extended LBA Formats Supported: Not Supported 00:18:07.358 Flexible Data Placement Supported: Not Supported 00:18:07.358 00:18:07.358 Controller Memory Buffer Support 00:18:07.358 ================================ 00:18:07.358 Supported: No 00:18:07.358 00:18:07.358 Persistent Memory Region Support 
00:18:07.358 ================================ 00:18:07.358 Supported: No 00:18:07.358 00:18:07.358 Admin Command Set Attributes 00:18:07.358 ============================ 00:18:07.358 Security Send/Receive: Not Supported 00:18:07.358 Format NVM: Not Supported 00:18:07.358 Firmware Activate/Download: Not Supported 00:18:07.358 Namespace Management: Not Supported 00:18:07.358 Device Self-Test: Not Supported 00:18:07.358 Directives: Not Supported 00:18:07.358 NVMe-MI: Not Supported 00:18:07.358 Virtualization Management: Not Supported 00:18:07.358 Doorbell Buffer Config: Not Supported 00:18:07.358 Get LBA Status Capability: Not Supported 00:18:07.358 Command & Feature Lockdown Capability: Not Supported 00:18:07.358 Abort Command Limit: 4 00:18:07.358 Async Event Request Limit: 4 00:18:07.358 Number of Firmware Slots: N/A 00:18:07.358 Firmware Slot 1 Read-Only: N/A 00:18:07.358 Firmware Activation Without Reset: N/A 00:18:07.358 Multiple Update Detection Support: N/A 00:18:07.358 Firmware Update Granularity: No Information Provided 00:18:07.358 Per-Namespace SMART Log: No 00:18:07.358 Asymmetric Namespace Access Log Page: Not Supported 00:18:07.358 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:18:07.358 Command Effects Log Page: Supported 00:18:07.358 Get Log Page Extended Data: Supported 00:18:07.358 Telemetry Log Pages: Not Supported 00:18:07.358 Persistent Event Log Pages: Not Supported 00:18:07.358 Supported Log Pages Log Page: May Support 00:18:07.358 Commands Supported & Effects Log Page: Not Supported 00:18:07.358 Feature Identifiers & Effects Log Page:May Support 00:18:07.358 NVMe-MI Commands & Effects Log Page: May Support 00:18:07.358 Data Area 4 for Telemetry Log: Not Supported 00:18:07.358 Error Log Page Entries Supported: 128 00:18:07.358 Keep Alive: Supported 00:18:07.358 Keep Alive Granularity: 10000 ms 00:18:07.358 00:18:07.358 NVM Command Set Attributes 00:18:07.358 ========================== 00:18:07.358 Submission Queue Entry Size 00:18:07.358 Max: 64 00:18:07.358 Min: 64 00:18:07.358 Completion Queue Entry Size 00:18:07.358 Max: 16 00:18:07.358 Min: 16 00:18:07.358 Number of Namespaces: 32 00:18:07.358 Compare Command: Supported 00:18:07.358 Write Uncorrectable Command: Not Supported 00:18:07.358 Dataset Management Command: Supported 00:18:07.358 Write Zeroes Command: Supported 00:18:07.358 Set Features Save Field: Not Supported 00:18:07.358 Reservations: Not Supported 00:18:07.358 Timestamp: Not Supported 00:18:07.358 Copy: Supported 00:18:07.358 Volatile Write Cache: Present 00:18:07.358 Atomic Write Unit (Normal): 1 00:18:07.358 Atomic Write Unit (PFail): 1 00:18:07.358 Atomic Compare & Write Unit: 1 00:18:07.358 Fused Compare & Write: Supported 00:18:07.358 Scatter-Gather List 00:18:07.358 SGL Command Set: Supported (Dword aligned) 00:18:07.358 SGL Keyed: Not Supported 00:18:07.358 SGL Bit Bucket Descriptor: Not Supported 00:18:07.358 SGL Metadata Pointer: Not Supported 00:18:07.358 Oversized SGL: Not Supported 00:18:07.358 SGL Metadata Address: Not Supported 00:18:07.358 SGL Offset: Not Supported 00:18:07.358 Transport SGL Data Block: Not Supported 00:18:07.358 Replay Protected Memory Block: Not Supported 00:18:07.358 00:18:07.358 Firmware Slot Information 00:18:07.358 ========================= 00:18:07.358 Active slot: 1 00:18:07.358 Slot 1 Firmware Revision: 24.09.1 00:18:07.358 00:18:07.358 00:18:07.358 Commands Supported and Effects 00:18:07.358 ============================== 00:18:07.358 Admin Commands 00:18:07.358 -------------- 00:18:07.358 Get Log Page (02h): 
Supported 00:18:07.358 Identify (06h): Supported 00:18:07.358 Abort (08h): Supported 00:18:07.358 Set Features (09h): Supported 00:18:07.358 Get Features (0Ah): Supported 00:18:07.358 Asynchronous Event Request (0Ch): Supported 00:18:07.358 Keep Alive (18h): Supported 00:18:07.358 I/O Commands 00:18:07.358 ------------ 00:18:07.358 Flush (00h): Supported LBA-Change 00:18:07.358 Write (01h): Supported LBA-Change 00:18:07.358 Read (02h): Supported 00:18:07.358 Compare (05h): Supported 00:18:07.358 Write Zeroes (08h): Supported LBA-Change 00:18:07.358 Dataset Management (09h): Supported LBA-Change 00:18:07.358 Copy (19h): Supported LBA-Change 00:18:07.358 00:18:07.358 Error Log 00:18:07.358 ========= 00:18:07.359 00:18:07.359 Arbitration 00:18:07.359 =========== 00:18:07.359 Arbitration Burst: 1 00:18:07.359 00:18:07.359 Power Management 00:18:07.359 ================ 00:18:07.359 Number of Power States: 1 00:18:07.359 Current Power State: Power State #0 00:18:07.359 Power State #0: 00:18:07.359 Max Power: 0.00 W 00:18:07.359 Non-Operational State: Operational 00:18:07.359 Entry Latency: Not Reported 00:18:07.359 Exit Latency: Not Reported 00:18:07.359 Relative Read Throughput: 0 00:18:07.359 Relative Read Latency: 0 00:18:07.359 Relative Write Throughput: 0 00:18:07.359 Relative Write Latency: 0 00:18:07.359 Idle Power: Not Reported 00:18:07.359 Active Power: Not Reported 00:18:07.359 Non-Operational Permissive Mode: Not Supported 00:18:07.359 00:18:07.359 Health Information 00:18:07.359 ================== 00:18:07.359 Critical Warnings: 00:18:07.359 Available Spare Space: OK 00:18:07.359 Temperature: OK 00:18:07.359 Device Reliability: OK 00:18:07.359 Read Only: No 00:18:07.359 Volatile Memory Backup: OK 00:18:07.359 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:07.359 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:07.359 Available Spare: 0% 00:18:07.359 Availabl[2024-11-02 14:33:59.370327] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:07.359 [2024-11-02 14:33:59.370348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:18:07.359 [2024-11-02 14:33:59.370392] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:18:07.359 [2024-11-02 14:33:59.370410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.359 [2024-11-02 14:33:59.370422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.359 [2024-11-02 14:33:59.370432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.359 [2024-11-02 14:33:59.370442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.359 [2024-11-02 14:33:59.370876] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:18:07.359 [2024-11-02 14:33:59.370896] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:18:07.359 [2024-11-02 14:33:59.371877] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling 
controller 00:18:07.359 [2024-11-02 14:33:59.371949] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:18:07.359 [2024-11-02 14:33:59.371962] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:18:07.359 [2024-11-02 14:33:59.372888] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:18:07.359 [2024-11-02 14:33:59.372910] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:18:07.359 [2024-11-02 14:33:59.372972] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:18:07.359 [2024-11-02 14:33:59.376268] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:07.617 e Spare Threshold: 0% 00:18:07.617 Life Percentage Used: 0% 00:18:07.617 Data Units Read: 0 00:18:07.617 Data Units Written: 0 00:18:07.617 Host Read Commands: 0 00:18:07.617 Host Write Commands: 0 00:18:07.617 Controller Busy Time: 0 minutes 00:18:07.617 Power Cycles: 0 00:18:07.617 Power On Hours: 0 hours 00:18:07.617 Unsafe Shutdowns: 0 00:18:07.617 Unrecoverable Media Errors: 0 00:18:07.617 Lifetime Error Log Entries: 0 00:18:07.617 Warning Temperature Time: 0 minutes 00:18:07.617 Critical Temperature Time: 0 minutes 00:18:07.617 00:18:07.617 Number of Queues 00:18:07.617 ================ 00:18:07.617 Number of I/O Submission Queues: 127 00:18:07.617 Number of I/O Completion Queues: 127 00:18:07.617 00:18:07.617 Active Namespaces 00:18:07.617 ================= 00:18:07.617 Namespace ID:1 00:18:07.617 Error Recovery Timeout: Unlimited 00:18:07.617 Command Set Identifier: NVM (00h) 00:18:07.617 Deallocate: Supported 00:18:07.617 Deallocated/Unwritten Error: Not Supported 00:18:07.617 Deallocated Read Value: Unknown 00:18:07.617 Deallocate in Write Zeroes: Not Supported 00:18:07.617 Deallocated Guard Field: 0xFFFF 00:18:07.617 Flush: Supported 00:18:07.617 Reservation: Supported 00:18:07.617 Namespace Sharing Capabilities: Multiple Controllers 00:18:07.617 Size (in LBAs): 131072 (0GiB) 00:18:07.617 Capacity (in LBAs): 131072 (0GiB) 00:18:07.617 Utilization (in LBAs): 131072 (0GiB) 00:18:07.617 NGUID: 27F437B4EE064C188FA397D6910105B3 00:18:07.617 UUID: 27f437b4-ee06-4c18-8fa3-97d6910105b3 00:18:07.617 Thin Provisioning: Not Supported 00:18:07.617 Per-NS Atomic Units: Yes 00:18:07.617 Atomic Boundary Size (Normal): 0 00:18:07.617 Atomic Boundary Size (PFail): 0 00:18:07.617 Atomic Boundary Offset: 0 00:18:07.617 Maximum Single Source Range Length: 65535 00:18:07.617 Maximum Copy Length: 65535 00:18:07.617 Maximum Source Range Count: 1 00:18:07.617 NGUID/EUI64 Never Reused: No 00:18:07.617 Namespace Write Protected: No 00:18:07.617 Number of LBA Formats: 1 00:18:07.617 Current LBA Format: LBA Format #00 00:18:07.617 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:07.617 00:18:07.617 14:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:18:07.617 [2024-11-02 14:33:59.608128] vfio_user.c:2836:enable_ctrlr: *NOTICE*: 
/var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:12.879 Initializing NVMe Controllers 00:18:12.879 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:12.879 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:18:12.879 Initialization complete. Launching workers. 00:18:12.879 ======================================================== 00:18:12.879 Latency(us) 00:18:12.879 Device Information : IOPS MiB/s Average min max 00:18:12.879 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 33145.96 129.48 3863.33 1188.91 11527.11 00:18:12.879 ======================================================== 00:18:12.879 Total : 33145.96 129.48 3863.33 1188.91 11527.11 00:18:12.879 00:18:12.879 [2024-11-02 14:34:04.629657] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:12.879 14:34:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:12.879 [2024-11-02 14:34:04.875843] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:18.145 Initializing NVMe Controllers 00:18:18.145 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:18.145 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:18:18.145 Initialization complete. Launching workers. 00:18:18.145 ======================================================== 00:18:18.145 Latency(us) 00:18:18.145 Device Information : IOPS MiB/s Average min max 00:18:18.145 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16005.98 62.52 8006.96 5995.25 15986.01 00:18:18.145 ======================================================== 00:18:18.145 Total : 16005.98 62.52 8006.96 5995.25 15986.01 00:18:18.145 00:18:18.145 [2024-11-02 14:34:09.913270] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:18.145 14:34:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:18.145 [2024-11-02 14:34:10.121360] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:23.463 [2024-11-02 14:34:15.191678] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:23.463 Initializing NVMe Controllers 00:18:23.463 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:23.463 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:23.463 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:18:23.463 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:18:23.463 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:18:23.463 Initialization complete. Launching workers. 
00:18:23.463 Starting thread on core 2 00:18:23.463 Starting thread on core 3 00:18:23.463 Starting thread on core 1 00:18:23.463 14:34:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:18:23.463 [2024-11-02 14:34:15.498749] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:27.691 [2024-11-02 14:34:19.281505] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:27.691 Initializing NVMe Controllers 00:18:27.691 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:27.691 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:27.692 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:18:27.692 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:18:27.692 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:18:27.692 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:18:27.692 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:27.692 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:27.692 Initialization complete. Launching workers. 00:18:27.692 Starting thread on core 1 with urgent priority queue 00:18:27.692 Starting thread on core 2 with urgent priority queue 00:18:27.692 Starting thread on core 3 with urgent priority queue 00:18:27.692 Starting thread on core 0 with urgent priority queue 00:18:27.692 SPDK bdev Controller (SPDK1 ) core 0: 375.00 IO/s 266.67 secs/100000 ios 00:18:27.692 SPDK bdev Controller (SPDK1 ) core 1: 537.00 IO/s 186.22 secs/100000 ios 00:18:27.692 SPDK bdev Controller (SPDK1 ) core 2: 513.67 IO/s 194.68 secs/100000 ios 00:18:27.692 SPDK bdev Controller (SPDK1 ) core 3: 401.33 IO/s 249.17 secs/100000 ios 00:18:27.692 ======================================================== 00:18:27.692 00:18:27.692 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:27.692 [2024-11-02 14:34:19.573790] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:27.692 Initializing NVMe Controllers 00:18:27.692 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:27.692 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:27.692 Namespace ID: 1 size: 0GB 00:18:27.692 Initialization complete. 00:18:27.692 INFO: using host memory buffer for IO 00:18:27.692 Hello world! 
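Every client tool in this block (spdk_nvme_identify, spdk_nvme_perf, reconnect, arbitration, hello_world, and the overhead test that follows) reaches the target the same way: through a transport ID string rather than a PCI address. A minimal sketch of the pattern, reusing the read-perf parameters from the run above:

    traddr=/var/run/vfio-user/domain/vfio-user1/1
    subnqn=nqn.2019-07.io.spdk:cnode1
    build/bin/spdk_nvme_perf \
        -r "trtype:VFIOUSER traddr:$traddr subnqn:$subnqn" \
        -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2    # same options as the first perf run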
00:18:27.692 [2024-11-02 14:34:19.608371] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:27.692 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:27.951 [2024-11-02 14:34:19.896821] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:28.891 Initializing NVMe Controllers 00:18:28.891 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:28.891 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:28.891 Initialization complete. Launching workers. 00:18:28.891 submit (in ns) avg, min, max = 10320.3, 3517.8, 4017155.6 00:18:28.891 complete (in ns) avg, min, max = 25497.6, 2065.6, 4022654.4 00:18:28.891 00:18:28.891 Submit histogram 00:18:28.891 ================ 00:18:28.892 Range in us Cumulative Count 00:18:28.892 3.508 - 3.532: 0.1333% ( 17) 00:18:28.892 3.532 - 3.556: 0.5803% ( 57) 00:18:28.892 3.556 - 3.579: 2.0231% ( 184) 00:18:28.892 3.579 - 3.603: 4.8381% ( 359) 00:18:28.892 3.603 - 3.627: 10.7975% ( 760) 00:18:28.892 3.627 - 3.650: 18.2153% ( 946) 00:18:28.892 3.650 - 3.674: 25.6567% ( 949) 00:18:28.892 3.674 - 3.698: 32.3453% ( 853) 00:18:28.892 3.698 - 3.721: 38.4929% ( 784) 00:18:28.892 3.721 - 3.745: 43.4486% ( 632) 00:18:28.892 3.745 - 3.769: 47.6907% ( 541) 00:18:28.892 3.769 - 3.793: 52.0270% ( 553) 00:18:28.892 3.793 - 3.816: 56.0731% ( 516) 00:18:28.892 3.816 - 3.840: 59.9624% ( 496) 00:18:28.892 3.840 - 3.864: 64.5025% ( 579) 00:18:28.892 3.864 - 3.887: 69.4817% ( 635) 00:18:28.892 3.887 - 3.911: 73.9826% ( 574) 00:18:28.892 3.911 - 3.935: 77.8876% ( 498) 00:18:28.892 3.935 - 3.959: 80.8359% ( 376) 00:18:28.892 3.959 - 3.982: 83.1255% ( 292) 00:18:28.892 3.982 - 4.006: 85.0859% ( 250) 00:18:28.892 4.006 - 4.030: 86.8658% ( 227) 00:18:28.892 4.030 - 4.053: 88.2773% ( 180) 00:18:28.892 4.053 - 4.077: 89.4064% ( 144) 00:18:28.892 4.077 - 4.101: 90.5199% ( 142) 00:18:28.892 4.101 - 4.124: 91.5863% ( 136) 00:18:28.892 4.124 - 4.148: 92.6841% ( 140) 00:18:28.892 4.148 - 4.172: 93.5231% ( 107) 00:18:28.892 4.172 - 4.196: 93.9857% ( 59) 00:18:28.892 4.196 - 4.219: 94.3778% ( 50) 00:18:28.892 4.219 - 4.243: 94.7228% ( 44) 00:18:28.892 4.243 - 4.267: 95.0286% ( 39) 00:18:28.892 4.267 - 4.290: 95.3972% ( 47) 00:18:28.892 4.290 - 4.314: 95.5618% ( 21) 00:18:28.892 4.314 - 4.338: 95.7108% ( 19) 00:18:28.892 4.338 - 4.361: 95.8676% ( 20) 00:18:28.892 4.361 - 4.385: 95.9617% ( 12) 00:18:28.892 4.385 - 4.409: 96.0794% ( 15) 00:18:28.892 4.409 - 4.433: 96.1656% ( 11) 00:18:28.892 4.433 - 4.456: 96.2362% ( 9) 00:18:28.892 4.456 - 4.480: 96.2911% ( 7) 00:18:28.892 4.480 - 4.504: 96.3381% ( 6) 00:18:28.892 4.504 - 4.527: 96.4244% ( 11) 00:18:28.892 4.527 - 4.551: 96.4871% ( 8) 00:18:28.892 4.551 - 4.575: 96.5263% ( 5) 00:18:28.892 4.575 - 4.599: 96.5734% ( 6) 00:18:28.892 4.599 - 4.622: 96.5812% ( 1) 00:18:28.892 4.622 - 4.646: 96.5969% ( 2) 00:18:28.892 4.646 - 4.670: 96.6361% ( 5) 00:18:28.892 4.670 - 4.693: 96.6439% ( 1) 00:18:28.892 4.717 - 4.741: 96.6675% ( 3) 00:18:28.892 4.741 - 4.764: 96.6831% ( 2) 00:18:28.892 4.764 - 4.788: 96.7067% ( 3) 00:18:28.892 4.788 - 4.812: 96.7380% ( 4) 00:18:28.892 4.812 - 4.836: 96.7694% ( 4) 00:18:28.892 4.836 - 4.859: 96.8243% ( 7) 00:18:28.892 4.859 - 4.883: 96.8713% ( 6) 00:18:28.892 
4.883 - 4.907: 96.9341% ( 8) 00:18:28.892 4.907 - 4.930: 96.9733% ( 5) 00:18:28.892 4.930 - 4.954: 97.0438% ( 9) 00:18:28.892 4.954 - 4.978: 97.1144% ( 9) 00:18:28.892 4.978 - 5.001: 97.1536% ( 5) 00:18:28.892 5.001 - 5.025: 97.1693% ( 2) 00:18:28.892 5.025 - 5.049: 97.2007% ( 4) 00:18:28.892 5.049 - 5.073: 97.2242% ( 3) 00:18:28.892 5.073 - 5.096: 97.2477% ( 3) 00:18:28.892 5.096 - 5.120: 97.3104% ( 8) 00:18:28.892 5.120 - 5.144: 97.3575% ( 6) 00:18:28.892 5.144 - 5.167: 97.4124% ( 7) 00:18:28.892 5.167 - 5.191: 97.4516% ( 5) 00:18:28.892 5.191 - 5.215: 97.4829% ( 4) 00:18:28.892 5.215 - 5.239: 97.5065% ( 3) 00:18:28.892 5.239 - 5.262: 97.5535% ( 6) 00:18:28.892 5.262 - 5.286: 97.5849% ( 4) 00:18:28.892 5.286 - 5.310: 97.5927% ( 1) 00:18:28.892 5.310 - 5.333: 97.6398% ( 6) 00:18:28.892 5.333 - 5.357: 97.6633% ( 3) 00:18:28.892 5.357 - 5.381: 97.6711% ( 1) 00:18:28.892 5.381 - 5.404: 97.6868% ( 2) 00:18:28.892 5.404 - 5.428: 97.7260% ( 5) 00:18:28.892 5.428 - 5.452: 97.7417% ( 2) 00:18:28.892 5.452 - 5.476: 97.7495% ( 1) 00:18:28.892 5.476 - 5.499: 97.7574% ( 1) 00:18:28.892 5.523 - 5.547: 97.7731% ( 2) 00:18:28.892 5.547 - 5.570: 97.7809% ( 1) 00:18:28.892 5.594 - 5.618: 97.7888% ( 1) 00:18:28.892 5.618 - 5.641: 97.8044% ( 2) 00:18:28.892 5.641 - 5.665: 97.8123% ( 1) 00:18:28.892 5.689 - 5.713: 97.8358% ( 3) 00:18:28.892 5.831 - 5.855: 97.8436% ( 1) 00:18:28.892 5.926 - 5.950: 97.8515% ( 1) 00:18:28.892 5.973 - 5.997: 97.8593% ( 1) 00:18:28.892 6.021 - 6.044: 97.8672% ( 1) 00:18:28.892 6.400 - 6.447: 97.8829% ( 2) 00:18:28.892 6.495 - 6.542: 97.8907% ( 1) 00:18:28.892 6.637 - 6.684: 97.8985% ( 1) 00:18:28.892 6.684 - 6.732: 97.9064% ( 1) 00:18:28.892 6.779 - 6.827: 97.9142% ( 1) 00:18:28.892 7.064 - 7.111: 97.9221% ( 1) 00:18:28.892 7.111 - 7.159: 97.9299% ( 1) 00:18:28.892 7.253 - 7.301: 97.9377% ( 1) 00:18:28.892 7.348 - 7.396: 97.9456% ( 1) 00:18:28.892 7.490 - 7.538: 97.9534% ( 1) 00:18:28.892 7.585 - 7.633: 97.9613% ( 1) 00:18:28.892 7.633 - 7.680: 97.9691% ( 1) 00:18:28.892 7.680 - 7.727: 97.9769% ( 1) 00:18:28.892 7.964 - 8.012: 97.9926% ( 2) 00:18:28.892 8.012 - 8.059: 98.0005% ( 1) 00:18:28.892 8.107 - 8.154: 98.0083% ( 1) 00:18:28.892 8.201 - 8.249: 98.0240% ( 2) 00:18:28.892 8.249 - 8.296: 98.0318% ( 1) 00:18:28.892 8.533 - 8.581: 98.0475% ( 2) 00:18:28.892 8.628 - 8.676: 98.0632% ( 2) 00:18:28.892 8.676 - 8.723: 98.0710% ( 1) 00:18:28.892 8.723 - 8.770: 98.1024% ( 4) 00:18:28.892 8.770 - 8.818: 98.1181% ( 2) 00:18:28.892 8.818 - 8.865: 98.1416% ( 3) 00:18:28.892 8.865 - 8.913: 98.1573% ( 2) 00:18:28.892 8.960 - 9.007: 98.1651% ( 1) 00:18:28.892 9.055 - 9.102: 98.1730% ( 1) 00:18:28.892 9.197 - 9.244: 98.1808% ( 1) 00:18:28.892 9.244 - 9.292: 98.1887% ( 1) 00:18:28.892 9.292 - 9.339: 98.2043% ( 2) 00:18:28.892 9.339 - 9.387: 98.2122% ( 1) 00:18:28.892 9.387 - 9.434: 98.2200% ( 1) 00:18:28.892 9.434 - 9.481: 98.2279% ( 1) 00:18:28.892 9.671 - 9.719: 98.2357% ( 1) 00:18:28.892 9.719 - 9.766: 98.2514% ( 2) 00:18:28.892 9.766 - 9.813: 98.2592% ( 1) 00:18:28.892 9.861 - 9.908: 98.2671% ( 1) 00:18:28.892 9.908 - 9.956: 98.2749% ( 1) 00:18:28.892 9.956 - 10.003: 98.2828% ( 1) 00:18:28.892 10.003 - 10.050: 98.2984% ( 2) 00:18:28.892 10.098 - 10.145: 98.3063% ( 1) 00:18:28.892 10.193 - 10.240: 98.3141% ( 1) 00:18:28.892 10.287 - 10.335: 98.3298% ( 2) 00:18:28.892 10.382 - 10.430: 98.3376% ( 1) 00:18:28.892 10.572 - 10.619: 98.3455% ( 1) 00:18:28.892 10.714 - 10.761: 98.3533% ( 1) 00:18:28.892 10.761 - 10.809: 98.3612% ( 1) 00:18:28.892 10.856 - 10.904: 98.3690% ( 1) 00:18:28.892 10.904 
- 10.951: 98.3769% ( 1) 00:18:28.892 10.999 - 11.046: 98.3847% ( 1) 00:18:28.892 11.046 - 11.093: 98.3925% ( 1) 00:18:28.892 11.188 - 11.236: 98.4082% ( 2) 00:18:28.892 11.283 - 11.330: 98.4161% ( 1) 00:18:28.892 11.520 - 11.567: 98.4317% ( 2) 00:18:28.892 11.567 - 11.615: 98.4396% ( 1) 00:18:28.892 11.662 - 11.710: 98.4474% ( 1) 00:18:28.892 11.710 - 11.757: 98.4631% ( 2) 00:18:28.892 11.804 - 11.852: 98.4788% ( 2) 00:18:28.892 11.852 - 11.899: 98.4866% ( 1) 00:18:28.892 11.994 - 12.041: 98.4945% ( 1) 00:18:28.892 12.041 - 12.089: 98.5023% ( 1) 00:18:28.892 12.089 - 12.136: 98.5102% ( 1) 00:18:28.892 12.136 - 12.231: 98.5180% ( 1) 00:18:28.892 12.231 - 12.326: 98.5337% ( 2) 00:18:28.892 12.705 - 12.800: 98.5415% ( 1) 00:18:28.892 12.990 - 13.084: 98.5494% ( 1) 00:18:28.892 13.559 - 13.653: 98.5650% ( 2) 00:18:28.892 13.653 - 13.748: 98.5729% ( 1) 00:18:28.892 13.843 - 13.938: 98.5807% ( 1) 00:18:28.892 13.938 - 14.033: 98.5886% ( 1) 00:18:28.892 14.127 - 14.222: 98.5964% ( 1) 00:18:28.892 14.222 - 14.317: 98.6121% ( 2) 00:18:28.892 14.317 - 14.412: 98.6199% ( 1) 00:18:28.892 14.412 - 14.507: 98.6435% ( 3) 00:18:28.892 14.507 - 14.601: 98.6513% ( 1) 00:18:28.892 14.791 - 14.886: 98.6591% ( 1) 00:18:28.892 15.360 - 15.455: 98.6670% ( 1) 00:18:28.892 16.782 - 16.877: 98.6748% ( 1) 00:18:28.892 17.067 - 17.161: 98.6827% ( 1) 00:18:28.892 17.161 - 17.256: 98.6983% ( 2) 00:18:28.892 17.351 - 17.446: 98.7219% ( 3) 00:18:28.892 17.446 - 17.541: 98.7846% ( 8) 00:18:28.892 17.541 - 17.636: 98.7924% ( 1) 00:18:28.892 17.636 - 17.730: 98.8316% ( 5) 00:18:28.892 17.730 - 17.825: 98.9022% ( 9) 00:18:28.892 17.825 - 17.920: 98.9728% ( 9) 00:18:28.892 17.920 - 18.015: 99.0277% ( 7) 00:18:28.892 18.015 - 18.110: 99.0826% ( 7) 00:18:28.892 18.110 - 18.204: 99.1375% ( 7) 00:18:28.892 18.204 - 18.299: 99.2394% ( 13) 00:18:28.892 18.299 - 18.394: 99.3256% ( 11) 00:18:28.892 18.394 - 18.489: 99.3962% ( 9) 00:18:28.892 18.489 - 18.584: 99.4746% ( 10) 00:18:28.892 18.584 - 18.679: 99.5687% ( 12) 00:18:28.893 18.679 - 18.773: 99.6001% ( 4) 00:18:28.893 18.773 - 18.868: 99.6236% ( 3) 00:18:28.893 18.868 - 18.963: 99.6707% ( 6) 00:18:28.893 18.963 - 19.058: 99.7020% ( 4) 00:18:28.893 19.058 - 19.153: 99.7177% ( 2) 00:18:28.893 19.153 - 19.247: 99.7334% ( 2) 00:18:28.893 19.247 - 19.342: 99.7569% ( 3) 00:18:28.893 19.342 - 19.437: 99.7726% ( 2) 00:18:28.893 19.437 - 19.532: 99.7804% ( 1) 00:18:28.893 19.532 - 19.627: 99.7883% ( 1) 00:18:28.893 19.627 - 19.721: 99.7961% ( 1) 00:18:28.893 19.816 - 19.911: 99.8040% ( 1) 00:18:28.893 20.006 - 20.101: 99.8118% ( 1) 00:18:28.893 21.618 - 21.713: 99.8197% ( 1) 00:18:28.893 23.609 - 23.704: 99.8353% ( 2) 00:18:28.893 26.548 - 26.738: 99.8432% ( 1) 00:18:28.893 3980.705 - 4004.978: 99.9530% ( 14) 00:18:28.893 4004.978 - 4029.250: 100.0000% ( 6) 00:18:28.893 00:18:28.893 Complete histogram 00:18:28.893 ================== 00:18:28.893 Range in us Cumulative Count 00:18:28.893 2.062 - 2.074: 0.4783% ( 61) 00:18:28.893 2.074 - 2.086: 17.2116% ( 2134) 00:18:28.893 2.086 - 2.098: 34.9722% ( 2265) 00:18:28.893 2.098 - 2.110: 38.5007% ( 450) 00:18:28.893 2.110 - 2.121: 43.5349% ( 642) 00:18:28.893 2.121 - 2.133: 46.8047% ( 417) 00:18:28.893 2.133 - 2.145: 50.2078% ( 434) 00:18:28.893 2.145 - 2.157: 61.5698% ( 1449) 00:18:28.893 2.157 - 2.169: 67.6233% ( 772) 00:18:28.893 2.169 - 2.181: 69.3876% ( 225) 00:18:28.893 2.181 - 2.193: 71.9439% ( 326) 00:18:28.893 2.193 - 2.204: 73.5200% ( 201) 00:18:28.893 2.204 - 2.216: 74.8451% ( 169) 00:18:28.893 2.216 - 2.228: 80.9770% ( 782) 
00:18:28.893 2.228 - 2.240: 84.9369% ( 505) 00:18:28.893 2.240 - 2.252: 87.3520% ( 308) 00:18:28.893 2.252 - 2.264: 89.7906% ( 311) 00:18:28.893 2.264 - 2.276: 91.0374% ( 159) 00:18:28.893 2.276 - 2.287: 91.5785% ( 69) 00:18:28.893 2.287 - 2.299: 92.1038% ( 67) 00:18:28.893 2.299 - 2.311: 92.5037% ( 51) 00:18:28.893 2.311 - 2.323: 93.7897% ( 164) 00:18:28.893 2.323 - 2.335: 94.5033% ( 91) 00:18:28.893 2.335 - 2.347: 94.7307% ( 29) 00:18:28.893 2.347 - 2.359: 94.7542% ( 3) 00:18:28.893 2.359 - 2.370: 94.7777% ( 3) 00:18:28.893 2.370 - 2.382: 94.8718% ( 12) 00:18:28.893 2.382 - 2.394: 95.1070% ( 30) 00:18:28.893 2.394 - 2.406: 95.4677% ( 46) 00:18:28.893 2.406 - 2.418: 95.5932% ( 16) 00:18:28.893 2.418 - 2.430: 95.6873% ( 12) 00:18:28.893 2.430 - 2.441: 95.7579% ( 9) 00:18:28.893 2.441 - 2.453: 95.8441% ( 11) 00:18:28.893 2.453 - 2.465: 95.9853% ( 18) 00:18:28.893 2.465 - 2.477: 96.1264% ( 18) 00:18:28.893 2.477 - 2.489: 96.2911% ( 21) 00:18:28.893 2.489 - 2.501: 96.4087% ( 15) 00:18:28.893 2.501 - 2.513: 96.5498% ( 18) 00:18:28.893 2.513 - 2.524: 96.7223% ( 22) 00:18:28.893 2.524 - 2.536: 96.8713% ( 19) 00:18:28.893 2.536 - 2.548: 97.0046% ( 17) 00:18:28.893 2.548 - 2.560: 97.1615% ( 20) 00:18:28.893 2.560 - 2.572: 97.2634% ( 13) 00:18:28.893 2.572 - 2.584: 97.3967% ( 17) 00:18:28.893 2.584 - 2.596: 97.5143% ( 15) 00:18:28.893 2.596 - 2.607: 97.5927% ( 10) 00:18:28.893 2.607 - 2.619: 97.6790% ( 11) 00:18:28.893 2.619 - 2.631: 97.7260% ( 6) 00:18:28.893 2.631 - 2.643: 97.8201% ( 12) 00:18:28.893 2.643 - 2.655: 97.8436% ( 3) 00:18:28.893 2.655 - 2.667: 97.8593% ( 2) 00:18:28.893 2.667 - 2.679: 97.8750% ( 2) 00:18:28.893 2.679 - 2.690: 97.8985% ( 3) 00:18:28.893 2.690 - 2.702: 97.9142% ( 2) 00:18:28.893 2.702 - 2.714: 97.9299% ( 2) 00:18:28.893 2.714 - 2.726: 97.9377% ( 1) 00:18:28.893 2.726 - 2.738: 97.9456% ( 1) 00:18:28.893 2.738 - 2.750: 97.9534% ( 1) 00:18:28.893 2.750 - 2.761: 97.9769% ( 3) 00:18:28.893 2.773 - 2.785: 97.9848% ( 1) 00:18:28.893 2.785 - 2.797: 98.0005% ( 2) 00:18:28.893 2.797 - 2.809: 98.0083% ( 1) 00:18:28.893 2.809 - 2.821: 98.0240% ( 2) 00:18:28.893 2.833 - 2.844: 98.0318% ( 1) 00:18:28.893 2.844 - 2.856: 98.0475% ( 2) 00:18:28.893 2.927 - 2.939: 98.0632% ( 2) 00:18:28.893 2.939 - 2.951: 98.0710% ( 1) 00:18:28.893 2.951 - 2.963: 98.0789% ( 1) 00:18:28.893 2.963 - 2.975: 98.1024% ( 3) 00:18:28.893 2.999 - 3.010: 98.1181% ( 2) 00:18:28.893 3.034 - 3.058: 98.1259% ( 1) 00:18:28.893 3.081 - 3.105: 98.1416% ( 2) 00:18:28.893 3.105 - 3.129: 98.1573% ( 2) 00:18:28.893 3.129 - 3.153: 98.1730% ( 2) 00:18:28.893 3.200 - 3.224: 98.1808% ( 1) 00:18:28.893 3.224 - 3.247: 98.1965% ( 2) 00:18:28.893 3.295 - 3.319: 98.2043% ( 1) 00:18:28.893 3.319 - 3.342: 98.2122% ( 1) 00:18:28.893 3.390 - 3.413: 98.2357% ( 3) 00:18:28.893 3.413 - 3.437: 98.2592% ( 3) 00:18:28.893 3.461 - 3.484: 98.2906% ( 4) 00:18:28.893 3.508 - 3.532: 98.3141% ( 3) 00:18:28.893 3.532 - 3.556: 98.3220% ( 1) 00:18:28.893 3.603 - 3.627: 98.3298% ( 1) 00:18:28.893 3.627 - 3.650: 98.3376% ( 1) 00:18:28.893 3.650 - 3.674: 98.3612% ( 3) 00:18:28.893 3.698 - 3.721: 98.3769% ( 2) 00:18:28.893 3.721 - 3.745: 98.3925% ( 2) 00:18:28.893 3.769 - 3.793: 98.4004% ( 1) 00:18:28.893 3.793 - 3.816: 98.4082% ( 1) 00:18:28.893 3.816 - 3.840: 98.4239% ( 2) 00:18:28.893 3.840 - 3.864: 98.4396% ( 2) 00:18:28.893 3.864 - 3.887: 98.4474% ( 1) 00:18:28.893 3.887 - 3.911: 98.4553% ( 1) 00:18:28.893 3.982 - 4.006: 98.4631% ( 1) 00:18:28.893 4.053 - 4.077: 98.4788% ( 2) 00:18:28.893 4.148 - 4.172: 98.4866% ( 1) 00:18:28.893 4.172 - 4.196: 
98.5023% ( 2) 00:18:28.893 4.219 - 4.243: 98.5102% ( 1) 00:18:28.893 4.883 - 4.907: 98.5180% ( 1) 00:18:28.893 5.736 - 5.760: 98.5258% ( 1) 00:18:28.893 6.353 - 6.400: 98.5337% ( 1) 00:18:28.893 6.400 - 6.447: 98.5415% ( 1) 00:18:28.893 6.590 - 6.637: 98.5494% ( 1) 00:18:28.893 6.637 - 6.684: 98.5650% ( 2) 00:18:28.893 6.732 - 6.779: 98.5729% ( 1) 00:18:28.893 7.206 - 7.253: 98.5807% ( 1) 00:18:28.893 7.348 - 7.396: 98.5886% ( 1) 00:18:28.893 7.490 - 7.538: 98.5964% ( 1) 00:18:28.893 7.538 - 7.585: 98.6042% ( 1) 00:18:28.893 7.727 - 7.775: 98.6121% ( 1) 00:18:28.893 7.870 - 7.917: 98.6199% ( 1) 00:18:28.893 8.012 - 8.059: 98.6278% ( 1) 00:18:28.893 8.296 - 8.344: 98.6356% ( 1) 00:18:28.893 8.628 - 8.676: 98.6435% ( 1) 00:18:28.893 8.818 - 8.865: 98.6513% ( 1) 00:18:28.893 8.913 - 8.960: 98.6591% ( 1) 00:18:28.893 10.240 - 10.287: 98.6670% ( 1) 00:18:28.893 10.904 - 10.951: 98.6748% ( 1) 00:18:28.893 12.895 - 12.990: 98.6827% ( 1) 00:18:28.893 13.559 - 13.653: 98.6905% ( 1) 00:18:28.893 15.550 - 15.644: 98.7140% ( 3) 00:18:28.893 15.739 - 15.834: 98.7219% ( 1) 00:18:28.893 15.834 - 15.929: 98.7297% ( 1) 00:18:28.893 15.929 - 16.024: 98.7532% ( 3) 00:18:28.893 16.024 - 16.119: 98.7924% ( 5) 00:18:28.893 16.119 - 16.213: 98.8160% ( 3) 00:18:28.893 16.213 - 16.308: 98.8630% ( 6) 00:18:28.893 16.308 - 16.403: 98.9179% ( 7) 00:18:28.893 16.403 - 16.498: 98.9728% ( 7) 00:18:28.893 16.498 - 16.593: 99.0512% ( 10) 00:18:28.893 16.593 - 16.687: 99.1139% ( 8) 00:18:28.893 16.687 - 16.782: 99.1688% ( 7) 00:18:28.893 16.782 - 16.877: 99.2237%[2024-11-02 14:34:20.918939] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:29.152 ( 7) 00:18:29.152 16.877 - 16.972: 99.2551% ( 4) 00:18:29.152 16.972 - 17.067: 99.2629% ( 1) 00:18:29.152 17.067 - 17.161: 99.3021% ( 5) 00:18:29.152 17.161 - 17.256: 99.3178% ( 2) 00:18:29.152 17.256 - 17.351: 99.3256% ( 1) 00:18:29.152 17.351 - 17.446: 99.3492% ( 3) 00:18:29.152 17.825 - 17.920: 99.3570% ( 1) 00:18:29.152 17.920 - 18.015: 99.3727% ( 2) 00:18:29.152 18.394 - 18.489: 99.3805% ( 1) 00:18:29.152 18.489 - 18.584: 99.3884% ( 1) 00:18:29.152 18.773 - 18.868: 99.3962% ( 1) 00:18:29.152 18.868 - 18.963: 99.4041% ( 1) 00:18:29.152 22.281 - 22.376: 99.4119% ( 1) 00:18:29.152 23.419 - 23.514: 99.4197% ( 1) 00:18:29.152 3980.705 - 4004.978: 99.8981% ( 61) 00:18:29.152 4004.978 - 4029.250: 100.0000% ( 13) 00:18:29.152 00:18:29.152 14:34:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:18:29.152 14:34:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:18:29.152 14:34:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:18:29.152 14:34:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:18:29.152 14:34:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:29.410 [ 00:18:29.410 { 00:18:29.410 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:29.410 "subtype": "Discovery", 00:18:29.410 "listen_addresses": [], 00:18:29.410 "allow_any_host": true, 00:18:29.410 "hosts": [] 00:18:29.410 }, 00:18:29.410 { 00:18:29.410 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:29.410 "subtype": 
"NVMe", 00:18:29.410 "listen_addresses": [ 00:18:29.410 { 00:18:29.410 "trtype": "VFIOUSER", 00:18:29.410 "adrfam": "IPv4", 00:18:29.410 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:29.410 "trsvcid": "0" 00:18:29.410 } 00:18:29.410 ], 00:18:29.410 "allow_any_host": true, 00:18:29.410 "hosts": [], 00:18:29.410 "serial_number": "SPDK1", 00:18:29.410 "model_number": "SPDK bdev Controller", 00:18:29.410 "max_namespaces": 32, 00:18:29.410 "min_cntlid": 1, 00:18:29.410 "max_cntlid": 65519, 00:18:29.410 "namespaces": [ 00:18:29.410 { 00:18:29.410 "nsid": 1, 00:18:29.410 "bdev_name": "Malloc1", 00:18:29.410 "name": "Malloc1", 00:18:29.410 "nguid": "27F437B4EE064C188FA397D6910105B3", 00:18:29.410 "uuid": "27f437b4-ee06-4c18-8fa3-97d6910105b3" 00:18:29.410 } 00:18:29.410 ] 00:18:29.410 }, 00:18:29.410 { 00:18:29.410 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:29.410 "subtype": "NVMe", 00:18:29.410 "listen_addresses": [ 00:18:29.410 { 00:18:29.410 "trtype": "VFIOUSER", 00:18:29.410 "adrfam": "IPv4", 00:18:29.410 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:29.410 "trsvcid": "0" 00:18:29.410 } 00:18:29.410 ], 00:18:29.410 "allow_any_host": true, 00:18:29.410 "hosts": [], 00:18:29.410 "serial_number": "SPDK2", 00:18:29.410 "model_number": "SPDK bdev Controller", 00:18:29.410 "max_namespaces": 32, 00:18:29.410 "min_cntlid": 1, 00:18:29.410 "max_cntlid": 65519, 00:18:29.410 "namespaces": [ 00:18:29.410 { 00:18:29.410 "nsid": 1, 00:18:29.410 "bdev_name": "Malloc2", 00:18:29.410 "name": "Malloc2", 00:18:29.410 "nguid": "AF2BEA525AC14D9C8009018CBE5FCBA1", 00:18:29.410 "uuid": "af2bea52-5ac1-4d9c-8009-018cbe5fcba1" 00:18:29.410 } 00:18:29.410 ] 00:18:29.410 } 00:18:29.410 ] 00:18:29.410 14:34:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:29.410 14:34:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1365500 00:18:29.410 14:34:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:18:29.410 14:34:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:29.410 14:34:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:18:29.410 14:34:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:29.410 14:34:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:18:29.410 14:34:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:18:29.410 14:34:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:29.410 14:34:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:18:29.410 [2024-11-02 14:34:21.413771] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:29.669 Malloc3 00:18:29.669 14:34:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:18:29.926 [2024-11-02 14:34:21.807666] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:29.926 14:34:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:29.926 Asynchronous Event Request test 00:18:29.926 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:29.926 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:29.926 Registering asynchronous event callbacks... 00:18:29.926 Starting namespace attribute notice tests for all controllers... 00:18:29.926 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:29.926 aer_cb - Changed Namespace 00:18:29.926 Cleaning up... 00:18:30.186 [ 00:18:30.186 { 00:18:30.186 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:30.186 "subtype": "Discovery", 00:18:30.186 "listen_addresses": [], 00:18:30.186 "allow_any_host": true, 00:18:30.186 "hosts": [] 00:18:30.186 }, 00:18:30.186 { 00:18:30.186 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:30.186 "subtype": "NVMe", 00:18:30.186 "listen_addresses": [ 00:18:30.186 { 00:18:30.186 "trtype": "VFIOUSER", 00:18:30.186 "adrfam": "IPv4", 00:18:30.186 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:30.186 "trsvcid": "0" 00:18:30.186 } 00:18:30.186 ], 00:18:30.186 "allow_any_host": true, 00:18:30.186 "hosts": [], 00:18:30.186 "serial_number": "SPDK1", 00:18:30.186 "model_number": "SPDK bdev Controller", 00:18:30.186 "max_namespaces": 32, 00:18:30.186 "min_cntlid": 1, 00:18:30.186 "max_cntlid": 65519, 00:18:30.186 "namespaces": [ 00:18:30.186 { 00:18:30.186 "nsid": 1, 00:18:30.186 "bdev_name": "Malloc1", 00:18:30.186 "name": "Malloc1", 00:18:30.186 "nguid": "27F437B4EE064C188FA397D6910105B3", 00:18:30.186 "uuid": "27f437b4-ee06-4c18-8fa3-97d6910105b3" 00:18:30.186 }, 00:18:30.186 { 00:18:30.186 "nsid": 2, 00:18:30.186 "bdev_name": "Malloc3", 00:18:30.186 "name": "Malloc3", 00:18:30.186 "nguid": "EF52FB38259F49FBAD35A8D5770DB3B0", 00:18:30.186 "uuid": "ef52fb38-259f-49fb-ad35-a8d5770db3b0" 00:18:30.186 } 00:18:30.186 ] 00:18:30.186 }, 00:18:30.186 { 00:18:30.186 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:30.186 "subtype": "NVMe", 00:18:30.186 "listen_addresses": [ 00:18:30.186 { 00:18:30.186 "trtype": "VFIOUSER", 00:18:30.186 "adrfam": "IPv4", 00:18:30.186 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:30.186 "trsvcid": "0" 00:18:30.186 } 00:18:30.186 ], 00:18:30.186 "allow_any_host": true, 00:18:30.186 "hosts": [], 00:18:30.186 "serial_number": "SPDK2", 00:18:30.186 "model_number": "SPDK bdev 
Controller", 00:18:30.186 "max_namespaces": 32, 00:18:30.186 "min_cntlid": 1, 00:18:30.186 "max_cntlid": 65519, 00:18:30.186 "namespaces": [ 00:18:30.186 { 00:18:30.186 "nsid": 1, 00:18:30.186 "bdev_name": "Malloc2", 00:18:30.186 "name": "Malloc2", 00:18:30.186 "nguid": "AF2BEA525AC14D9C8009018CBE5FCBA1", 00:18:30.186 "uuid": "af2bea52-5ac1-4d9c-8009-018cbe5fcba1" 00:18:30.186 } 00:18:30.186 ] 00:18:30.186 } 00:18:30.186 ] 00:18:30.186 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1365500 00:18:30.186 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:30.186 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:30.186 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:18:30.186 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:18:30.186 [2024-11-02 14:34:22.103520] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:18:30.186 [2024-11-02 14:34:22.103570] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1365524 ] 00:18:30.186 [2024-11-02 14:34:22.135183] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:18:30.186 [2024-11-02 14:34:22.144592] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:30.186 [2024-11-02 14:34:22.144637] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f04c7197000 00:18:30.186 [2024-11-02 14:34:22.145588] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:30.186 [2024-11-02 14:34:22.146593] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:30.186 [2024-11-02 14:34:22.147613] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:30.186 [2024-11-02 14:34:22.148605] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:30.186 [2024-11-02 14:34:22.149605] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:30.186 [2024-11-02 14:34:22.150618] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:30.186 [2024-11-02 14:34:22.151640] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:30.186 [2024-11-02 14:34:22.152629] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
00:18:30.186 [2024-11-02 14:34:22.153641] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:30.186 [2024-11-02 14:34:22.153663] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f04c5e8f000 00:18:30.186 [2024-11-02 14:34:22.154779] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:30.186 [2024-11-02 14:34:22.169972] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:18:30.186 [2024-11-02 14:34:22.170008] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:18:30.186 [2024-11-02 14:34:22.175115] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:30.186 [2024-11-02 14:34:22.175171] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:18:30.186 [2024-11-02 14:34:22.175282] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:18:30.186 [2024-11-02 14:34:22.175329] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:18:30.186 [2024-11-02 14:34:22.175341] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:18:30.186 [2024-11-02 14:34:22.176120] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:18:30.186 [2024-11-02 14:34:22.176141] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:18:30.186 [2024-11-02 14:34:22.176153] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:18:30.186 [2024-11-02 14:34:22.177128] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:30.186 [2024-11-02 14:34:22.177149] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:18:30.186 [2024-11-02 14:34:22.177163] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:18:30.186 [2024-11-02 14:34:22.178140] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:18:30.186 [2024-11-02 14:34:22.178161] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:30.186 [2024-11-02 14:34:22.179145] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:18:30.186 [2024-11-02 14:34:22.179165] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:18:30.186 [2024-11-02 
14:34:22.179175] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:18:30.186 [2024-11-02 14:34:22.179186] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:30.186 [2024-11-02 14:34:22.179306] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:18:30.186 [2024-11-02 14:34:22.179317] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:30.186 [2024-11-02 14:34:22.179325] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:18:30.186 [2024-11-02 14:34:22.180154] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:18:30.186 [2024-11-02 14:34:22.181159] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:18:30.186 [2024-11-02 14:34:22.182172] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:30.186 [2024-11-02 14:34:22.183168] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:30.186 [2024-11-02 14:34:22.183249] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:30.186 [2024-11-02 14:34:22.184187] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:18:30.186 [2024-11-02 14:34:22.184207] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:30.186 [2024-11-02 14:34:22.184216] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:18:30.186 [2024-11-02 14:34:22.184252] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:18:30.187 [2024-11-02 14:34:22.184275] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:18:30.187 [2024-11-02 14:34:22.184300] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:30.187 [2024-11-02 14:34:22.184317] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:30.187 [2024-11-02 14:34:22.184324] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:30.187 [2024-11-02 14:34:22.184343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:30.187 [2024-11-02 14:34:22.192275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:18:30.187 [2024-11-02 14:34:22.192311] 
nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:18:30.187 [2024-11-02 14:34:22.192321] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:18:30.187 [2024-11-02 14:34:22.192329] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:18:30.187 [2024-11-02 14:34:22.192340] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:30.187 [2024-11-02 14:34:22.192349] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:18:30.187 [2024-11-02 14:34:22.192356] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:18:30.187 [2024-11-02 14:34:22.192365] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:18:30.187 [2024-11-02 14:34:22.192377] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:18:30.187 [2024-11-02 14:34:22.192393] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:30.187 [2024-11-02 14:34:22.200270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:30.187 [2024-11-02 14:34:22.200295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:30.187 [2024-11-02 14:34:22.200309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:30.187 [2024-11-02 14:34:22.200321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:30.187 [2024-11-02 14:34:22.200338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:30.187 [2024-11-02 14:34:22.200347] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:18:30.187 [2024-11-02 14:34:22.200363] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:30.187 [2024-11-02 14:34:22.200378] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:30.187 [2024-11-02 14:34:22.208268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:30.187 [2024-11-02 14:34:22.208287] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:18:30.187 [2024-11-02 14:34:22.208296] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:30.187 [2024-11-02 14:34:22.208307] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:18:30.187 [2024-11-02 14:34:22.208332] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:18:30.187 [2024-11-02 14:34:22.208347] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:30.187 [2024-11-02 14:34:22.216272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:30.187 [2024-11-02 14:34:22.216357] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:18:30.187 [2024-11-02 14:34:22.216374] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:18:30.187 [2024-11-02 14:34:22.216386] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:30.187 [2024-11-02 14:34:22.216399] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:18:30.187 [2024-11-02 14:34:22.216405] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:30.187 [2024-11-02 14:34:22.216415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:30.187 [2024-11-02 14:34:22.224268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:30.187 [2024-11-02 14:34:22.224291] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:18:30.187 [2024-11-02 14:34:22.224323] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:18:30.187 [2024-11-02 14:34:22.224337] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:18:30.187 [2024-11-02 14:34:22.224350] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:30.187 [2024-11-02 14:34:22.224358] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:30.187 [2024-11-02 14:34:22.224364] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:30.187 [2024-11-02 14:34:22.224374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:30.187 [2024-11-02 14:34:22.232271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:18:30.187 [2024-11-02 14:34:22.232310] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:30.187 [2024-11-02 14:34:22.232327] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:30.187 
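Note on the DEBUG trace above: it records the controller bring-up state machine the SPDK NVMe driver walks for the vfio-user transport (connect adminq, read VS/CAP, toggle CC.EN, wait for CSTS.RDY, then the identify/configure steps). The following is a minimal sketch, assuming nothing beyond the "setting state to <name> (no timeout|timeout N ms)" wording visible in these log lines, that pulls the ordered list of states out of a saved copy of a console log like this one; the script itself and the log path argument are illustrative and not part of the SPDK test suite.

import re
import sys

# Matches the "setting state to <state> (no timeout)" /
# "(timeout N ms)" phrasing printed by nvme_ctrlr.c in the log above.
STATE_RE = re.compile(r"setting state to (.+?) \((?:no timeout|timeout \d+ ms)\)")

def init_states(log_path):
    """Return the ordered, de-duplicated list of controller init states."""
    seen = []
    with open(log_path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            for match in STATE_RE.finditer(line):
                state = match.group(1).strip()
                if not seen or seen[-1] != state:
                    seen.append(state)
    return seen

if __name__ == "__main__":
    # usage (hypothetical): python3 init_states.py console.log
    for idx, state in enumerate(init_states(sys.argv[1]), start=1):
        print(f"{idx:2d}. {state}")

Run against this build's console output, it would be expected to print the sequence from "connect adminq" through "ready", matching the transitions logged above.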
[2024-11-02 14:34:22.232340] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:30.187 [2024-11-02 14:34:22.232348] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:30.187 [2024-11-02 14:34:22.232354] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:30.187 [2024-11-02 14:34:22.232363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:30.446 [2024-11-02 14:34:22.240273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:18:30.446 [2024-11-02 14:34:22.240309] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:30.446 [2024-11-02 14:34:22.240322] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:18:30.446 [2024-11-02 14:34:22.240338] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:18:30.446 [2024-11-02 14:34:22.240350] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:18:30.446 [2024-11-02 14:34:22.240358] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:30.446 [2024-11-02 14:34:22.240367] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:18:30.446 [2024-11-02 14:34:22.240376] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:18:30.446 [2024-11-02 14:34:22.240390] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:18:30.446 [2024-11-02 14:34:22.240400] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:18:30.446 [2024-11-02 14:34:22.240424] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:30.446 [2024-11-02 14:34:22.248271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:30.446 [2024-11-02 14:34:22.248300] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:30.446 [2024-11-02 14:34:22.256266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:30.446 [2024-11-02 14:34:22.256292] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:30.446 [2024-11-02 14:34:22.264268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:18:30.446 [2024-11-02 14:34:22.264293] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: 
GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:30.446 [2024-11-02 14:34:22.272279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:30.446 [2024-11-02 14:34:22.272313] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:30.446 [2024-11-02 14:34:22.272324] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:30.446 [2024-11-02 14:34:22.272331] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:30.446 [2024-11-02 14:34:22.272337] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:30.446 [2024-11-02 14:34:22.272343] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:18:30.447 [2024-11-02 14:34:22.272353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:30.447 [2024-11-02 14:34:22.272364] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:18:30.447 [2024-11-02 14:34:22.272372] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:18:30.447 [2024-11-02 14:34:22.272378] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:30.447 [2024-11-02 14:34:22.272387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:30.447 [2024-11-02 14:34:22.272397] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:30.447 [2024-11-02 14:34:22.272405] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:30.447 [2024-11-02 14:34:22.272411] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:30.447 [2024-11-02 14:34:22.272419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:30.447 [2024-11-02 14:34:22.272430] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:30.447 [2024-11-02 14:34:22.272438] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:30.447 [2024-11-02 14:34:22.272444] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:30.447 [2024-11-02 14:34:22.272452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:30.447 [2024-11-02 14:34:22.280266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:30.447 [2024-11-02 14:34:22.280297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:18:30.447 [2024-11-02 14:34:22.280316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:18:30.447 [2024-11-02 14:34:22.280328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) 
qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:18:30.447 ===================================================== 00:18:30.447 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:30.447 ===================================================== 00:18:30.447 Controller Capabilities/Features 00:18:30.447 ================================ 00:18:30.447 Vendor ID: 4e58 00:18:30.447 Subsystem Vendor ID: 4e58 00:18:30.447 Serial Number: SPDK2 00:18:30.447 Model Number: SPDK bdev Controller 00:18:30.447 Firmware Version: 24.09.1 00:18:30.447 Recommended Arb Burst: 6 00:18:30.447 IEEE OUI Identifier: 8d 6b 50 00:18:30.447 Multi-path I/O 00:18:30.447 May have multiple subsystem ports: Yes 00:18:30.447 May have multiple controllers: Yes 00:18:30.447 Associated with SR-IOV VF: No 00:18:30.447 Max Data Transfer Size: 131072 00:18:30.447 Max Number of Namespaces: 32 00:18:30.447 Max Number of I/O Queues: 127 00:18:30.447 NVMe Specification Version (VS): 1.3 00:18:30.447 NVMe Specification Version (Identify): 1.3 00:18:30.447 Maximum Queue Entries: 256 00:18:30.447 Contiguous Queues Required: Yes 00:18:30.447 Arbitration Mechanisms Supported 00:18:30.447 Weighted Round Robin: Not Supported 00:18:30.447 Vendor Specific: Not Supported 00:18:30.447 Reset Timeout: 15000 ms 00:18:30.447 Doorbell Stride: 4 bytes 00:18:30.447 NVM Subsystem Reset: Not Supported 00:18:30.447 Command Sets Supported 00:18:30.447 NVM Command Set: Supported 00:18:30.447 Boot Partition: Not Supported 00:18:30.447 Memory Page Size Minimum: 4096 bytes 00:18:30.447 Memory Page Size Maximum: 4096 bytes 00:18:30.447 Persistent Memory Region: Not Supported 00:18:30.447 Optional Asynchronous Events Supported 00:18:30.447 Namespace Attribute Notices: Supported 00:18:30.447 Firmware Activation Notices: Not Supported 00:18:30.447 ANA Change Notices: Not Supported 00:18:30.447 PLE Aggregate Log Change Notices: Not Supported 00:18:30.447 LBA Status Info Alert Notices: Not Supported 00:18:30.447 EGE Aggregate Log Change Notices: Not Supported 00:18:30.447 Normal NVM Subsystem Shutdown event: Not Supported 00:18:30.447 Zone Descriptor Change Notices: Not Supported 00:18:30.447 Discovery Log Change Notices: Not Supported 00:18:30.447 Controller Attributes 00:18:30.447 128-bit Host Identifier: Supported 00:18:30.447 Non-Operational Permissive Mode: Not Supported 00:18:30.447 NVM Sets: Not Supported 00:18:30.447 Read Recovery Levels: Not Supported 00:18:30.447 Endurance Groups: Not Supported 00:18:30.447 Predictable Latency Mode: Not Supported 00:18:30.447 Traffic Based Keep ALive: Not Supported 00:18:30.447 Namespace Granularity: Not Supported 00:18:30.447 SQ Associations: Not Supported 00:18:30.447 UUID List: Not Supported 00:18:30.447 Multi-Domain Subsystem: Not Supported 00:18:30.447 Fixed Capacity Management: Not Supported 00:18:30.447 Variable Capacity Management: Not Supported 00:18:30.447 Delete Endurance Group: Not Supported 00:18:30.447 Delete NVM Set: Not Supported 00:18:30.447 Extended LBA Formats Supported: Not Supported 00:18:30.447 Flexible Data Placement Supported: Not Supported 00:18:30.447 00:18:30.447 Controller Memory Buffer Support 00:18:30.447 ================================ 00:18:30.447 Supported: No 00:18:30.447 00:18:30.447 Persistent Memory Region Support 00:18:30.447 ================================ 00:18:30.447 Supported: No 00:18:30.447 00:18:30.447 Admin Command Set Attributes 00:18:30.447 ============================ 00:18:30.447 Security Send/Receive: Not Supported 
00:18:30.447 Format NVM: Not Supported 00:18:30.447 Firmware Activate/Download: Not Supported 00:18:30.447 Namespace Management: Not Supported 00:18:30.447 Device Self-Test: Not Supported 00:18:30.447 Directives: Not Supported 00:18:30.447 NVMe-MI: Not Supported 00:18:30.447 Virtualization Management: Not Supported 00:18:30.447 Doorbell Buffer Config: Not Supported 00:18:30.447 Get LBA Status Capability: Not Supported 00:18:30.447 Command & Feature Lockdown Capability: Not Supported 00:18:30.447 Abort Command Limit: 4 00:18:30.447 Async Event Request Limit: 4 00:18:30.447 Number of Firmware Slots: N/A 00:18:30.447 Firmware Slot 1 Read-Only: N/A 00:18:30.447 Firmware Activation Without Reset: N/A 00:18:30.447 Multiple Update Detection Support: N/A 00:18:30.447 Firmware Update Granularity: No Information Provided 00:18:30.447 Per-Namespace SMART Log: No 00:18:30.447 Asymmetric Namespace Access Log Page: Not Supported 00:18:30.447 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:18:30.447 Command Effects Log Page: Supported 00:18:30.447 Get Log Page Extended Data: Supported 00:18:30.447 Telemetry Log Pages: Not Supported 00:18:30.447 Persistent Event Log Pages: Not Supported 00:18:30.447 Supported Log Pages Log Page: May Support 00:18:30.447 Commands Supported & Effects Log Page: Not Supported 00:18:30.447 Feature Identifiers & Effects Log Page:May Support 00:18:30.447 NVMe-MI Commands & Effects Log Page: May Support 00:18:30.447 Data Area 4 for Telemetry Log: Not Supported 00:18:30.447 Error Log Page Entries Supported: 128 00:18:30.447 Keep Alive: Supported 00:18:30.447 Keep Alive Granularity: 10000 ms 00:18:30.447 00:18:30.447 NVM Command Set Attributes 00:18:30.447 ========================== 00:18:30.447 Submission Queue Entry Size 00:18:30.447 Max: 64 00:18:30.447 Min: 64 00:18:30.447 Completion Queue Entry Size 00:18:30.447 Max: 16 00:18:30.447 Min: 16 00:18:30.447 Number of Namespaces: 32 00:18:30.447 Compare Command: Supported 00:18:30.447 Write Uncorrectable Command: Not Supported 00:18:30.447 Dataset Management Command: Supported 00:18:30.447 Write Zeroes Command: Supported 00:18:30.447 Set Features Save Field: Not Supported 00:18:30.447 Reservations: Not Supported 00:18:30.447 Timestamp: Not Supported 00:18:30.447 Copy: Supported 00:18:30.447 Volatile Write Cache: Present 00:18:30.447 Atomic Write Unit (Normal): 1 00:18:30.447 Atomic Write Unit (PFail): 1 00:18:30.447 Atomic Compare & Write Unit: 1 00:18:30.447 Fused Compare & Write: Supported 00:18:30.447 Scatter-Gather List 00:18:30.447 SGL Command Set: Supported (Dword aligned) 00:18:30.447 SGL Keyed: Not Supported 00:18:30.447 SGL Bit Bucket Descriptor: Not Supported 00:18:30.447 SGL Metadata Pointer: Not Supported 00:18:30.447 Oversized SGL: Not Supported 00:18:30.447 SGL Metadata Address: Not Supported 00:18:30.447 SGL Offset: Not Supported 00:18:30.447 Transport SGL Data Block: Not Supported 00:18:30.447 Replay Protected Memory Block: Not Supported 00:18:30.447 00:18:30.447 Firmware Slot Information 00:18:30.447 ========================= 00:18:30.447 Active slot: 1 00:18:30.447 Slot 1 Firmware Revision: 24.09.1 00:18:30.447 00:18:30.447 00:18:30.447 Commands Supported and Effects 00:18:30.447 ============================== 00:18:30.447 Admin Commands 00:18:30.447 -------------- 00:18:30.447 Get Log Page (02h): Supported 00:18:30.447 Identify (06h): Supported 00:18:30.447 Abort (08h): Supported 00:18:30.447 Set Features (09h): Supported 00:18:30.447 Get Features (0Ah): Supported 00:18:30.447 Asynchronous Event Request (0Ch): 
Supported 00:18:30.447 Keep Alive (18h): Supported 00:18:30.447 I/O Commands 00:18:30.447 ------------ 00:18:30.447 Flush (00h): Supported LBA-Change 00:18:30.447 Write (01h): Supported LBA-Change 00:18:30.448 Read (02h): Supported 00:18:30.448 Compare (05h): Supported 00:18:30.448 Write Zeroes (08h): Supported LBA-Change 00:18:30.448 Dataset Management (09h): Supported LBA-Change 00:18:30.448 Copy (19h): Supported LBA-Change 00:18:30.448 00:18:30.448 Error Log 00:18:30.448 ========= 00:18:30.448 00:18:30.448 Arbitration 00:18:30.448 =========== 00:18:30.448 Arbitration Burst: 1 00:18:30.448 00:18:30.448 Power Management 00:18:30.448 ================ 00:18:30.448 Number of Power States: 1 00:18:30.448 Current Power State: Power State #0 00:18:30.448 Power State #0: 00:18:30.448 Max Power: 0.00 W 00:18:30.448 Non-Operational State: Operational 00:18:30.448 Entry Latency: Not Reported 00:18:30.448 Exit Latency: Not Reported 00:18:30.448 Relative Read Throughput: 0 00:18:30.448 Relative Read Latency: 0 00:18:30.448 Relative Write Throughput: 0 00:18:30.448 Relative Write Latency: 0 00:18:30.448 Idle Power: Not Reported 00:18:30.448 Active Power: Not Reported 00:18:30.448 Non-Operational Permissive Mode: Not Supported 00:18:30.448 00:18:30.448 Health Information 00:18:30.448 ================== 00:18:30.448 Critical Warnings: 00:18:30.448 Available Spare Space: OK 00:18:30.448 Temperature: OK 00:18:30.448 Device Reliability: OK 00:18:30.448 Read Only: No 00:18:30.448 Volatile Memory Backup: OK 00:18:30.448 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:30.448 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:30.448 Available Spare: 0% 00:18:30.448 Availabl[2024-11-02 14:34:22.280441] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:30.448 [2024-11-02 14:34:22.288269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:18:30.448 [2024-11-02 14:34:22.288318] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:18:30.448 [2024-11-02 14:34:22.288336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.448 [2024-11-02 14:34:22.288347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.448 [2024-11-02 14:34:22.288356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.448 [2024-11-02 14:34:22.288366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.448 [2024-11-02 14:34:22.288451] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:30.448 [2024-11-02 14:34:22.288472] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:18:30.448 [2024-11-02 14:34:22.289452] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:30.448 [2024-11-02 14:34:22.289522] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:18:30.448 [2024-11-02 14:34:22.289537] 
nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:18:30.448 [2024-11-02 14:34:22.290466] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:18:30.448 [2024-11-02 14:34:22.290490] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:18:30.448 [2024-11-02 14:34:22.290565] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:18:30.448 [2024-11-02 14:34:22.291765] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:30.448 e Spare Threshold: 0% 00:18:30.448 Life Percentage Used: 0% 00:18:30.448 Data Units Read: 0 00:18:30.448 Data Units Written: 0 00:18:30.448 Host Read Commands: 0 00:18:30.448 Host Write Commands: 0 00:18:30.448 Controller Busy Time: 0 minutes 00:18:30.448 Power Cycles: 0 00:18:30.448 Power On Hours: 0 hours 00:18:30.448 Unsafe Shutdowns: 0 00:18:30.448 Unrecoverable Media Errors: 0 00:18:30.448 Lifetime Error Log Entries: 0 00:18:30.448 Warning Temperature Time: 0 minutes 00:18:30.448 Critical Temperature Time: 0 minutes 00:18:30.448 00:18:30.448 Number of Queues 00:18:30.448 ================ 00:18:30.448 Number of I/O Submission Queues: 127 00:18:30.448 Number of I/O Completion Queues: 127 00:18:30.448 00:18:30.448 Active Namespaces 00:18:30.448 ================= 00:18:30.448 Namespace ID:1 00:18:30.448 Error Recovery Timeout: Unlimited 00:18:30.448 Command Set Identifier: NVM (00h) 00:18:30.448 Deallocate: Supported 00:18:30.448 Deallocated/Unwritten Error: Not Supported 00:18:30.448 Deallocated Read Value: Unknown 00:18:30.448 Deallocate in Write Zeroes: Not Supported 00:18:30.448 Deallocated Guard Field: 0xFFFF 00:18:30.448 Flush: Supported 00:18:30.448 Reservation: Supported 00:18:30.448 Namespace Sharing Capabilities: Multiple Controllers 00:18:30.448 Size (in LBAs): 131072 (0GiB) 00:18:30.448 Capacity (in LBAs): 131072 (0GiB) 00:18:30.448 Utilization (in LBAs): 131072 (0GiB) 00:18:30.448 NGUID: AF2BEA525AC14D9C8009018CBE5FCBA1 00:18:30.448 UUID: af2bea52-5ac1-4d9c-8009-018cbe5fcba1 00:18:30.448 Thin Provisioning: Not Supported 00:18:30.448 Per-NS Atomic Units: Yes 00:18:30.448 Atomic Boundary Size (Normal): 0 00:18:30.448 Atomic Boundary Size (PFail): 0 00:18:30.448 Atomic Boundary Offset: 0 00:18:30.448 Maximum Single Source Range Length: 65535 00:18:30.448 Maximum Copy Length: 65535 00:18:30.448 Maximum Source Range Count: 1 00:18:30.448 NGUID/EUI64 Never Reused: No 00:18:30.448 Namespace Write Protected: No 00:18:30.448 Number of LBA Formats: 1 00:18:30.448 Current LBA Format: LBA Format #00 00:18:30.448 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:30.448 00:18:30.448 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:18:30.706 [2024-11-02 14:34:22.521088] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:35.983 Initializing NVMe Controllers 00:18:35.983 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: 
nqn.2019-07.io.spdk:cnode2 00:18:35.983 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:18:35.983 Initialization complete. Launching workers. 00:18:35.983 ======================================================== 00:18:35.983 Latency(us) 00:18:35.983 Device Information : IOPS MiB/s Average min max 00:18:35.983 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 33656.80 131.47 3802.39 1188.45 8640.64 00:18:35.983 ======================================================== 00:18:35.983 Total : 33656.80 131.47 3802.39 1188.45 8640.64 00:18:35.983 00:18:35.983 [2024-11-02 14:34:27.623661] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:35.983 14:34:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:35.983 [2024-11-02 14:34:27.870301] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:41.263 Initializing NVMe Controllers 00:18:41.263 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:41.263 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:18:41.263 Initialization complete. Launching workers. 00:18:41.263 ======================================================== 00:18:41.263 Latency(us) 00:18:41.263 Device Information : IOPS MiB/s Average min max 00:18:41.263 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 31221.30 121.96 4098.87 1201.44 9302.55 00:18:41.263 ======================================================== 00:18:41.263 Total : 31221.30 121.96 4098.87 1201.44 9302.55 00:18:41.263 00:18:41.263 [2024-11-02 14:34:32.894714] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:41.263 14:34:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:41.263 [2024-11-02 14:34:33.106147] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:46.540 [2024-11-02 14:34:38.243416] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:46.540 Initializing NVMe Controllers 00:18:46.540 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:46.540 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:46.540 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:18:46.540 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:18:46.540 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:18:46.540 Initialization complete. Launching workers. 
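Note on the two spdk_nvme_perf summaries above (-w read and -w write, both -o 4096): the MiB/s column is just IOPS scaled by the 4 KiB I/O size, i.e. IOPS * 4096 / 2**20. A small illustrative check, with the IOPS and MiB/s figures copied from those tables:

# Illustrative check only: for the 4 KiB (-o 4096) runs above,
# MiB/s should equal IOPS * io_size / 2**20.
IO_SIZE = 4096  # bytes, from the -o 4096 argument in the perf command lines

def mib_per_s(iops, io_size=IO_SIZE):
    return iops * io_size / 2**20

for label, iops, reported in [
    ("read",  33656.80, 131.47),   # values copied from the -w read table
    ("write", 31221.30, 121.96),   # values copied from the -w write table
]:
    print(f"{label:5s}: computed {mib_per_s(iops):7.2f} MiB/s, reported {reported}")

Both computed values come out to 131.47 and 121.96 MiB/s, agreeing with the reported columns.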
00:18:46.540 Starting thread on core 2 00:18:46.540 Starting thread on core 3 00:18:46.540 Starting thread on core 1 00:18:46.540 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:18:46.540 [2024-11-02 14:34:38.534753] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:49.832 [2024-11-02 14:34:41.598839] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:49.832 Initializing NVMe Controllers 00:18:49.832 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:49.832 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:49.832 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:18:49.832 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:18:49.832 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:18:49.832 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:18:49.832 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:49.832 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:49.832 Initialization complete. Launching workers. 00:18:49.832 Starting thread on core 1 with urgent priority queue 00:18:49.832 Starting thread on core 2 with urgent priority queue 00:18:49.832 Starting thread on core 3 with urgent priority queue 00:18:49.832 Starting thread on core 0 with urgent priority queue 00:18:49.832 SPDK bdev Controller (SPDK2 ) core 0: 5433.67 IO/s 18.40 secs/100000 ios 00:18:49.832 SPDK bdev Controller (SPDK2 ) core 1: 5302.33 IO/s 18.86 secs/100000 ios 00:18:49.832 SPDK bdev Controller (SPDK2 ) core 2: 5672.00 IO/s 17.63 secs/100000 ios 00:18:49.832 SPDK bdev Controller (SPDK2 ) core 3: 5936.33 IO/s 16.85 secs/100000 ios 00:18:49.832 ======================================================== 00:18:49.832 00:18:49.832 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:50.091 [2024-11-02 14:34:41.898791] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:50.091 Initializing NVMe Controllers 00:18:50.091 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:50.091 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:50.091 Namespace ID: 1 size: 0GB 00:18:50.091 Initialization complete. 00:18:50.091 INFO: using host memory buffer for IO 00:18:50.091 Hello world! 
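The perf, reconnect, arbitration and hello_world runs above all reach the target the same way: through an SPDK transport ID string rather than a PCI address. A minimal sketch of that invocation, reusing the exact transport string and flags from the read-workload perf run above (the socket path and subsystem NQN are the ones created earlier in this test; the flag comments are a summary added here, not taken from the log, and -s/-g are simply passed through as the script does):

    # Attach spdk_nvme_perf to the vfio-user controller exported by nvmf_tgt:
    #   -r        transport ID (trtype VFIOUSER, traddr = socket directory, subnqn = subsystem NQN)
    #   -q 128    queue depth            -o 4096  I/O size in bytes
    #   -w read   workload type          -t 5     run time in seconds
    #   -c 0x2    core mask (core 1)     -s 256 -g  memory options as used by nvmf_vfio_user.sh
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
        -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2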
00:18:50.091 [2024-11-02 14:34:41.912027] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:50.091 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:50.352 [2024-11-02 14:34:42.210286] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:51.289 Initializing NVMe Controllers 00:18:51.289 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:51.289 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:51.289 Initialization complete. Launching workers. 00:18:51.289 submit (in ns) avg, min, max = 7587.9, 3498.9, 4017244.4 00:18:51.289 complete (in ns) avg, min, max = 26588.0, 2043.3, 8004875.6 00:18:51.289 00:18:51.289 Submit histogram 00:18:51.289 ================ 00:18:51.289 Range in us Cumulative Count 00:18:51.289 3.484 - 3.508: 0.1834% ( 24) 00:18:51.289 3.508 - 3.532: 1.3601% ( 154) 00:18:51.289 3.532 - 3.556: 3.5073% ( 281) 00:18:51.289 3.556 - 3.579: 8.7491% ( 686) 00:18:51.289 3.579 - 3.603: 17.1774% ( 1103) 00:18:51.289 3.603 - 3.627: 27.5923% ( 1363) 00:18:51.289 3.627 - 3.650: 37.4341% ( 1288) 00:18:51.289 3.650 - 3.674: 44.8537% ( 971) 00:18:51.289 3.674 - 3.698: 50.7144% ( 767) 00:18:51.289 3.698 - 3.721: 56.1702% ( 714) 00:18:51.289 3.721 - 3.745: 60.4264% ( 557) 00:18:51.289 3.745 - 3.769: 63.9337% ( 459) 00:18:51.289 3.769 - 3.793: 66.8144% ( 377) 00:18:51.289 3.793 - 3.816: 69.9014% ( 404) 00:18:51.289 3.816 - 3.840: 73.6227% ( 487) 00:18:51.289 3.840 - 3.864: 78.1080% ( 587) 00:18:51.289 3.864 - 3.887: 81.8751% ( 493) 00:18:51.289 3.887 - 3.911: 84.5954% ( 356) 00:18:51.289 3.911 - 3.935: 86.9260% ( 305) 00:18:51.289 3.935 - 3.959: 88.6299% ( 223) 00:18:51.289 3.959 - 3.982: 89.9748% ( 176) 00:18:51.289 3.982 - 4.006: 90.9911% ( 133) 00:18:51.289 4.006 - 4.030: 91.7934% ( 105) 00:18:51.289 4.030 - 4.053: 92.6798% ( 116) 00:18:51.289 4.053 - 4.077: 93.3445% ( 87) 00:18:51.289 4.077 - 4.101: 94.0093% ( 87) 00:18:51.289 4.101 - 4.124: 94.6817% ( 88) 00:18:51.289 4.124 - 4.148: 95.1479% ( 61) 00:18:51.289 4.148 - 4.172: 95.4841% ( 44) 00:18:51.289 4.172 - 4.196: 95.7592% ( 36) 00:18:51.289 4.196 - 4.219: 96.0495% ( 38) 00:18:51.289 4.219 - 4.243: 96.2482% ( 26) 00:18:51.289 4.243 - 4.267: 96.3934% ( 19) 00:18:51.289 4.267 - 4.290: 96.4851% ( 12) 00:18:51.289 4.290 - 4.314: 96.5997% ( 15) 00:18:51.289 4.314 - 4.338: 96.6684% ( 9) 00:18:51.289 4.338 - 4.361: 96.7601% ( 12) 00:18:51.289 4.361 - 4.385: 96.8518% ( 12) 00:18:51.289 4.385 - 4.409: 96.9282% ( 10) 00:18:51.289 4.409 - 4.433: 97.0047% ( 10) 00:18:51.289 4.433 - 4.456: 97.0734% ( 9) 00:18:51.289 4.456 - 4.480: 97.0964% ( 3) 00:18:51.289 4.480 - 4.504: 97.1575% ( 8) 00:18:51.289 4.504 - 4.527: 97.2110% ( 7) 00:18:51.289 4.527 - 4.551: 97.2415% ( 4) 00:18:51.289 4.551 - 4.575: 97.2568% ( 2) 00:18:51.289 4.575 - 4.599: 97.2797% ( 3) 00:18:51.289 4.599 - 4.622: 97.2874% ( 1) 00:18:51.289 4.622 - 4.646: 97.2950% ( 1) 00:18:51.289 4.646 - 4.670: 97.3179% ( 3) 00:18:51.289 4.670 - 4.693: 97.3256% ( 1) 00:18:51.289 4.693 - 4.717: 97.3485% ( 3) 00:18:51.289 4.717 - 4.741: 97.3638% ( 2) 00:18:51.289 4.741 - 4.764: 97.3791% ( 2) 00:18:51.289 4.764 - 4.788: 97.4173% ( 5) 00:18:51.289 4.788 - 4.812: 97.4478% ( 4) 00:18:51.289 4.812 - 4.836: 97.5013% ( 7) 00:18:51.289 4.836 
- 4.859: 97.5548% ( 7) 00:18:51.289 4.859 - 4.883: 97.5854% ( 4) 00:18:51.289 4.883 - 4.907: 97.6312% ( 6) 00:18:51.289 4.907 - 4.930: 97.6847% ( 7) 00:18:51.289 4.930 - 4.954: 97.7611% ( 10) 00:18:51.289 4.954 - 4.978: 97.8299% ( 9) 00:18:51.289 4.978 - 5.001: 97.8834% ( 7) 00:18:51.289 5.001 - 5.025: 97.9140% ( 4) 00:18:51.289 5.025 - 5.049: 97.9674% ( 7) 00:18:51.289 5.049 - 5.073: 98.0362% ( 9) 00:18:51.289 5.073 - 5.096: 98.0668% ( 4) 00:18:51.289 5.096 - 5.120: 98.0897% ( 3) 00:18:51.289 5.120 - 5.144: 98.1126% ( 3) 00:18:51.289 5.144 - 5.167: 98.1279% ( 2) 00:18:51.289 5.167 - 5.191: 98.1585% ( 4) 00:18:51.289 5.191 - 5.215: 98.1890% ( 4) 00:18:51.289 5.215 - 5.239: 98.2196% ( 4) 00:18:51.289 5.239 - 5.262: 98.2349% ( 2) 00:18:51.289 5.262 - 5.286: 98.2502% ( 2) 00:18:51.289 5.286 - 5.310: 98.2578% ( 1) 00:18:51.289 5.310 - 5.333: 98.2655% ( 1) 00:18:51.289 5.333 - 5.357: 98.2960% ( 4) 00:18:51.289 5.357 - 5.381: 98.3037% ( 1) 00:18:51.289 5.381 - 5.404: 98.3266% ( 3) 00:18:51.289 5.404 - 5.428: 98.3342% ( 1) 00:18:51.289 5.428 - 5.452: 98.3419% ( 1) 00:18:51.290 5.476 - 5.499: 98.3495% ( 1) 00:18:51.290 5.499 - 5.523: 98.3571% ( 1) 00:18:51.290 5.547 - 5.570: 98.3648% ( 1) 00:18:51.290 5.594 - 5.618: 98.3801% ( 2) 00:18:51.290 5.618 - 5.641: 98.3877% ( 1) 00:18:51.290 5.641 - 5.665: 98.4030% ( 2) 00:18:51.290 5.665 - 5.689: 98.4106% ( 1) 00:18:51.290 5.689 - 5.713: 98.4183% ( 1) 00:18:51.290 5.760 - 5.784: 98.4336% ( 2) 00:18:51.290 5.784 - 5.807: 98.4412% ( 1) 00:18:51.290 5.807 - 5.831: 98.4488% ( 1) 00:18:51.290 5.831 - 5.855: 98.4565% ( 1) 00:18:51.290 5.926 - 5.950: 98.4641% ( 1) 00:18:51.290 5.950 - 5.973: 98.4718% ( 1) 00:18:51.290 6.044 - 6.068: 98.4794% ( 1) 00:18:51.290 6.116 - 6.163: 98.4870% ( 1) 00:18:51.290 6.163 - 6.210: 98.5023% ( 2) 00:18:51.290 6.258 - 6.305: 98.5100% ( 1) 00:18:51.290 6.400 - 6.447: 98.5176% ( 1) 00:18:51.290 6.495 - 6.542: 98.5253% ( 1) 00:18:51.290 6.542 - 6.590: 98.5405% ( 2) 00:18:51.290 7.016 - 7.064: 98.5482% ( 1) 00:18:51.290 7.348 - 7.396: 98.5558% ( 1) 00:18:51.290 7.633 - 7.680: 98.5635% ( 1) 00:18:51.290 7.680 - 7.727: 98.5711% ( 1) 00:18:51.290 7.917 - 7.964: 98.5864% ( 2) 00:18:51.290 8.107 - 8.154: 98.5940% ( 1) 00:18:51.290 8.249 - 8.296: 98.6246% ( 4) 00:18:51.290 8.344 - 8.391: 98.6322% ( 1) 00:18:51.290 8.391 - 8.439: 98.6399% ( 1) 00:18:51.290 8.439 - 8.486: 98.6552% ( 2) 00:18:51.290 8.533 - 8.581: 98.6704% ( 2) 00:18:51.290 8.770 - 8.818: 98.7010% ( 4) 00:18:51.290 8.865 - 8.913: 98.7086% ( 1) 00:18:51.290 8.913 - 8.960: 98.7163% ( 1) 00:18:51.290 8.960 - 9.007: 98.7239% ( 1) 00:18:51.290 9.055 - 9.102: 98.7316% ( 1) 00:18:51.290 9.197 - 9.244: 98.7392% ( 1) 00:18:51.290 9.292 - 9.339: 98.7468% ( 1) 00:18:51.290 9.339 - 9.387: 98.7545% ( 1) 00:18:51.290 9.434 - 9.481: 98.7621% ( 1) 00:18:51.290 9.481 - 9.529: 98.7698% ( 1) 00:18:51.290 9.529 - 9.576: 98.7927% ( 3) 00:18:51.290 9.576 - 9.624: 98.8003% ( 1) 00:18:51.290 9.624 - 9.671: 98.8233% ( 3) 00:18:51.290 9.671 - 9.719: 98.8309% ( 1) 00:18:51.290 9.719 - 9.766: 98.8385% ( 1) 00:18:51.290 9.908 - 9.956: 98.8462% ( 1) 00:18:51.290 10.240 - 10.287: 98.8615% ( 2) 00:18:51.290 10.335 - 10.382: 98.8691% ( 1) 00:18:51.290 10.714 - 10.761: 98.8767% ( 1) 00:18:51.290 10.856 - 10.904: 98.8844% ( 1) 00:18:51.290 11.378 - 11.425: 98.8920% ( 1) 00:18:51.290 11.425 - 11.473: 98.8997% ( 1) 00:18:51.290 11.899 - 11.947: 98.9073% ( 1) 00:18:51.290 11.947 - 11.994: 98.9226% ( 2) 00:18:51.290 12.516 - 12.610: 98.9302% ( 1) 00:18:51.290 12.705 - 12.800: 98.9379% ( 1) 00:18:51.290 12.990 - 
13.084: 98.9455% ( 1) 00:18:51.290 13.369 - 13.464: 98.9532% ( 1) 00:18:51.290 13.559 - 13.653: 98.9608% ( 1) 00:18:51.290 13.748 - 13.843: 98.9684% ( 1) 00:18:51.290 13.938 - 14.033: 98.9761% ( 1) 00:18:51.290 14.033 - 14.127: 98.9837% ( 1) 00:18:51.290 14.696 - 14.791: 98.9990% ( 2) 00:18:51.290 16.972 - 17.067: 99.0066% ( 1) 00:18:51.290 17.067 - 17.161: 99.0296% ( 3) 00:18:51.290 17.256 - 17.351: 99.0525% ( 3) 00:18:51.290 17.351 - 17.446: 99.0754% ( 3) 00:18:51.290 17.446 - 17.541: 99.0831% ( 1) 00:18:51.290 17.541 - 17.636: 99.1136% ( 4) 00:18:51.290 17.636 - 17.730: 99.1595% ( 6) 00:18:51.290 17.730 - 17.825: 99.2130% ( 7) 00:18:51.290 17.825 - 17.920: 99.2359% ( 3) 00:18:51.290 17.920 - 18.015: 99.2741% ( 5) 00:18:51.290 18.015 - 18.110: 99.3276% ( 7) 00:18:51.290 18.110 - 18.204: 99.3887% ( 8) 00:18:51.290 18.204 - 18.299: 99.4422% ( 7) 00:18:51.290 18.299 - 18.394: 99.5033% ( 8) 00:18:51.290 18.394 - 18.489: 99.5492% ( 6) 00:18:51.290 18.489 - 18.584: 99.6409% ( 12) 00:18:51.290 18.584 - 18.679: 99.6944% ( 7) 00:18:51.290 18.679 - 18.773: 99.7173% ( 3) 00:18:51.290 18.773 - 18.868: 99.7708% ( 7) 00:18:51.290 18.868 - 18.963: 99.8013% ( 4) 00:18:51.290 18.963 - 19.058: 99.8090% ( 1) 00:18:51.290 19.058 - 19.153: 99.8166% ( 1) 00:18:51.290 19.153 - 19.247: 99.8319% ( 2) 00:18:51.290 19.247 - 19.342: 99.8548% ( 3) 00:18:51.290 19.342 - 19.437: 99.8625% ( 1) 00:18:51.290 19.437 - 19.532: 99.8701% ( 1) 00:18:51.290 19.532 - 19.627: 99.8777% ( 1) 00:18:51.290 19.816 - 19.911: 99.8854% ( 1) 00:18:51.290 21.239 - 21.333: 99.8930% ( 1) 00:18:51.290 28.824 - 29.013: 99.9007% ( 1) 00:18:51.290 36.978 - 37.167: 99.9083% ( 1) 00:18:51.290 3980.705 - 4004.978: 99.9694% ( 8) 00:18:51.290 4004.978 - 4029.250: 100.0000% ( 4) 00:18:51.290 00:18:51.290 Complete histogram 00:18:51.290 ================== 00:18:51.290 Range in us Cumulative Count 00:18:51.290 2.039 - 2.050: 2.8196% ( 369) 00:18:51.290 2.050 - 2.062: 35.3175% ( 4253) 00:18:51.290 2.062 - 2.074: 49.2779% ( 1827) 00:18:51.290 2.074 - 2.086: 51.9294% ( 347) 00:18:51.290 2.086 - 2.098: 58.7682% ( 895) 00:18:51.290 2.098 - 2.110: 61.8935% ( 409) 00:18:51.290 2.110 - 2.121: 66.8984% ( 655) 00:18:51.290 2.121 - 2.133: 76.0296% ( 1195) 00:18:51.290 2.133 - 2.145: 78.1768% ( 281) 00:18:51.290 2.145 - 2.157: 79.8120% ( 214) 00:18:51.290 2.157 - 2.169: 82.0127% ( 288) 00:18:51.290 2.169 - 2.181: 82.9908% ( 128) 00:18:51.290 2.181 - 2.193: 84.8552% ( 244) 00:18:51.290 2.193 - 2.204: 88.1103% ( 426) 00:18:51.290 2.204 - 2.216: 90.4638% ( 308) 00:18:51.290 2.216 - 2.228: 92.1067% ( 215) 00:18:51.290 2.228 - 2.240: 93.0083% ( 118) 00:18:51.290 2.240 - 2.252: 93.4439% ( 57) 00:18:51.290 2.252 - 2.264: 93.7877% ( 45) 00:18:51.295 2.264 - 2.276: 94.0934% ( 40) 00:18:51.295 2.276 - 2.287: 94.7964% ( 92) 00:18:51.295 2.287 - 2.299: 95.1402% ( 45) 00:18:51.295 2.299 - 2.311: 95.2166% ( 10) 00:18:51.295 2.311 - 2.323: 95.2701% ( 7) 00:18:51.295 2.323 - 2.335: 95.3160% ( 6) 00:18:51.295 2.335 - 2.347: 95.3924% ( 10) 00:18:51.295 2.347 - 2.359: 95.4841% ( 12) 00:18:51.295 2.359 - 2.370: 95.7439% ( 34) 00:18:51.295 2.370 - 2.382: 95.9120% ( 22) 00:18:51.295 2.382 - 2.394: 96.0877% ( 23) 00:18:51.295 2.394 - 2.406: 96.2482% ( 21) 00:18:51.295 2.406 - 2.418: 96.3934% ( 19) 00:18:51.295 2.418 - 2.430: 96.6302% ( 31) 00:18:51.295 2.430 - 2.441: 96.8442% ( 28) 00:18:51.295 2.441 - 2.453: 97.0887% ( 32) 00:18:51.295 2.453 - 2.465: 97.2568% ( 22) 00:18:51.295 2.465 - 2.477: 97.3714% ( 15) 00:18:51.295 2.477 - 2.489: 97.5854% ( 28) 00:18:51.295 2.489 - 2.501: 97.7000% 
( 15) 00:18:51.295 2.501 - 2.513: 97.7993% ( 13) 00:18:51.295 2.513 - 2.524: 97.8834% ( 11) 00:18:51.295 2.524 - 2.536: 97.9751% ( 12) 00:18:51.295 2.536 - 2.548: 98.0744% ( 13) 00:18:51.295 2.548 - 2.560: 98.1279% ( 7) 00:18:51.295 2.560 - 2.572: 98.1508% ( 3) 00:18:51.295 2.572 - 2.584: 98.1890% ( 5) 00:18:51.295 2.584 - 2.596: 98.2043% ( 2) 00:18:51.295 2.596 - 2.607: 98.2196% ( 2) 00:18:51.295 2.607 - 2.619: 98.2272% ( 1) 00:18:51.295 2.619 - 2.631: 98.2349% ( 1) 00:18:51.295 2.631 - 2.643: 98.2425% ( 1) 00:18:51.295 2.643 - 2.655: 98.2502% ( 1) 00:18:51.295 2.679 - 2.690: 98.2578% ( 1) 00:18:51.295 2.714 - 2.726: 98.2655% ( 1) 00:18:51.295 2.726 - 2.738: 98.2731% ( 1) 00:18:51.295 2.738 - 2.750: 98.2807% ( 1) 00:18:51.295 2.761 - 2.773: 98.2960% ( 2) 00:18:51.295 2.785 - 2.797: 98.3037% ( 1) 00:18:51.295 2.809 - 2.821: 98.3113% ( 1) 00:18:51.295 2.833 - 2.844: 98.3266% ( 2) 00:18:51.295 2.880 - 2.892: 98.3342% ( 1) 00:18:51.295 2.892 - 2.904: 98.3419% ( 1) 00:18:51.295 2.904 - 2.916: 98.3495% ( 1) 00:18:51.295 2.927 - 2.939: 98.3571% ( 1) 00:18:51.295 2.951 - 2.963: 98.3648% ( 1) 00:18:51.295 2.975 - 2.987: 98.3801% ( 2) 00:18:51.295 2.999 - 3.010: 98.3877% ( 1) 00:18:51.295 3.010 - 3.022: 98.3954% ( 1) 00:18:51.295 3.022 - 3.034: 98.4106% ( 2) 00:18:51.295 3.034 - 3.058: 98.4259% ( 2) 00:18:51.295 3.058 - 3.081: 9[2024-11-02 14:34:43.311975] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:51.556 8.4336% ( 1) 00:18:51.556 3.105 - 3.129: 98.4412% ( 1) 00:18:51.556 3.129 - 3.153: 98.4488% ( 1) 00:18:51.556 3.153 - 3.176: 98.4565% ( 1) 00:18:51.556 3.176 - 3.200: 98.4641% ( 1) 00:18:51.556 3.200 - 3.224: 98.4718% ( 1) 00:18:51.556 3.224 - 3.247: 98.4794% ( 1) 00:18:51.556 3.247 - 3.271: 98.4870% ( 1) 00:18:51.556 3.295 - 3.319: 98.5176% ( 4) 00:18:51.556 3.342 - 3.366: 98.5253% ( 1) 00:18:51.556 3.413 - 3.437: 98.5329% ( 1) 00:18:51.556 3.437 - 3.461: 98.5405% ( 1) 00:18:51.556 3.461 - 3.484: 98.5558% ( 2) 00:18:51.556 3.484 - 3.508: 98.5711% ( 2) 00:18:51.556 3.532 - 3.556: 98.6093% ( 5) 00:18:51.556 3.556 - 3.579: 98.6246% ( 2) 00:18:51.556 3.579 - 3.603: 98.6322% ( 1) 00:18:51.556 3.603 - 3.627: 98.6475% ( 2) 00:18:51.556 3.698 - 3.721: 98.6552% ( 1) 00:18:51.556 3.793 - 3.816: 98.6704% ( 2) 00:18:51.556 3.816 - 3.840: 98.6781% ( 1) 00:18:51.556 3.840 - 3.864: 98.7010% ( 3) 00:18:51.556 3.864 - 3.887: 98.7163% ( 2) 00:18:51.556 3.887 - 3.911: 98.7239% ( 1) 00:18:51.556 3.911 - 3.935: 98.7468% ( 3) 00:18:51.556 3.935 - 3.959: 98.7545% ( 1) 00:18:51.556 3.982 - 4.006: 98.7621% ( 1) 00:18:51.556 4.172 - 4.196: 98.7698% ( 1) 00:18:51.556 4.338 - 4.361: 98.7774% ( 1) 00:18:51.556 4.551 - 4.575: 98.7851% ( 1) 00:18:51.556 5.310 - 5.333: 98.7927% ( 1) 00:18:51.556 5.523 - 5.547: 98.8003% ( 1) 00:18:51.556 5.902 - 5.926: 98.8080% ( 1) 00:18:51.556 5.926 - 5.950: 98.8156% ( 1) 00:18:51.556 6.305 - 6.353: 98.8233% ( 1) 00:18:51.556 6.495 - 6.542: 98.8309% ( 1) 00:18:51.556 7.159 - 7.206: 98.8385% ( 1) 00:18:51.556 7.490 - 7.538: 98.8462% ( 1) 00:18:51.556 7.727 - 7.775: 98.8538% ( 1) 00:18:51.556 7.822 - 7.870: 98.8615% ( 1) 00:18:51.556 7.917 - 7.964: 98.8691% ( 1) 00:18:51.556 7.964 - 8.012: 98.8767% ( 1) 00:18:51.556 8.249 - 8.296: 98.8844% ( 1) 00:18:51.556 8.628 - 8.676: 98.8920% ( 1) 00:18:51.556 10.667 - 10.714: 98.8997% ( 1) 00:18:51.556 12.705 - 12.800: 98.9073% ( 1) 00:18:51.556 15.360 - 15.455: 98.9150% ( 1) 00:18:51.556 15.644 - 15.739: 98.9455% ( 4) 00:18:51.556 15.739 - 15.834: 98.9761% ( 4) 00:18:51.556 15.834 - 15.929: 
98.9914% ( 2) 00:18:51.556 15.929 - 16.024: 99.0296% ( 5) 00:18:51.556 16.024 - 16.119: 99.0525% ( 3) 00:18:51.556 16.119 - 16.213: 99.0754% ( 3) 00:18:51.556 16.213 - 16.308: 99.0983% ( 3) 00:18:51.556 16.308 - 16.403: 99.1060% ( 1) 00:18:51.556 16.403 - 16.498: 99.1289% ( 3) 00:18:51.556 16.498 - 16.593: 99.1595% ( 4) 00:18:51.556 16.593 - 16.687: 99.1824% ( 3) 00:18:51.556 16.687 - 16.782: 99.2130% ( 4) 00:18:51.556 16.782 - 16.877: 99.2359% ( 3) 00:18:51.556 16.877 - 16.972: 99.2817% ( 6) 00:18:51.556 16.972 - 17.067: 99.3047% ( 3) 00:18:51.556 17.067 - 17.161: 99.3199% ( 2) 00:18:51.556 17.161 - 17.256: 99.3352% ( 2) 00:18:51.556 17.351 - 17.446: 99.3429% ( 1) 00:18:51.556 17.446 - 17.541: 99.3505% ( 1) 00:18:51.556 17.541 - 17.636: 99.3581% ( 1) 00:18:51.556 17.636 - 17.730: 99.3658% ( 1) 00:18:51.556 18.204 - 18.299: 99.3734% ( 1) 00:18:51.557 19.153 - 19.247: 99.3811% ( 1) 00:18:51.557 23.704 - 23.799: 99.3887% ( 1) 00:18:51.557 30.720 - 30.910: 99.3963% ( 1) 00:18:51.557 3106.892 - 3131.164: 99.4040% ( 1) 00:18:51.557 3980.705 - 4004.978: 99.8701% ( 61) 00:18:51.557 4004.978 - 4029.250: 99.9924% ( 16) 00:18:51.557 7961.410 - 8009.956: 100.0000% ( 1) 00:18:51.557 00:18:51.557 14:34:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:18:51.557 14:34:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:51.557 14:34:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:18:51.557 14:34:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:18:51.557 14:34:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:51.557 [ 00:18:51.557 { 00:18:51.557 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:51.557 "subtype": "Discovery", 00:18:51.557 "listen_addresses": [], 00:18:51.557 "allow_any_host": true, 00:18:51.557 "hosts": [] 00:18:51.557 }, 00:18:51.557 { 00:18:51.557 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:51.557 "subtype": "NVMe", 00:18:51.557 "listen_addresses": [ 00:18:51.557 { 00:18:51.557 "trtype": "VFIOUSER", 00:18:51.557 "adrfam": "IPv4", 00:18:51.557 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:51.557 "trsvcid": "0" 00:18:51.557 } 00:18:51.557 ], 00:18:51.557 "allow_any_host": true, 00:18:51.557 "hosts": [], 00:18:51.557 "serial_number": "SPDK1", 00:18:51.557 "model_number": "SPDK bdev Controller", 00:18:51.557 "max_namespaces": 32, 00:18:51.557 "min_cntlid": 1, 00:18:51.557 "max_cntlid": 65519, 00:18:51.557 "namespaces": [ 00:18:51.557 { 00:18:51.557 "nsid": 1, 00:18:51.557 "bdev_name": "Malloc1", 00:18:51.557 "name": "Malloc1", 00:18:51.557 "nguid": "27F437B4EE064C188FA397D6910105B3", 00:18:51.557 "uuid": "27f437b4-ee06-4c18-8fa3-97d6910105b3" 00:18:51.557 }, 00:18:51.557 { 00:18:51.557 "nsid": 2, 00:18:51.557 "bdev_name": "Malloc3", 00:18:51.557 "name": "Malloc3", 00:18:51.557 "nguid": "EF52FB38259F49FBAD35A8D5770DB3B0", 00:18:51.557 "uuid": "ef52fb38-259f-49fb-ad35-a8d5770db3b0" 00:18:51.557 } 00:18:51.557 ] 00:18:51.557 }, 00:18:51.557 { 00:18:51.557 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:51.557 "subtype": "NVMe", 00:18:51.557 "listen_addresses": [ 00:18:51.557 { 00:18:51.557 "trtype": "VFIOUSER", 00:18:51.557 "adrfam": 
"IPv4", 00:18:51.557 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:51.557 "trsvcid": "0" 00:18:51.557 } 00:18:51.557 ], 00:18:51.557 "allow_any_host": true, 00:18:51.557 "hosts": [], 00:18:51.557 "serial_number": "SPDK2", 00:18:51.557 "model_number": "SPDK bdev Controller", 00:18:51.557 "max_namespaces": 32, 00:18:51.557 "min_cntlid": 1, 00:18:51.557 "max_cntlid": 65519, 00:18:51.557 "namespaces": [ 00:18:51.557 { 00:18:51.557 "nsid": 1, 00:18:51.557 "bdev_name": "Malloc2", 00:18:51.557 "name": "Malloc2", 00:18:51.557 "nguid": "AF2BEA525AC14D9C8009018CBE5FCBA1", 00:18:51.557 "uuid": "af2bea52-5ac1-4d9c-8009-018cbe5fcba1" 00:18:51.557 } 00:18:51.557 ] 00:18:51.557 } 00:18:51.557 ] 00:18:51.816 14:34:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:51.816 14:34:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1368032 00:18:51.816 14:34:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:18:51.816 14:34:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:51.816 14:34:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:18:51.816 14:34:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:51.816 14:34:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:51.816 14:34:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:18:51.816 14:34:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:51.816 14:34:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:18:51.816 [2024-11-02 14:34:43.785713] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:52.075 Malloc4 00:18:52.075 14:34:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:18:52.334 [2024-11-02 14:34:44.203863] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:52.334 14:34:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:52.334 Asynchronous Event Request test 00:18:52.334 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:52.334 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:52.334 Registering asynchronous event callbacks... 00:18:52.334 Starting namespace attribute notice tests for all controllers... 00:18:52.334 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:52.334 aer_cb - Changed Namespace 00:18:52.334 Cleaning up... 
00:18:52.595 [ 00:18:52.595 { 00:18:52.595 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:52.595 "subtype": "Discovery", 00:18:52.595 "listen_addresses": [], 00:18:52.595 "allow_any_host": true, 00:18:52.595 "hosts": [] 00:18:52.595 }, 00:18:52.595 { 00:18:52.595 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:52.595 "subtype": "NVMe", 00:18:52.595 "listen_addresses": [ 00:18:52.595 { 00:18:52.595 "trtype": "VFIOUSER", 00:18:52.595 "adrfam": "IPv4", 00:18:52.595 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:52.595 "trsvcid": "0" 00:18:52.595 } 00:18:52.595 ], 00:18:52.595 "allow_any_host": true, 00:18:52.595 "hosts": [], 00:18:52.595 "serial_number": "SPDK1", 00:18:52.595 "model_number": "SPDK bdev Controller", 00:18:52.595 "max_namespaces": 32, 00:18:52.595 "min_cntlid": 1, 00:18:52.595 "max_cntlid": 65519, 00:18:52.595 "namespaces": [ 00:18:52.595 { 00:18:52.595 "nsid": 1, 00:18:52.595 "bdev_name": "Malloc1", 00:18:52.595 "name": "Malloc1", 00:18:52.595 "nguid": "27F437B4EE064C188FA397D6910105B3", 00:18:52.595 "uuid": "27f437b4-ee06-4c18-8fa3-97d6910105b3" 00:18:52.595 }, 00:18:52.595 { 00:18:52.595 "nsid": 2, 00:18:52.595 "bdev_name": "Malloc3", 00:18:52.595 "name": "Malloc3", 00:18:52.595 "nguid": "EF52FB38259F49FBAD35A8D5770DB3B0", 00:18:52.595 "uuid": "ef52fb38-259f-49fb-ad35-a8d5770db3b0" 00:18:52.595 } 00:18:52.595 ] 00:18:52.595 }, 00:18:52.595 { 00:18:52.595 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:52.595 "subtype": "NVMe", 00:18:52.595 "listen_addresses": [ 00:18:52.595 { 00:18:52.595 "trtype": "VFIOUSER", 00:18:52.595 "adrfam": "IPv4", 00:18:52.595 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:52.595 "trsvcid": "0" 00:18:52.595 } 00:18:52.595 ], 00:18:52.595 "allow_any_host": true, 00:18:52.595 "hosts": [], 00:18:52.595 "serial_number": "SPDK2", 00:18:52.595 "model_number": "SPDK bdev Controller", 00:18:52.595 "max_namespaces": 32, 00:18:52.595 "min_cntlid": 1, 00:18:52.595 "max_cntlid": 65519, 00:18:52.595 "namespaces": [ 00:18:52.595 { 00:18:52.595 "nsid": 1, 00:18:52.595 "bdev_name": "Malloc2", 00:18:52.595 "name": "Malloc2", 00:18:52.595 "nguid": "AF2BEA525AC14D9C8009018CBE5FCBA1", 00:18:52.595 "uuid": "af2bea52-5ac1-4d9c-8009-018cbe5fcba1" 00:18:52.595 }, 00:18:52.595 { 00:18:52.595 "nsid": 2, 00:18:52.595 "bdev_name": "Malloc4", 00:18:52.595 "name": "Malloc4", 00:18:52.595 "nguid": "59BCD2C5A32142F4AC448B0CFE77E41F", 00:18:52.595 "uuid": "59bcd2c5-a321-42f4-ac44-8b0cfe77e41f" 00:18:52.595 } 00:18:52.595 ] 00:18:52.595 } 00:18:52.595 ] 00:18:52.595 14:34:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1368032 00:18:52.595 14:34:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:18:52.595 14:34:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1362442 00:18:52.595 14:34:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 1362442 ']' 00:18:52.595 14:34:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 1362442 00:18:52.595 14:34:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:18:52.595 14:34:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:52.595 14:34:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1362442 00:18:52.595 14:34:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:52.595 14:34:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:52.595 14:34:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1362442' 00:18:52.595 killing process with pid 1362442 00:18:52.595 14:34:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 1362442 00:18:52.595 14:34:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 1362442 00:18:52.855 14:34:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:52.855 14:34:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:52.855 14:34:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:18:52.855 14:34:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:18:52.855 14:34:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:18:52.855 14:34:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1368239 00:18:52.855 14:34:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1368239' 00:18:52.855 Process pid: 1368239 00:18:52.855 14:34:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:18:52.855 14:34:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:52.855 14:34:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1368239 00:18:52.855 14:34:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 1368239 ']' 00:18:52.855 14:34:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:52.855 14:34:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:52.855 14:34:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:52.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:52.855 14:34:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:52.855 14:34:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:52.855 [2024-11-02 14:34:44.903280] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:18:52.855 [2024-11-02 14:34:44.904240] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:18:52.855 [2024-11-02 14:34:44.904336] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:53.115 [2024-11-02 14:34:44.969475] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:53.115 [2024-11-02 14:34:45.067164] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:53.115 [2024-11-02 14:34:45.067236] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:53.115 [2024-11-02 14:34:45.067252] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:53.115 [2024-11-02 14:34:45.067280] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:53.115 [2024-11-02 14:34:45.067293] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:53.115 [2024-11-02 14:34:45.067356] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:53.115 [2024-11-02 14:34:45.067413] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:18:53.115 [2024-11-02 14:34:45.067452] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:18:53.115 [2024-11-02 14:34:45.067455] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:53.375 [2024-11-02 14:34:45.176738] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:18:53.375 [2024-11-02 14:34:45.176972] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:18:53.375 [2024-11-02 14:34:45.177301] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:18:53.375 [2024-11-02 14:34:45.177919] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:18:53.375 [2024-11-02 14:34:45.178178] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
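From this point the script repeats the vfio-user setup against a target started with --interrupt-mode (hence the "Set SPDK running in interrupt mode" and per-poll-group "to intr mode" notices above), passing -M -I when creating the VFIOUSER transport. A condensed sketch of the per-device RPC sequence that follows, using the same names as before (Malloc1/SPDK1 on cnode1; the Malloc2/SPDK2/cnode2 pass is identical) and with rpc.py paths shortened; the -M -I transport flags are reproduced as the script passes them, without interpreting them here:

    scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I
    mkdir -p /var/run/vfio-user/domain/vfio-user1/1
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER \
        -a /var/run/vfio-user/domain/vfio-user1/1 -s 0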
00:18:53.375 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:53.375 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:18:53.375 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:18:54.310 14:34:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:18:54.569 14:34:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:18:54.569 14:34:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:18:54.569 14:34:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:54.569 14:34:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:18:54.569 14:34:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:54.828 Malloc1 00:18:54.828 14:34:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:18:55.086 14:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:18:55.653 14:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:18:55.653 14:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:55.653 14:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:18:55.653 14:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:55.912 Malloc2 00:18:56.171 14:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:18:56.429 14:34:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:18:56.687 14:34:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:18:56.945 14:34:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:18:56.945 14:34:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1368239 00:18:56.945 14:34:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@950 -- # '[' -z 1368239 ']' 00:18:56.945 14:34:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 1368239 00:18:56.945 14:34:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:18:56.945 14:34:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:56.945 14:34:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1368239 00:18:56.945 14:34:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:56.945 14:34:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:56.945 14:34:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1368239' 00:18:56.945 killing process with pid 1368239 00:18:56.945 14:34:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 1368239 00:18:56.945 14:34:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 1368239 00:18:57.204 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:57.204 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:57.204 00:18:57.204 real 0m53.887s 00:18:57.204 user 3m28.021s 00:18:57.204 sys 0m3.954s 00:18:57.204 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:57.204 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:57.204 ************************************ 00:18:57.204 END TEST nvmf_vfio_user 00:18:57.204 ************************************ 00:18:57.204 14:34:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:18:57.204 14:34:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:57.204 14:34:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:57.204 14:34:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:57.204 ************************************ 00:18:57.204 START TEST nvmf_vfio_user_nvme_compliance 00:18:57.204 ************************************ 00:18:57.204 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:18:57.204 * Looking for test storage... 
00:18:57.204 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:18:57.204 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:57.204 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # lcov --version 00:18:57.204 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:57.465 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:57.465 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:57.465 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:57.465 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:57.465 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:18:57.465 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:18:57.465 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:18:57.465 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:18:57.465 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:18:57.465 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:18:57.465 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:18:57.465 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:57.465 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:18:57.465 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:18:57.465 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:57.465 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:57.465 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:18:57.465 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:18:57.465 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:57.465 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:18:57.465 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:18:57.465 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:18:57.465 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:18:57.465 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:57.465 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:18:57.465 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:18:57.465 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:57.465 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:57.465 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:18:57.465 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:57.465 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:57.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:57.465 --rc genhtml_branch_coverage=1 00:18:57.465 --rc genhtml_function_coverage=1 00:18:57.465 --rc genhtml_legend=1 00:18:57.465 --rc geninfo_all_blocks=1 00:18:57.465 --rc geninfo_unexecuted_blocks=1 00:18:57.465 00:18:57.465 ' 00:18:57.465 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:57.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:57.465 --rc genhtml_branch_coverage=1 00:18:57.466 --rc genhtml_function_coverage=1 00:18:57.466 --rc genhtml_legend=1 00:18:57.466 --rc geninfo_all_blocks=1 00:18:57.466 --rc geninfo_unexecuted_blocks=1 00:18:57.466 00:18:57.466 ' 00:18:57.466 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:57.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:57.466 --rc genhtml_branch_coverage=1 00:18:57.466 --rc genhtml_function_coverage=1 00:18:57.466 --rc genhtml_legend=1 00:18:57.466 --rc geninfo_all_blocks=1 00:18:57.466 --rc geninfo_unexecuted_blocks=1 00:18:57.466 00:18:57.466 ' 00:18:57.466 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:57.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:57.466 --rc genhtml_branch_coverage=1 00:18:57.466 --rc genhtml_function_coverage=1 00:18:57.466 --rc genhtml_legend=1 00:18:57.466 --rc geninfo_all_blocks=1 00:18:57.466 --rc 
geninfo_unexecuted_blocks=1 00:18:57.466 00:18:57.466 ' 00:18:57.466 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:57.466 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:18:57.466 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:57.466 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:57.466 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:57.466 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:57.466 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:57.466 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:57.466 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:57.466 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:57.466 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:57.466 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:57.466 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:57.466 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:57.466 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:57.466 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:57.466 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:57.466 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:57.466 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:57.466 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:18:57.466 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:57.466 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:57.466 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:57.466 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.466 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.466 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.466 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:18:57.466 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.466 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:18:57.466 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:57.466 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:57.466 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:57.466 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:57.466 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:18:57.466 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:57.466 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:57.466 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:57.466 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:57.466 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:57.466 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:57.466 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:57.466 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:18:57.466 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:18:57.466 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:18:57.466 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1368795 00:18:57.466 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:18:57.466 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1368795' 00:18:57.466 Process pid: 1368795 00:18:57.466 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:57.466 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1368795 00:18:57.466 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 1368795 ']' 00:18:57.466 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:57.466 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:57.466 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:57.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:57.466 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:57.466 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:57.466 [2024-11-02 14:34:49.364663] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:18:57.466 [2024-11-02 14:34:49.364759] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:57.466 [2024-11-02 14:34:49.426035] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:57.466 [2024-11-02 14:34:49.516834] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:57.466 [2024-11-02 14:34:49.516896] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:57.466 [2024-11-02 14:34:49.516925] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:57.467 [2024-11-02 14:34:49.516936] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:57.467 [2024-11-02 14:34:49.516946] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:57.467 [2024-11-02 14:34:49.517023] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:57.467 [2024-11-02 14:34:49.517054] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:18:57.467 [2024-11-02 14:34:49.517057] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:57.742 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:57.742 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:18:57.742 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:18:58.682 14:34:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:18:58.682 14:34:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:18:58.682 14:34:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:18:58.682 14:34:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.682 14:34:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:58.682 14:34:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.682 14:34:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:18:58.682 14:34:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:18:58.682 14:34:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.682 14:34:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:58.682 malloc0 00:18:58.682 14:34:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.682 14:34:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:18:58.682 14:34:50 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.682 14:34:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:58.682 14:34:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.682 14:34:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:18:58.682 14:34:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.682 14:34:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:58.682 14:34:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.682 14:34:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:18:58.682 14:34:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.682 14:34:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:58.682 14:34:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.682 14:34:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:18:58.942 00:18:58.942 00:18:58.942 CUnit - A unit testing framework for C - Version 2.1-3 00:18:58.942 http://cunit.sourceforge.net/ 00:18:58.942 00:18:58.942 00:18:58.942 Suite: nvme_compliance 00:18:58.942 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-02 14:34:50.861912] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:58.942 [2024-11-02 14:34:50.863455] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:18:58.942 [2024-11-02 14:34:50.863479] vfio_user.c:5507:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:18:58.942 [2024-11-02 14:34:50.863493] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:18:58.942 [2024-11-02 14:34:50.864939] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:58.942 passed 00:18:58.942 Test: admin_identify_ctrlr_verify_fused ...[2024-11-02 14:34:50.951557] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:58.942 [2024-11-02 14:34:50.954574] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:58.942 passed 00:18:59.203 Test: admin_identify_ns ...[2024-11-02 14:34:51.042784] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:59.203 [2024-11-02 14:34:51.102289] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:18:59.203 [2024-11-02 14:34:51.110291] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:18:59.203 [2024-11-02 14:34:51.131416] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:18:59.203 passed 00:18:59.203 Test: admin_get_features_mandatory_features ...[2024-11-02 14:34:51.214152] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:59.203 [2024-11-02 14:34:51.217172] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:59.203 passed 00:18:59.463 Test: admin_get_features_optional_features ...[2024-11-02 14:34:51.301776] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:59.463 [2024-11-02 14:34:51.304794] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:59.463 passed 00:18:59.464 Test: admin_set_features_number_of_queues ...[2024-11-02 14:34:51.390834] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:59.464 [2024-11-02 14:34:51.496385] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:59.722 passed 00:18:59.722 Test: admin_get_log_page_mandatory_logs ...[2024-11-02 14:34:51.580095] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:59.722 [2024-11-02 14:34:51.583121] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:59.722 passed 00:18:59.722 Test: admin_get_log_page_with_lpo ...[2024-11-02 14:34:51.663339] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:59.722 [2024-11-02 14:34:51.733292] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:18:59.722 [2024-11-02 14:34:51.746330] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:59.981 passed 00:18:59.981 Test: fabric_property_get ...[2024-11-02 14:34:51.829996] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:59.981 [2024-11-02 14:34:51.831291] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:18:59.981 [2024-11-02 14:34:51.833018] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:59.981 passed 00:18:59.981 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-02 14:34:51.914598] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:59.981 [2024-11-02 14:34:51.915901] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:18:59.981 [2024-11-02 14:34:51.917623] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:59.981 passed 00:18:59.981 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-02 14:34:52.001791] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:00.241 [2024-11-02 14:34:52.085271] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:00.241 [2024-11-02 14:34:52.101280] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:00.241 [2024-11-02 14:34:52.106386] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:00.241 passed 00:19:00.241 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-02 14:34:52.189504] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:00.241 [2024-11-02 14:34:52.190830] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:19:00.241 [2024-11-02 14:34:52.192526] vfio_user.c:2798:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:19:00.241 passed 00:19:00.241 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-02 14:34:52.275793] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:00.500 [2024-11-02 14:34:52.351268] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:19:00.500 [2024-11-02 14:34:52.375266] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:00.500 [2024-11-02 14:34:52.380381] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:00.500 passed 00:19:00.500 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-02 14:34:52.463514] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:00.500 [2024-11-02 14:34:52.464835] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:19:00.500 [2024-11-02 14:34:52.464876] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:19:00.500 [2024-11-02 14:34:52.466534] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:00.500 passed 00:19:00.500 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-02 14:34:52.552733] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:00.760 [2024-11-02 14:34:52.643268] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:19:00.760 [2024-11-02 14:34:52.651268] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:19:00.760 [2024-11-02 14:34:52.659280] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:19:00.760 [2024-11-02 14:34:52.667282] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:19:00.760 [2024-11-02 14:34:52.696381] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:00.760 passed 00:19:00.760 Test: admin_create_io_sq_verify_pc ...[2024-11-02 14:34:52.779878] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:00.760 [2024-11-02 14:34:52.796281] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:19:00.760 [2024-11-02 14:34:52.814220] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:01.020 passed 00:19:01.020 Test: admin_create_io_qp_max_qps ...[2024-11-02 14:34:52.898818] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:02.400 [2024-11-02 14:34:54.025274] nvme_ctrlr.c:5504:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:19:02.400 [2024-11-02 14:34:54.399164] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:02.400 passed 00:19:02.659 Test: admin_create_io_sq_shared_cq ...[2024-11-02 14:34:54.483476] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:02.659 [2024-11-02 14:34:54.615278] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:19:02.659 [2024-11-02 14:34:54.652360] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:02.659 passed 00:19:02.659 00:19:02.659 Run Summary: Type Total Ran Passed Failed Inactive 00:19:02.659 suites 1 1 n/a 0 0 00:19:02.659 tests 18 18 18 0 0 00:19:02.659 asserts 360 
360 360 0 n/a 00:19:02.659 00:19:02.659 Elapsed time = 1.572 seconds 00:19:02.659 14:34:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1368795 00:19:02.659 14:34:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 1368795 ']' 00:19:02.659 14:34:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 1368795 00:19:02.659 14:34:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:19:02.659 14:34:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:02.659 14:34:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1368795 00:19:02.918 14:34:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:02.918 14:34:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:02.918 14:34:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1368795' 00:19:02.918 killing process with pid 1368795 00:19:02.918 14:34:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 1368795 00:19:02.918 14:34:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 1368795 00:19:03.177 14:34:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:19:03.177 14:34:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:19:03.177 00:19:03.177 real 0m5.863s 00:19:03.177 user 0m16.313s 00:19:03.177 sys 0m0.580s 00:19:03.177 14:34:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:03.177 14:34:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:03.177 ************************************ 00:19:03.177 END TEST nvmf_vfio_user_nvme_compliance 00:19:03.177 ************************************ 00:19:03.177 14:34:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:19:03.177 14:34:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:03.177 14:34:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:03.177 14:34:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:03.177 ************************************ 00:19:03.177 START TEST nvmf_vfio_user_fuzz 00:19:03.177 ************************************ 00:19:03.177 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:19:03.177 * Looking for test storage... 
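The teardown traced just above (killprocess 1368795) follows the usual autotest pattern: verify the pid, check the process name, kill, then reap. A rough reconstruction of that flow from the fragments in the trace (the real helper in autotest_common.sh is longer and also handles the sudo case, which is omitted here):

# Rough reconstruction of the killprocess flow seen in the trace; simplified, sudo branch omitted.
killprocess() {
    local pid=$1 process_name=
    [ -z "$pid" ] && return 1                 # '[' -z 1368795 ']' in the trace
    kill -0 "$pid" 2>/dev/null || return 0    # nothing to do if it already exited
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")   # 'reactor_0' for this target
    fi
    if [ "$process_name" != sudo ]; then      # the traced run takes this branch
        echo "killing process with pid $pid"
        kill "$pid"
    fi
    wait "$pid"    # reap the target so /var/run/vfio-user can be removed cleanly afterwards
}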
00:19:03.177 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:03.177 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:03.177 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:19:03.177 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:03.177 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:03.178 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:03.178 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:03.178 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:03.178 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:19:03.178 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:19:03.178 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:19:03.178 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:19:03.178 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:19:03.178 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:19:03.178 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:19:03.178 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:03.178 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:19:03.178 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:19:03.178 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:03.178 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:03.178 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:19:03.178 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:19:03.178 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:03.178 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:19:03.178 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:19:03.178 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:19:03.178 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:19:03.178 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:03.178 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:19:03.178 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:19:03.178 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:03.178 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:03.178 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:19:03.178 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:03.178 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:03.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:03.178 --rc genhtml_branch_coverage=1 00:19:03.178 --rc genhtml_function_coverage=1 00:19:03.178 --rc genhtml_legend=1 00:19:03.178 --rc geninfo_all_blocks=1 00:19:03.178 --rc geninfo_unexecuted_blocks=1 00:19:03.178 00:19:03.178 ' 00:19:03.178 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:03.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:03.178 --rc genhtml_branch_coverage=1 00:19:03.178 --rc genhtml_function_coverage=1 00:19:03.178 --rc genhtml_legend=1 00:19:03.178 --rc geninfo_all_blocks=1 00:19:03.178 --rc geninfo_unexecuted_blocks=1 00:19:03.178 00:19:03.178 ' 00:19:03.178 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:03.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:03.178 --rc genhtml_branch_coverage=1 00:19:03.178 --rc genhtml_function_coverage=1 00:19:03.178 --rc genhtml_legend=1 00:19:03.178 --rc geninfo_all_blocks=1 00:19:03.178 --rc geninfo_unexecuted_blocks=1 00:19:03.178 00:19:03.178 ' 00:19:03.178 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:03.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:03.178 --rc genhtml_branch_coverage=1 00:19:03.178 --rc genhtml_function_coverage=1 00:19:03.178 --rc genhtml_legend=1 00:19:03.178 --rc geninfo_all_blocks=1 00:19:03.178 --rc geninfo_unexecuted_blocks=1 00:19:03.178 00:19:03.178 ' 00:19:03.178 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:03.178 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:19:03.178 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:03.178 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:03.178 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:03.178 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:03.178 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:03.178 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:03.178 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:03.178 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:03.178 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:03.178 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:03.178 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:03.178 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:03.178 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:03.178 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:03.178 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:03.178 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:03.178 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:03.178 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:19:03.178 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:03.178 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:03.178 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:03.178 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.178 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.178 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.178 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:19:03.178 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.178 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:19:03.178 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:03.179 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:03.179 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:03.179 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:03.179 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:03.179 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:19:03.179 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:03.179 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:03.179 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:03.179 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:03.179 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:03.179 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:03.179 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:19:03.179 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:19:03.179 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:19:03.179 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:19:03.179 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:19:03.179 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1369630 00:19:03.179 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:03.179 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1369630' 00:19:03.179 Process pid: 1369630 00:19:03.179 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:03.179 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1369630 00:19:03.179 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 1369630 ']' 00:19:03.179 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:03.179 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:03.179 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:03.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
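waitforlisten 1369630 is traced above with rpc_addr=/var/tmp/spdk.sock and max_retries=100 before any rpc_cmd calls are made. Its body is not shown in the log; a minimal sketch of what such a readiness wait could look like, assuming it polls the RPC socket with scripts/rpc.py (the actual helper in autotest_common.sh is more elaborate):

# Illustrative readiness wait; the real waitforlisten differs in detail.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for (( i = 0; i < max_retries; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 1                    # target died during startup
        if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
            return 0                                              # RPC server is accepting requests
        fi
        sleep 0.5
    done
    return 1
}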
00:19:03.179 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:03.179 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:03.749 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:03.749 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:19:03.749 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:19:04.778 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:19:04.778 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.778 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:04.778 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.778 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:19:04.778 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:19:04.778 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.778 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:04.778 malloc0 00:19:04.778 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.778 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:19:04.778 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.778 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:04.778 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.778 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:19:04.778 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.778 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:04.778 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.778 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:19:04.778 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.778 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:04.778 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.778 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
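Collected in one place, the rpc_cmd sequence traced above is what provisions the vfio-user subsystem the fuzzer attaches to. The same calls written as direct rpc.py invocations (rpc_cmd is the harness wrapper; driving scripts/rpc.py by hand like this is an assumption about an equivalent manual run):

# Equivalent manual provisioning of the fuzz target, mirroring the rpc_cmd trace above.
RPC="scripts/rpc.py -s /var/tmp/spdk.sock"

$RPC nvmf_create_transport -t VFIOUSER               # enable the vfio-user transport
mkdir -p /var/run/vfio-user                          # socket and BAR files live here

$RPC bdev_malloc_create 64 512 -b malloc0            # 64 MiB RAM disk, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
$RPC nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
$RPC nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
    -t VFIOUSER -a /var/run/vfio-user -s 0

trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user'

The resulting trid string is exactly what nvme_fuzz receives through -F in the next step of the log.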
00:19:04.778 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:19:36.864 Fuzzing completed. Shutting down the fuzz application 00:19:36.864 00:19:36.864 Dumping successful admin opcodes: 00:19:36.864 8, 9, 10, 24, 00:19:36.864 Dumping successful io opcodes: 00:19:36.864 0, 00:19:36.864 NS: 0x200003a1ef00 I/O qp, Total commands completed: 695974, total successful commands: 2711, random_seed: 3977555904 00:19:36.864 NS: 0x200003a1ef00 admin qp, Total commands completed: 88434, total successful commands: 709, random_seed: 435879424 00:19:36.864 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:19:36.864 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.864 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:36.864 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.864 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1369630 00:19:36.864 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 1369630 ']' 00:19:36.864 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 1369630 00:19:36.864 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:19:36.864 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:36.864 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1369630 00:19:36.864 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:36.864 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:36.864 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1369630' 00:19:36.864 killing process with pid 1369630 00:19:36.864 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 1369630 00:19:36.864 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 1369630 00:19:36.864 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:19:36.864 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:19:36.864 00:19:36.864 real 0m32.373s 00:19:36.864 user 0m34.603s 00:19:36.864 sys 0m26.114s 00:19:36.864 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:36.864 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:36.864 
************************************ 00:19:36.864 END TEST nvmf_vfio_user_fuzz 00:19:36.864 ************************************ 00:19:36.864 14:35:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:36.864 14:35:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:36.864 14:35:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:36.864 14:35:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:36.864 ************************************ 00:19:36.864 START TEST nvmf_auth_target 00:19:36.864 ************************************ 00:19:36.864 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:36.864 * Looking for test storage... 00:19:36.864 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:36.864 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:36.864 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lcov --version 00:19:36.864 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:36.864 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:36.864 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:36.864 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:36.864 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:36.864 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:19:36.864 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:19:36.864 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:19:36.864 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:19:36.864 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:19:36.864 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:19:36.864 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:19:36.864 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:36.864 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:19:36.864 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:19:36.864 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:36.864 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:36.864 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:19:36.864 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:19:36.864 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:36.864 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:19:36.864 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:19:36.864 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:19:36.864 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:19:36.864 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:36.864 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:19:36.864 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:19:36.864 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:36.864 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:36.864 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:19:36.864 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:36.864 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:36.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:36.864 --rc genhtml_branch_coverage=1 00:19:36.864 --rc genhtml_function_coverage=1 00:19:36.865 --rc genhtml_legend=1 00:19:36.865 --rc geninfo_all_blocks=1 00:19:36.865 --rc geninfo_unexecuted_blocks=1 00:19:36.865 00:19:36.865 ' 00:19:36.865 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:36.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:36.865 --rc genhtml_branch_coverage=1 00:19:36.865 --rc genhtml_function_coverage=1 00:19:36.865 --rc genhtml_legend=1 00:19:36.865 --rc geninfo_all_blocks=1 00:19:36.865 --rc geninfo_unexecuted_blocks=1 00:19:36.865 00:19:36.865 ' 00:19:36.865 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:36.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:36.865 --rc genhtml_branch_coverage=1 00:19:36.865 --rc genhtml_function_coverage=1 00:19:36.865 --rc genhtml_legend=1 00:19:36.865 --rc geninfo_all_blocks=1 00:19:36.865 --rc geninfo_unexecuted_blocks=1 00:19:36.865 00:19:36.865 ' 00:19:36.865 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:36.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:36.865 --rc genhtml_branch_coverage=1 00:19:36.865 --rc genhtml_function_coverage=1 00:19:36.865 --rc genhtml_legend=1 00:19:36.865 --rc geninfo_all_blocks=1 00:19:36.865 --rc geninfo_unexecuted_blocks=1 00:19:36.865 00:19:36.865 ' 00:19:36.865 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:36.865 14:35:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:36.865 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:36.865 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:36.865 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:36.865 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:36.865 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:36.865 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:36.865 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:36.865 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:36.865 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:36.865 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:36.865 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:36.865 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:36.865 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:36.865 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:36.865 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:36.865 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:36.865 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:36.865 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:19:36.865 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:36.865 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:36.865 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:36.865 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:36.865 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:36.865 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:36.865 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:36.865 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:36.865 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:19:36.865 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:36.865 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:36.865 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:36.865 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:36.865 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:36.865 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:36.865 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:36.865 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:36.865 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:36.865 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:36.865 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:36.865 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:36.865 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:36.865 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:36.865 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:36.865 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:36.865 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:36.865 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:19:36.865 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:19:36.865 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:36.865 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:19:36.865 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:19:36.865 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:19:36.865 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:36.865 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:36.865 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:36.865 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:19:36.865 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:19:36.865 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:19:36.865 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.802 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:37.802 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:19:37.802 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:37.802 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:37.802 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:37.802 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:37.802 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:37.802 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:19:37.802 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:37.802 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:19:37.802 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:19:37.802 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:19:37.802 
14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:19:37.802 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:19:37.802 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:19:37.802 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:37.802 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:37.802 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:37.802 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:37.802 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:37.802 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:37.802 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:37.802 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:37.802 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:37.802 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:37.802 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:37.802 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:19:37.802 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:19:37.802 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:19:37.802 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:19:37.802 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:19:37.802 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:19:37.802 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:19:37.802 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:37.802 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:37.802 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:19:37.802 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:19:37.802 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:37.802 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:37.802 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:19:37.802 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:19:37.802 14:35:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:37.802 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:37.802 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:19:37.802 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:19:37.802 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:37.803 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:37.803 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:19:37.803 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:19:37.803 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:19:37.803 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:19:37.803 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:19:37.803 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:37.803 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:19:37.803 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:37.803 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:19:37.803 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:19:37.803 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:37.803 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:37.803 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:37.803 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:19:37.803 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:19:37.803 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:37.803 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:19:37.803 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:37.803 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:19:37.803 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:19:37.803 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:37.803 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:37.803 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:37.803 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:19:37.803 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # (( 2 == 0 )) 
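The discovery pass above maps each whitelisted PCI function to its kernel net device through sysfs. Condensed to its essentials (PCI addresses and resulting interface names as observed in this run; a sketch of what gather_supported_nvmf_pci_devs does, not the verbatim nvmf/common.sh code):

for pci in 0000:0a:00.0 0000:0a:00.1; do
    # every netdev registered for this PCI function shows up as a directory entry here
    for net in /sys/bus/pci/devices/"$pci"/net/*; do
        echo "Found net devices under $pci: ${net##*/}"    # -> cvl_0_0, cvl_0_1
    done
done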
00:19:37.803 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # is_hw=yes 00:19:37.803 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:19:37.803 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:19:37.803 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:19:37.803 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:37.803 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:37.803 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:37.803 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:37.803 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:37.803 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:37.803 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:37.803 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:37.803 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:37.803 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:37.803 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:37.803 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:37.803 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:37.803 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:37.803 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:37.803 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:37.803 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:37.803 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:37.803 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:37.803 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:37.803 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:37.803 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:37.803 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:37.803 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:37.803 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.306 ms 00:19:37.803 00:19:37.803 --- 10.0.0.2 ping statistics --- 00:19:37.803 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:37.803 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:19:37.803 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:37.803 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:37.803 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:19:37.803 00:19:37.803 --- 10.0.0.1 ping statistics --- 00:19:37.803 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:37.803 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:19:37.803 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:37.803 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # return 0 00:19:37.803 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:19:37.803 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:37.803 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:19:37.803 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:19:37.803 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:37.803 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:19:37.803 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:19:38.062 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:19:38.062 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:19:38.062 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:38.062 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.062 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # nvmfpid=1374972 00:19:38.062 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:38.062 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # waitforlisten 1374972 00:19:38.062 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1374972 ']' 00:19:38.062 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:38.062 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:38.062 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
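The two successful pings above conclude the TCP test-bed setup: the first E810 port (cvl_0_0) is moved into a private network namespace and plays the target, while the second port (cvl_0_1) stays in the root namespace as the initiator. Reduced to plain ip/iptables commands (interface names and addresses as in this run; a condensed recap of nvmf_tcp_init, not its verbatim text):

ip netns add cvl_0_0_ns_spdk                                  # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # move the first port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open the NVMe/TCP listener port
ping -c 1 10.0.0.2                                            # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target -> initiator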
00:19:38.062 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:38.062 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.321 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:38.321 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:19:38.321 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:19:38.321 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:38.321 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.321 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:38.321 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=1375078 00:19:38.321 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:38.321 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:38.321 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:19:38.321 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:19:38.321 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:38.321 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:19:38.321 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=null 00:19:38.321 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:19:38.321 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:38.321 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=df8f11c1827cf6d1e2c3ea3d2f6b9e86ab25ed0583cc50c5 00:19:38.321 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:19:38.321 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.6D5 00:19:38.321 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key df8f11c1827cf6d1e2c3ea3d2f6b9e86ab25ed0583cc50c5 0 00:19:38.321 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 df8f11c1827cf6d1e2c3ea3d2f6b9e86ab25ed0583cc50c5 0 00:19:38.321 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:19:38.321 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:19:38.321 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=df8f11c1827cf6d1e2c3ea3d2f6b9e86ab25ed0583cc50c5 00:19:38.321 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=0 00:19:38.321 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 
00:19:38.321 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.6D5 00:19:38.321 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.6D5 00:19:38.321 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.6D5 00:19:38.321 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:19:38.321 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:19:38.321 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:38.321 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:19:38.321 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha512 00:19:38.321 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=64 00:19:38.321 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:38.321 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=864e4028e3df59805855c11f2d57229ac15cda5b487244bf521e223b5328bc6b 00:19:38.321 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:19:38.321 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.8Rn 00:19:38.321 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 864e4028e3df59805855c11f2d57229ac15cda5b487244bf521e223b5328bc6b 3 00:19:38.321 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 864e4028e3df59805855c11f2d57229ac15cda5b487244bf521e223b5328bc6b 3 00:19:38.321 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:19:38.321 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:19:38.321 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=864e4028e3df59805855c11f2d57229ac15cda5b487244bf521e223b5328bc6b 00:19:38.321 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=3 00:19:38.321 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:19:38.321 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.8Rn 00:19:38.321 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.8Rn 00:19:38.321 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.8Rn 00:19:38.321 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:19:38.321 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:19:38.321 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:38.321 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:19:38.321 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha256 
00:19:38.321 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=32 00:19:38.321 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:38.321 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=76ac5900611b1d23950016ff35df07f1 00:19:38.321 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:19:38.321 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.iQt 00:19:38.321 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 76ac5900611b1d23950016ff35df07f1 1 00:19:38.321 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 76ac5900611b1d23950016ff35df07f1 1 00:19:38.321 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:19:38.321 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:19:38.321 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=76ac5900611b1d23950016ff35df07f1 00:19:38.321 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=1 00:19:38.321 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:19:38.581 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.iQt 00:19:38.581 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.iQt 00:19:38.581 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.iQt 00:19:38.581 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:19:38.581 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:19:38.581 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:38.581 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:19:38.581 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha384 00:19:38.581 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:19:38.581 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:38.581 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=bcee584171db4be033b4ef51f880fee16737ed592888041b 00:19:38.581 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:19:38.581 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.gkB 00:19:38.581 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key bcee584171db4be033b4ef51f880fee16737ed592888041b 2 00:19:38.581 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 bcee584171db4be033b4ef51f880fee16737ed592888041b 2 00:19:38.581 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:19:38.581 14:35:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:19:38.581 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=bcee584171db4be033b4ef51f880fee16737ed592888041b 00:19:38.581 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=2 00:19:38.581 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:19:38.581 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.gkB 00:19:38.581 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.gkB 00:19:38.581 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.gkB 00:19:38.581 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:19:38.581 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:19:38.581 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:38.581 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:19:38.581 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha384 00:19:38.581 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:19:38.581 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:38.581 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=6d4c8b5047ab2968d536e7fd6eb4b7edc62ab05070a4f322 00:19:38.581 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:19:38.581 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.IbM 00:19:38.581 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 6d4c8b5047ab2968d536e7fd6eb4b7edc62ab05070a4f322 2 00:19:38.581 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 6d4c8b5047ab2968d536e7fd6eb4b7edc62ab05070a4f322 2 00:19:38.581 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:19:38.581 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:19:38.581 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=6d4c8b5047ab2968d536e7fd6eb4b7edc62ab05070a4f322 00:19:38.581 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=2 00:19:38.581 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:19:38.581 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.IbM 00:19:38.582 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.IbM 00:19:38.582 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.IbM 00:19:38.582 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:19:38.582 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 
00:19:38.582 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:38.582 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:19:38.582 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha256 00:19:38.582 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=32 00:19:38.582 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:38.582 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=f76ce250fa3b0df713d40aa3203070bf 00:19:38.582 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:19:38.582 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.J81 00:19:38.582 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key f76ce250fa3b0df713d40aa3203070bf 1 00:19:38.582 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 f76ce250fa3b0df713d40aa3203070bf 1 00:19:38.582 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:19:38.582 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:19:38.582 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=f76ce250fa3b0df713d40aa3203070bf 00:19:38.582 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=1 00:19:38.582 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:19:38.582 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.J81 00:19:38.582 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.J81 00:19:38.582 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.J81 00:19:38.582 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:19:38.582 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:19:38.582 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:38.582 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:19:38.582 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha512 00:19:38.582 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=64 00:19:38.582 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:38.582 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=6b4a6b284376569f228469efada9ffbb92855e0dae7ca5ba93d5dc96e1989471 00:19:38.582 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:19:38.582 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.JEp 00:19:38.582 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # 
format_dhchap_key 6b4a6b284376569f228469efada9ffbb92855e0dae7ca5ba93d5dc96e1989471 3 00:19:38.582 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 6b4a6b284376569f228469efada9ffbb92855e0dae7ca5ba93d5dc96e1989471 3 00:19:38.582 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:19:38.582 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:19:38.582 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=6b4a6b284376569f228469efada9ffbb92855e0dae7ca5ba93d5dc96e1989471 00:19:38.582 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=3 00:19:38.582 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:19:38.582 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.JEp 00:19:38.582 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.JEp 00:19:38.582 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.JEp 00:19:38.582 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:19:38.582 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 1374972 00:19:38.582 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1374972 ']' 00:19:38.582 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:38.582 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:38.582 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:38.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:38.582 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:38.582 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.840 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:38.840 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:19:38.840 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 1375078 /var/tmp/host.sock 00:19:38.840 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1375078 ']' 00:19:38.840 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:19:38.840 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:38.840 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:38.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
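All of the secret files above come from gen_dhchap_key, which reads len/2 random bytes with xxd and hands the hex string to a small inline "python -" helper for formatting. The digest id in the resulting DHHC-1 prefix can be read straight off this trace (00 = null, 01 = sha256, 02 = sha384, 03 = sha512), while the tail of the base64 payload looks like a little-endian CRC32 appended to the ASCII hex key; that CRC detail is an inference, so treat the helper below as a hypothetical stand-in for format_dhchap_key rather than its actual code:

gen_dhchap_secret() {
    local hash_id=$1 key_bytes=$2 key
    key=$(xxd -p -c0 -l "$key_bytes" /dev/urandom)            # hex key material, as in the trace
    python3 -c '
import base64, sys, zlib
key = sys.argv[1].encode()                                    # the ASCII hex string, not raw bytes
crc = zlib.crc32(key).to_bytes(4, "little")                   # assumed CRC32 trailer
print("DHHC-1:%s:%s:" % (sys.argv[2], base64.b64encode(key + crc).decode()))
' "$key" "$hash_id"
}
# e.g. gen_dhchap_secret 00 24 -> DHHC-1:00:<base64 of 48 hex chars + CRC>:  (cf. keys[0] above)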
00:19:38.840 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:38.840 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.100 14:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:39.100 14:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:19:39.100 14:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:19:39.100 14:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.100 14:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.100 14:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.100 14:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:39.100 14:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.6D5 00:19:39.100 14:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.100 14:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.358 14:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.358 14:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.6D5 00:19:39.358 14:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.6D5 00:19:39.616 14:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.8Rn ]] 00:19:39.616 14:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.8Rn 00:19:39.616 14:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.616 14:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.616 14:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.616 14:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.8Rn 00:19:39.616 14:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.8Rn 00:19:39.873 14:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:39.873 14:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.iQt 00:19:39.873 14:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.873 14:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.873 14:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.873 14:35:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.iQt 00:19:39.873 14:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.iQt 00:19:40.130 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.gkB ]] 00:19:40.130 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.gkB 00:19:40.130 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.130 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.130 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.130 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.gkB 00:19:40.130 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.gkB 00:19:40.388 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:40.388 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.IbM 00:19:40.388 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.388 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.388 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.388 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.IbM 00:19:40.388 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.IbM 00:19:40.955 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.J81 ]] 00:19:40.955 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.J81 00:19:40.955 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.955 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.955 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.955 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.J81 00:19:40.955 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.J81 00:19:40.955 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:40.955 14:35:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.JEp 00:19:40.955 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.955 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.955 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.955 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.JEp 00:19:40.955 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.JEp 00:19:41.213 14:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:19:41.213 14:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:41.213 14:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:41.213 14:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:41.213 14:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:41.213 14:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:41.472 14:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:19:41.472 14:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:41.472 14:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:41.472 14:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:41.473 14:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:41.473 14:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:41.473 14:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:41.473 14:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.473 14:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.473 14:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.473 14:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:41.473 14:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:41.473 
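With the keys registered on both sides (rpc_cmd talks to the target over the default /var/tmp/spdk.sock, hostrpc to the initiator application over /var/tmp/host.sock), auth.sh enters its main test loop, whose headers are visible in the trace at target/auth.sh@118-@121. Reconstructed from those traced lines (the contents of the digests array are not shown in this excerpt; dhgroups is the array listed at the top of this run), the loop looks roughly like:

for digest in "${digests[@]}"; do                 # contents not visible here; sha256 is the value used below
    for dhgroup in "${dhgroups[@]}"; do           # null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192
        for keyid in "${!keys[@]}"; do            # 0..3
            # point the host-side NVMe layer at this digest/dhgroup combination...
            hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
            # ...then run one authenticated connect/disconnect round with key$keyid
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done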
14:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.042 00:19:42.042 14:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:42.042 14:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:42.042 14:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.300 14:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.300 14:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.300 14:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.300 14:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.300 14:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.300 14:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:42.300 { 00:19:42.300 "cntlid": 1, 00:19:42.300 "qid": 0, 00:19:42.300 "state": "enabled", 00:19:42.300 "thread": "nvmf_tgt_poll_group_000", 00:19:42.300 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:42.300 "listen_address": { 00:19:42.300 "trtype": "TCP", 00:19:42.300 "adrfam": "IPv4", 00:19:42.300 "traddr": "10.0.0.2", 00:19:42.300 "trsvcid": "4420" 00:19:42.300 }, 00:19:42.300 "peer_address": { 00:19:42.300 "trtype": "TCP", 00:19:42.300 "adrfam": "IPv4", 00:19:42.300 "traddr": "10.0.0.1", 00:19:42.300 "trsvcid": "51434" 00:19:42.300 }, 00:19:42.300 "auth": { 00:19:42.300 "state": "completed", 00:19:42.300 "digest": "sha256", 00:19:42.300 "dhgroup": "null" 00:19:42.300 } 00:19:42.300 } 00:19:42.300 ]' 00:19:42.300 14:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:42.300 14:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:42.300 14:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:42.300 14:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:42.300 14:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:42.300 14:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.300 14:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.300 14:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.559 14:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:ZGY4ZjExYzE4MjdjZjZkMWUyYzNlYTNkMmY2YjllODZhYjI1ZWQwNTgzY2M1MGM1q3J9vw==: --dhchap-ctrl-secret DHHC-1:03:ODY0ZTQwMjhlM2RmNTk4MDU4NTVjMTFmMmQ1NzIyOWFjMTVjZGE1YjQ4NzI0NGJmNTIxZTIyM2I1MzI4YmM2YlhOMuo=: 00:19:42.559 14:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZGY4ZjExYzE4MjdjZjZkMWUyYzNlYTNkMmY2YjllODZhYjI1ZWQwNTgzY2M1MGM1q3J9vw==: --dhchap-ctrl-secret DHHC-1:03:ODY0ZTQwMjhlM2RmNTk4MDU4NTVjMTFmMmQ1NzIyOWFjMTVjZGE1YjQ4NzI0NGJmNTIxZTIyM2I1MzI4YmM2YlhOMuo=: 00:19:43.497 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.497 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.497 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:43.497 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.497 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.497 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.497 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:43.497 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:43.497 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:44.065 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:19:44.065 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:44.065 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:44.065 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:44.065 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:44.065 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.065 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:44.065 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.065 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.065 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.065 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:44.065 14:35:35 
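Each connect_authenticate pass, as just completed for key0 and repeated below for the remaining keys, reduces to the following target/host calls (rpc.py paths shortened; NQNs, addresses and key names as in this run; the DHHC-1 secrets are the formatted strings shown above, abbreviated here):

# target: allow the host NQN on the subsystem and bind the DH-HMAC-CHAP keys to it
rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
# host app: attach through the SPDK initiator, then confirm the target sees the qpair as authenticated
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'   # "completed"
rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
# kernel initiator: repeat the handshake with nvme-cli, passing the formatted DHHC-1 secrets directly
nvme connect -t tcp -a 10.0.0.2 -i 1 -l 0 \
    -n nqn.2024-03.io.spdk:cnode0 \
    -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
    --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
# target: remove the host entry again so the next key/dhgroup combination starts clean
rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55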
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:44.065 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:44.323 00:19:44.323 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:44.323 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:44.323 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.582 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.582 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.582 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.582 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.582 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.582 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:44.582 { 00:19:44.582 "cntlid": 3, 00:19:44.582 "qid": 0, 00:19:44.582 "state": "enabled", 00:19:44.582 "thread": "nvmf_tgt_poll_group_000", 00:19:44.582 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:44.582 "listen_address": { 00:19:44.582 "trtype": "TCP", 00:19:44.582 "adrfam": "IPv4", 00:19:44.582 "traddr": "10.0.0.2", 00:19:44.582 "trsvcid": "4420" 00:19:44.582 }, 00:19:44.582 "peer_address": { 00:19:44.582 "trtype": "TCP", 00:19:44.582 "adrfam": "IPv4", 00:19:44.582 "traddr": "10.0.0.1", 00:19:44.582 "trsvcid": "35298" 00:19:44.582 }, 00:19:44.582 "auth": { 00:19:44.582 "state": "completed", 00:19:44.582 "digest": "sha256", 00:19:44.582 "dhgroup": "null" 00:19:44.582 } 00:19:44.582 } 00:19:44.582 ]' 00:19:44.582 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:44.582 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:44.582 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:44.582 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:44.582 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:44.582 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.582 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.582 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.841 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzZhYzU5MDA2MTFiMWQyMzk1MDAxNmZmMzVkZjA3ZjHc6Dc9: --dhchap-ctrl-secret DHHC-1:02:YmNlZTU4NDE3MWRiNGJlMDMzYjRlZjUxZjg4MGZlZTE2NzM3ZWQ1OTI4ODgwNDFiDtozkg==: 00:19:44.841 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NzZhYzU5MDA2MTFiMWQyMzk1MDAxNmZmMzVkZjA3ZjHc6Dc9: --dhchap-ctrl-secret DHHC-1:02:YmNlZTU4NDE3MWRiNGJlMDMzYjRlZjUxZjg4MGZlZTE2NzM3ZWQ1OTI4ODgwNDFiDtozkg==: 00:19:45.778 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.778 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.778 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:45.778 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.778 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.778 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.778 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:45.778 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:45.778 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:46.346 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:19:46.346 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:46.346 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:46.346 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:46.346 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:46.346 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.346 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:46.346 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.346 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.346 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.346 14:35:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:46.346 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:46.346 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:46.605 00:19:46.605 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:46.605 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:46.605 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.864 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.864 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:46.864 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.864 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.864 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.864 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:46.864 { 00:19:46.864 "cntlid": 5, 00:19:46.864 "qid": 0, 00:19:46.864 "state": "enabled", 00:19:46.864 "thread": "nvmf_tgt_poll_group_000", 00:19:46.864 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:46.864 "listen_address": { 00:19:46.864 "trtype": "TCP", 00:19:46.864 "adrfam": "IPv4", 00:19:46.864 "traddr": "10.0.0.2", 00:19:46.864 "trsvcid": "4420" 00:19:46.864 }, 00:19:46.864 "peer_address": { 00:19:46.864 "trtype": "TCP", 00:19:46.864 "adrfam": "IPv4", 00:19:46.864 "traddr": "10.0.0.1", 00:19:46.864 "trsvcid": "35332" 00:19:46.864 }, 00:19:46.864 "auth": { 00:19:46.864 "state": "completed", 00:19:46.864 "digest": "sha256", 00:19:46.864 "dhgroup": "null" 00:19:46.864 } 00:19:46.864 } 00:19:46.864 ]' 00:19:46.864 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:46.864 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:46.864 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:46.864 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:46.864 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:47.123 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.123 14:35:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.123 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.383 14:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmQ0YzhiNTA0N2FiMjk2OGQ1MzZlN2ZkNmViNGI3ZWRjNjJhYjA1MDcwYTRmMzIyyBZewQ==: --dhchap-ctrl-secret DHHC-1:01:Zjc2Y2UyNTBmYTNiMGRmNzEzZDQwYWEzMjAzMDcwYmbdJk/J: 00:19:47.383 14:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NmQ0YzhiNTA0N2FiMjk2OGQ1MzZlN2ZkNmViNGI3ZWRjNjJhYjA1MDcwYTRmMzIyyBZewQ==: --dhchap-ctrl-secret DHHC-1:01:Zjc2Y2UyNTBmYTNiMGRmNzEzZDQwYWEzMjAzMDcwYmbdJk/J: 00:19:48.320 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.320 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.320 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:48.320 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.320 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.320 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.320 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:48.320 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:48.320 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:48.578 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:19:48.578 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:48.578 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:48.578 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:48.578 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:48.578 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.578 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:48.578 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.578 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:48.578 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.578 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:48.578 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:48.578 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:48.837 00:19:48.837 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:48.837 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:48.837 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.095 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.095 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.095 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.095 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.095 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.095 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:49.095 { 00:19:49.095 "cntlid": 7, 00:19:49.095 "qid": 0, 00:19:49.095 "state": "enabled", 00:19:49.095 "thread": "nvmf_tgt_poll_group_000", 00:19:49.095 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:49.095 "listen_address": { 00:19:49.095 "trtype": "TCP", 00:19:49.095 "adrfam": "IPv4", 00:19:49.095 "traddr": "10.0.0.2", 00:19:49.095 "trsvcid": "4420" 00:19:49.095 }, 00:19:49.095 "peer_address": { 00:19:49.095 "trtype": "TCP", 00:19:49.095 "adrfam": "IPv4", 00:19:49.095 "traddr": "10.0.0.1", 00:19:49.095 "trsvcid": "35366" 00:19:49.095 }, 00:19:49.095 "auth": { 00:19:49.095 "state": "completed", 00:19:49.095 "digest": "sha256", 00:19:49.095 "dhgroup": "null" 00:19:49.095 } 00:19:49.095 } 00:19:49.095 ]' 00:19:49.095 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:49.353 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:49.353 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:49.353 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:49.353 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:49.353 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.353 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.353 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.612 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmI0YTZiMjg0Mzc2NTY5ZjIyODQ2OWVmYWRhOWZmYmI5Mjg1NWUwZGFlN2NhNWJhOTNkNWRjOTZlMTk4OTQ3MYxiaOU=: 00:19:49.612 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NmI0YTZiMjg0Mzc2NTY5ZjIyODQ2OWVmYWRhOWZmYmI5Mjg1NWUwZGFlN2NhNWJhOTNkNWRjOTZlMTk4OTQ3MYxiaOU=: 00:19:50.546 14:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.546 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.546 14:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:50.546 14:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.546 14:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.546 14:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.546 14:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:50.546 14:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:50.546 14:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:50.546 14:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:51.117 14:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:19:51.117 14:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:51.117 14:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:51.117 14:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:51.117 14:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:51.117 14:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.117 14:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:51.117 14:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.117 14:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.117 14:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.117 14:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:51.117 14:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:51.117 14:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:51.376 00:19:51.376 14:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:51.376 14:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:51.376 14:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.635 14:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.635 14:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.635 14:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.635 14:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.635 14:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.635 14:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:51.635 { 00:19:51.635 "cntlid": 9, 00:19:51.635 "qid": 0, 00:19:51.635 "state": "enabled", 00:19:51.635 "thread": "nvmf_tgt_poll_group_000", 00:19:51.635 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:51.635 "listen_address": { 00:19:51.635 "trtype": "TCP", 00:19:51.635 "adrfam": "IPv4", 00:19:51.635 "traddr": "10.0.0.2", 00:19:51.635 "trsvcid": "4420" 00:19:51.635 }, 00:19:51.635 "peer_address": { 00:19:51.635 "trtype": "TCP", 00:19:51.635 "adrfam": "IPv4", 00:19:51.635 "traddr": "10.0.0.1", 00:19:51.635 "trsvcid": "35390" 00:19:51.635 }, 00:19:51.635 "auth": { 00:19:51.635 "state": "completed", 00:19:51.635 "digest": "sha256", 00:19:51.635 "dhgroup": "ffdhe2048" 00:19:51.635 } 00:19:51.635 } 00:19:51.635 ]' 00:19:51.635 14:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:51.635 14:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:51.635 14:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:51.635 14:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:19:51.635 14:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:51.635 14:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.635 14:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.635 14:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.894 14:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGY4ZjExYzE4MjdjZjZkMWUyYzNlYTNkMmY2YjllODZhYjI1ZWQwNTgzY2M1MGM1q3J9vw==: --dhchap-ctrl-secret DHHC-1:03:ODY0ZTQwMjhlM2RmNTk4MDU4NTVjMTFmMmQ1NzIyOWFjMTVjZGE1YjQ4NzI0NGJmNTIxZTIyM2I1MzI4YmM2YlhOMuo=: 00:19:51.894 14:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZGY4ZjExYzE4MjdjZjZkMWUyYzNlYTNkMmY2YjllODZhYjI1ZWQwNTgzY2M1MGM1q3J9vw==: --dhchap-ctrl-secret DHHC-1:03:ODY0ZTQwMjhlM2RmNTk4MDU4NTVjMTFmMmQ1NzIyOWFjMTVjZGE1YjQ4NzI0NGJmNTIxZTIyM2I1MzI4YmM2YlhOMuo=: 00:19:52.829 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.829 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.829 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:52.829 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.829 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.829 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.829 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:52.829 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:52.829 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:53.399 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:19:53.399 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:53.399 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:53.399 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:53.399 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:53.399 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:53.399 14:35:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:53.399 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.399 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.399 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.399 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:53.399 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:53.399 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:53.658 00:19:53.658 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:53.658 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:53.658 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.916 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.916 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.916 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.916 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.916 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.916 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:53.916 { 00:19:53.916 "cntlid": 11, 00:19:53.916 "qid": 0, 00:19:53.916 "state": "enabled", 00:19:53.916 "thread": "nvmf_tgt_poll_group_000", 00:19:53.916 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:53.916 "listen_address": { 00:19:53.916 "trtype": "TCP", 00:19:53.916 "adrfam": "IPv4", 00:19:53.916 "traddr": "10.0.0.2", 00:19:53.916 "trsvcid": "4420" 00:19:53.916 }, 00:19:53.916 "peer_address": { 00:19:53.916 "trtype": "TCP", 00:19:53.916 "adrfam": "IPv4", 00:19:53.916 "traddr": "10.0.0.1", 00:19:53.916 "trsvcid": "50412" 00:19:53.916 }, 00:19:53.916 "auth": { 00:19:53.916 "state": "completed", 00:19:53.916 "digest": "sha256", 00:19:53.916 "dhgroup": "ffdhe2048" 00:19:53.916 } 00:19:53.916 } 00:19:53.916 ]' 00:19:53.916 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:53.916 14:35:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:53.916 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:53.916 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:53.916 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:53.916 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.916 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.916 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.176 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzZhYzU5MDA2MTFiMWQyMzk1MDAxNmZmMzVkZjA3ZjHc6Dc9: --dhchap-ctrl-secret DHHC-1:02:YmNlZTU4NDE3MWRiNGJlMDMzYjRlZjUxZjg4MGZlZTE2NzM3ZWQ1OTI4ODgwNDFiDtozkg==: 00:19:54.176 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NzZhYzU5MDA2MTFiMWQyMzk1MDAxNmZmMzVkZjA3ZjHc6Dc9: --dhchap-ctrl-secret DHHC-1:02:YmNlZTU4NDE3MWRiNGJlMDMzYjRlZjUxZjg4MGZlZTE2NzM3ZWQ1OTI4ODgwNDFiDtozkg==: 00:19:55.552 14:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.552 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.552 14:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:55.552 14:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.552 14:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.552 14:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.552 14:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:55.552 14:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:55.552 14:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:55.552 14:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:19:55.552 14:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:55.552 14:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:55.552 14:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:55.552 14:35:47 
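The same credentials are then exercised through the kernel initiator before the host entry is removed for the next combination. A condensed sketch with this run's values follows; the DHHC-1 secrets are deliberately elided here (they are the generated test keys echoed in the nvme_connect lines of the trace), and scripts/rpc.py standing in for rpc_cmd on the target side is again an assumption.
# connect through nvme-cli using the host secret and the bidirectional controller secret
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 \
    --dhchap-secret 'DHHC-1:01:...' --dhchap-ctrl-secret 'DHHC-1:02:...'
# drop the kernel connection again
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
# deregister the host from the subsystem before the next digest/dhgroup pass
scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55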
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:55.552 14:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:55.552 14:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:55.552 14:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.552 14:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.552 14:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.552 14:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:55.552 14:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:55.552 14:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:55.812 00:19:56.071 14:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:56.071 14:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:56.071 14:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.330 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.330 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.330 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.330 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.330 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.330 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:56.330 { 00:19:56.330 "cntlid": 13, 00:19:56.330 "qid": 0, 00:19:56.330 "state": "enabled", 00:19:56.330 "thread": "nvmf_tgt_poll_group_000", 00:19:56.330 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:56.330 "listen_address": { 00:19:56.330 "trtype": "TCP", 00:19:56.330 "adrfam": "IPv4", 00:19:56.330 "traddr": "10.0.0.2", 00:19:56.330 "trsvcid": "4420" 00:19:56.330 }, 00:19:56.330 "peer_address": { 00:19:56.330 "trtype": "TCP", 00:19:56.330 "adrfam": "IPv4", 00:19:56.330 "traddr": "10.0.0.1", 00:19:56.330 "trsvcid": "50440" 00:19:56.330 }, 00:19:56.330 "auth": { 00:19:56.330 "state": "completed", 00:19:56.330 "digest": 
"sha256", 00:19:56.330 "dhgroup": "ffdhe2048" 00:19:56.330 } 00:19:56.330 } 00:19:56.330 ]' 00:19:56.330 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:56.330 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:56.330 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:56.330 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:56.330 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:56.330 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.330 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.330 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.589 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmQ0YzhiNTA0N2FiMjk2OGQ1MzZlN2ZkNmViNGI3ZWRjNjJhYjA1MDcwYTRmMzIyyBZewQ==: --dhchap-ctrl-secret DHHC-1:01:Zjc2Y2UyNTBmYTNiMGRmNzEzZDQwYWEzMjAzMDcwYmbdJk/J: 00:19:56.589 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NmQ0YzhiNTA0N2FiMjk2OGQ1MzZlN2ZkNmViNGI3ZWRjNjJhYjA1MDcwYTRmMzIyyBZewQ==: --dhchap-ctrl-secret DHHC-1:01:Zjc2Y2UyNTBmYTNiMGRmNzEzZDQwYWEzMjAzMDcwYmbdJk/J: 00:19:57.526 14:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.526 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.526 14:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:57.526 14:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.526 14:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.526 14:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.526 14:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:57.526 14:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:57.526 14:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:57.787 14:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:19:57.787 14:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:57.787 14:35:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:57.787 14:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:57.787 14:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:57.787 14:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.787 14:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:57.787 14:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.787 14:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.787 14:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.787 14:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:57.787 14:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:57.787 14:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:58.360 00:19:58.360 14:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:58.360 14:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.360 14:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:58.618 14:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.619 14:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.619 14:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.619 14:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.619 14:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.619 14:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:58.619 { 00:19:58.619 "cntlid": 15, 00:19:58.619 "qid": 0, 00:19:58.619 "state": "enabled", 00:19:58.619 "thread": "nvmf_tgt_poll_group_000", 00:19:58.619 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:58.619 "listen_address": { 00:19:58.619 "trtype": "TCP", 00:19:58.619 "adrfam": "IPv4", 00:19:58.619 "traddr": "10.0.0.2", 00:19:58.619 "trsvcid": "4420" 00:19:58.619 }, 00:19:58.619 "peer_address": { 00:19:58.619 "trtype": "TCP", 00:19:58.619 "adrfam": "IPv4", 00:19:58.619 "traddr": "10.0.0.1", 00:19:58.619 
"trsvcid": "50468" 00:19:58.619 }, 00:19:58.619 "auth": { 00:19:58.619 "state": "completed", 00:19:58.619 "digest": "sha256", 00:19:58.619 "dhgroup": "ffdhe2048" 00:19:58.619 } 00:19:58.619 } 00:19:58.619 ]' 00:19:58.619 14:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:58.619 14:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:58.619 14:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:58.619 14:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:58.619 14:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:58.619 14:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.619 14:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.619 14:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.879 14:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmI0YTZiMjg0Mzc2NTY5ZjIyODQ2OWVmYWRhOWZmYmI5Mjg1NWUwZGFlN2NhNWJhOTNkNWRjOTZlMTk4OTQ3MYxiaOU=: 00:19:58.879 14:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NmI0YTZiMjg0Mzc2NTY5ZjIyODQ2OWVmYWRhOWZmYmI5Mjg1NWUwZGFlN2NhNWJhOTNkNWRjOTZlMTk4OTQ3MYxiaOU=: 00:19:59.821 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.821 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.821 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:59.821 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.821 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.821 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.821 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:59.821 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:59.821 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:59.821 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:00.389 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:20:00.389 14:35:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:00.389 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:00.389 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:00.389 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:00.389 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.389 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.389 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.389 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.389 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.389 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.389 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.389 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.648 00:20:00.648 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:00.648 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:00.648 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.906 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.906 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.906 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.906 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.906 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.906 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:00.906 { 00:20:00.906 "cntlid": 17, 00:20:00.906 "qid": 0, 00:20:00.906 "state": "enabled", 00:20:00.906 "thread": "nvmf_tgt_poll_group_000", 00:20:00.906 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:00.906 "listen_address": { 00:20:00.906 "trtype": "TCP", 00:20:00.907 "adrfam": "IPv4", 
00:20:00.907 "traddr": "10.0.0.2", 00:20:00.907 "trsvcid": "4420" 00:20:00.907 }, 00:20:00.907 "peer_address": { 00:20:00.907 "trtype": "TCP", 00:20:00.907 "adrfam": "IPv4", 00:20:00.907 "traddr": "10.0.0.1", 00:20:00.907 "trsvcid": "50486" 00:20:00.907 }, 00:20:00.907 "auth": { 00:20:00.907 "state": "completed", 00:20:00.907 "digest": "sha256", 00:20:00.907 "dhgroup": "ffdhe3072" 00:20:00.907 } 00:20:00.907 } 00:20:00.907 ]' 00:20:00.907 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:00.907 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:00.907 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:00.907 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:00.907 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:00.907 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.907 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.907 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.169 14:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGY4ZjExYzE4MjdjZjZkMWUyYzNlYTNkMmY2YjllODZhYjI1ZWQwNTgzY2M1MGM1q3J9vw==: --dhchap-ctrl-secret DHHC-1:03:ODY0ZTQwMjhlM2RmNTk4MDU4NTVjMTFmMmQ1NzIyOWFjMTVjZGE1YjQ4NzI0NGJmNTIxZTIyM2I1MzI4YmM2YlhOMuo=: 00:20:01.169 14:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZGY4ZjExYzE4MjdjZjZkMWUyYzNlYTNkMmY2YjllODZhYjI1ZWQwNTgzY2M1MGM1q3J9vw==: --dhchap-ctrl-secret DHHC-1:03:ODY0ZTQwMjhlM2RmNTk4MDU4NTVjMTFmMmQ1NzIyOWFjMTVjZGE1YjQ4NzI0NGJmNTIxZTIyM2I1MzI4YmM2YlhOMuo=: 00:20:02.158 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.417 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.417 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:02.417 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.417 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.417 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.417 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:02.417 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:02.417 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:02.676 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:20:02.676 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:02.676 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:02.676 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:02.676 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:02.676 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.676 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.676 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.676 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.676 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.676 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.676 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.676 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.934 00:20:02.934 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:02.934 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:02.934 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.193 14:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.193 14:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.193 14:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.193 14:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.193 14:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.193 14:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:03.193 { 
00:20:03.193 "cntlid": 19, 00:20:03.193 "qid": 0, 00:20:03.193 "state": "enabled", 00:20:03.193 "thread": "nvmf_tgt_poll_group_000", 00:20:03.193 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:03.193 "listen_address": { 00:20:03.193 "trtype": "TCP", 00:20:03.193 "adrfam": "IPv4", 00:20:03.193 "traddr": "10.0.0.2", 00:20:03.193 "trsvcid": "4420" 00:20:03.193 }, 00:20:03.193 "peer_address": { 00:20:03.193 "trtype": "TCP", 00:20:03.193 "adrfam": "IPv4", 00:20:03.193 "traddr": "10.0.0.1", 00:20:03.193 "trsvcid": "50518" 00:20:03.193 }, 00:20:03.193 "auth": { 00:20:03.193 "state": "completed", 00:20:03.193 "digest": "sha256", 00:20:03.193 "dhgroup": "ffdhe3072" 00:20:03.193 } 00:20:03.193 } 00:20:03.193 ]' 00:20:03.193 14:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:03.193 14:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:03.193 14:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:03.451 14:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:03.451 14:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:03.451 14:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.451 14:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.451 14:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.710 14:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzZhYzU5MDA2MTFiMWQyMzk1MDAxNmZmMzVkZjA3ZjHc6Dc9: --dhchap-ctrl-secret DHHC-1:02:YmNlZTU4NDE3MWRiNGJlMDMzYjRlZjUxZjg4MGZlZTE2NzM3ZWQ1OTI4ODgwNDFiDtozkg==: 00:20:03.710 14:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NzZhYzU5MDA2MTFiMWQyMzk1MDAxNmZmMzVkZjA3ZjHc6Dc9: --dhchap-ctrl-secret DHHC-1:02:YmNlZTU4NDE3MWRiNGJlMDMzYjRlZjUxZjg4MGZlZTE2NzM3ZWQ1OTI4ODgwNDFiDtozkg==: 00:20:04.650 14:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.650 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.650 14:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:04.650 14:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.650 14:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.650 14:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.650 14:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:04.650 14:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:04.650 14:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:04.908 14:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:20:04.908 14:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:04.908 14:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:04.908 14:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:04.908 14:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:04.908 14:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:04.908 14:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:04.908 14:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.908 14:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.908 14:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.908 14:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:04.908 14:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:04.908 14:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:05.479 00:20:05.479 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:05.479 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:05.479 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.738 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.738 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.738 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.738 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.738 14:35:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.738 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:05.738 { 00:20:05.738 "cntlid": 21, 00:20:05.738 "qid": 0, 00:20:05.738 "state": "enabled", 00:20:05.738 "thread": "nvmf_tgt_poll_group_000", 00:20:05.738 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:05.738 "listen_address": { 00:20:05.738 "trtype": "TCP", 00:20:05.738 "adrfam": "IPv4", 00:20:05.738 "traddr": "10.0.0.2", 00:20:05.738 "trsvcid": "4420" 00:20:05.738 }, 00:20:05.738 "peer_address": { 00:20:05.738 "trtype": "TCP", 00:20:05.738 "adrfam": "IPv4", 00:20:05.738 "traddr": "10.0.0.1", 00:20:05.738 "trsvcid": "45662" 00:20:05.738 }, 00:20:05.738 "auth": { 00:20:05.738 "state": "completed", 00:20:05.738 "digest": "sha256", 00:20:05.738 "dhgroup": "ffdhe3072" 00:20:05.738 } 00:20:05.738 } 00:20:05.738 ]' 00:20:05.738 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:05.738 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:05.738 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:05.738 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:05.738 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:05.738 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.738 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.738 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.998 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmQ0YzhiNTA0N2FiMjk2OGQ1MzZlN2ZkNmViNGI3ZWRjNjJhYjA1MDcwYTRmMzIyyBZewQ==: --dhchap-ctrl-secret DHHC-1:01:Zjc2Y2UyNTBmYTNiMGRmNzEzZDQwYWEzMjAzMDcwYmbdJk/J: 00:20:05.998 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NmQ0YzhiNTA0N2FiMjk2OGQ1MzZlN2ZkNmViNGI3ZWRjNjJhYjA1MDcwYTRmMzIyyBZewQ==: --dhchap-ctrl-secret DHHC-1:01:Zjc2Y2UyNTBmYTNiMGRmNzEzZDQwYWEzMjAzMDcwYmbdJk/J: 00:20:06.938 14:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:06.938 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.938 14:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:06.938 14:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.938 14:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.938 14:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:20:06.938 14:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:06.938 14:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:06.938 14:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:07.197 14:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:20:07.197 14:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:07.197 14:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:07.197 14:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:07.197 14:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:07.197 14:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.197 14:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:07.197 14:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.197 14:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.197 14:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.197 14:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:07.197 14:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:07.197 14:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:07.764 00:20:07.764 14:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:07.764 14:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:07.764 14:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.022 14:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.022 14:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:08.022 14:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.022 14:35:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.022 14:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.022 14:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:08.022 { 00:20:08.022 "cntlid": 23, 00:20:08.022 "qid": 0, 00:20:08.022 "state": "enabled", 00:20:08.022 "thread": "nvmf_tgt_poll_group_000", 00:20:08.022 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:08.022 "listen_address": { 00:20:08.022 "trtype": "TCP", 00:20:08.022 "adrfam": "IPv4", 00:20:08.022 "traddr": "10.0.0.2", 00:20:08.022 "trsvcid": "4420" 00:20:08.022 }, 00:20:08.022 "peer_address": { 00:20:08.022 "trtype": "TCP", 00:20:08.022 "adrfam": "IPv4", 00:20:08.022 "traddr": "10.0.0.1", 00:20:08.022 "trsvcid": "45700" 00:20:08.022 }, 00:20:08.022 "auth": { 00:20:08.022 "state": "completed", 00:20:08.022 "digest": "sha256", 00:20:08.022 "dhgroup": "ffdhe3072" 00:20:08.022 } 00:20:08.022 } 00:20:08.022 ]' 00:20:08.022 14:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:08.022 14:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:08.022 14:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:08.022 14:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:08.022 14:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:08.022 14:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:08.022 14:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.022 14:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.280 14:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmI0YTZiMjg0Mzc2NTY5ZjIyODQ2OWVmYWRhOWZmYmI5Mjg1NWUwZGFlN2NhNWJhOTNkNWRjOTZlMTk4OTQ3MYxiaOU=: 00:20:08.280 14:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NmI0YTZiMjg0Mzc2NTY5ZjIyODQ2OWVmYWRhOWZmYmI5Mjg1NWUwZGFlN2NhNWJhOTNkNWRjOTZlMTk4OTQ3MYxiaOU=: 00:20:09.214 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:09.214 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.214 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:09.214 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.215 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.215 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:20:09.215 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:09.215 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:09.215 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:09.215 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:09.781 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:20:09.781 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:09.781 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:09.781 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:09.781 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:09.781 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.781 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.781 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.781 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.781 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.781 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.781 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.781 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:10.039 00:20:10.039 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:10.039 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.039 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:10.297 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.297 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.297 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.297 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.297 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.297 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:10.297 { 00:20:10.297 "cntlid": 25, 00:20:10.297 "qid": 0, 00:20:10.297 "state": "enabled", 00:20:10.297 "thread": "nvmf_tgt_poll_group_000", 00:20:10.297 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:10.297 "listen_address": { 00:20:10.297 "trtype": "TCP", 00:20:10.297 "adrfam": "IPv4", 00:20:10.297 "traddr": "10.0.0.2", 00:20:10.297 "trsvcid": "4420" 00:20:10.297 }, 00:20:10.297 "peer_address": { 00:20:10.297 "trtype": "TCP", 00:20:10.297 "adrfam": "IPv4", 00:20:10.297 "traddr": "10.0.0.1", 00:20:10.297 "trsvcid": "45730" 00:20:10.297 }, 00:20:10.297 "auth": { 00:20:10.297 "state": "completed", 00:20:10.297 "digest": "sha256", 00:20:10.297 "dhgroup": "ffdhe4096" 00:20:10.297 } 00:20:10.297 } 00:20:10.297 ]' 00:20:10.297 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:10.297 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:10.297 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:10.297 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:10.297 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:10.555 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:10.555 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:10.555 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.813 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGY4ZjExYzE4MjdjZjZkMWUyYzNlYTNkMmY2YjllODZhYjI1ZWQwNTgzY2M1MGM1q3J9vw==: --dhchap-ctrl-secret DHHC-1:03:ODY0ZTQwMjhlM2RmNTk4MDU4NTVjMTFmMmQ1NzIyOWFjMTVjZGE1YjQ4NzI0NGJmNTIxZTIyM2I1MzI4YmM2YlhOMuo=: 00:20:10.813 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZGY4ZjExYzE4MjdjZjZkMWUyYzNlYTNkMmY2YjllODZhYjI1ZWQwNTgzY2M1MGM1q3J9vw==: --dhchap-ctrl-secret DHHC-1:03:ODY0ZTQwMjhlM2RmNTk4MDU4NTVjMTFmMmQ1NzIyOWFjMTVjZGE1YjQ4NzI0NGJmNTIxZTIyM2I1MzI4YmM2YlhOMuo=: 00:20:11.749 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.749 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:11.749 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:11.749 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.749 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.749 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.749 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:11.749 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:11.749 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:12.008 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:20:12.008 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:12.008 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:12.008 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:12.008 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:12.008 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:12.008 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:12.008 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.008 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.008 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.008 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:12.008 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:12.008 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:12.578 00:20:12.578 14:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:12.578 14:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:12.578 14:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.836 14:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.836 14:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.836 14:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.836 14:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.836 14:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.836 14:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:12.836 { 00:20:12.836 "cntlid": 27, 00:20:12.836 "qid": 0, 00:20:12.836 "state": "enabled", 00:20:12.836 "thread": "nvmf_tgt_poll_group_000", 00:20:12.836 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:12.836 "listen_address": { 00:20:12.836 "trtype": "TCP", 00:20:12.836 "adrfam": "IPv4", 00:20:12.836 "traddr": "10.0.0.2", 00:20:12.836 "trsvcid": "4420" 00:20:12.836 }, 00:20:12.836 "peer_address": { 00:20:12.836 "trtype": "TCP", 00:20:12.836 "adrfam": "IPv4", 00:20:12.836 "traddr": "10.0.0.1", 00:20:12.836 "trsvcid": "45742" 00:20:12.836 }, 00:20:12.836 "auth": { 00:20:12.836 "state": "completed", 00:20:12.836 "digest": "sha256", 00:20:12.836 "dhgroup": "ffdhe4096" 00:20:12.836 } 00:20:12.836 } 00:20:12.836 ]' 00:20:12.836 14:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:12.836 14:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:12.836 14:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:12.836 14:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:12.836 14:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:12.836 14:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:12.836 14:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:12.836 14:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.094 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzZhYzU5MDA2MTFiMWQyMzk1MDAxNmZmMzVkZjA3ZjHc6Dc9: --dhchap-ctrl-secret DHHC-1:02:YmNlZTU4NDE3MWRiNGJlMDMzYjRlZjUxZjg4MGZlZTE2NzM3ZWQ1OTI4ODgwNDFiDtozkg==: 00:20:13.095 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NzZhYzU5MDA2MTFiMWQyMzk1MDAxNmZmMzVkZjA3ZjHc6Dc9: --dhchap-ctrl-secret DHHC-1:02:YmNlZTU4NDE3MWRiNGJlMDMzYjRlZjUxZjg4MGZlZTE2NzM3ZWQ1OTI4ODgwNDFiDtozkg==: 00:20:14.043 14:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:20:14.043 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.043 14:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:14.043 14:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.043 14:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.043 14:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.043 14:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:14.043 14:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:14.043 14:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:14.301 14:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:20:14.301 14:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:14.301 14:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:14.301 14:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:14.301 14:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:14.301 14:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.301 14:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:14.301 14:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.301 14:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.301 14:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.301 14:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:14.301 14:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:14.301 14:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:14.868 00:20:14.868 14:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
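Each repetition in this trace is one pass of auth.sh's connect_authenticate loop: pin the SPDK host to a single DH-HMAC-CHAP digest/DH-group pair, authorize the host NQN on the subsystem with the key pair under test, attach a controller through the host RPC socket, read the resulting qpair back and check its auth block with jq, then detach before the next combination. A condensed sketch of one such iteration follows, built only from commands that appear in this trace; the target-side rpc.py default socket and the key names key2/ckey2 (registered into the keyring earlier in the run) are assumptions not shown here.

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
  subnqn=nqn.2024-03.io.spdk:cnode0

  # Host side: negotiate only this digest and DH group during DH-HMAC-CHAP
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

  # Target side: allow the host on the subsystem with the key pair under test
  $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Host side: attach a controller, authenticating with the same keys
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
      -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Verify the qpair actually authenticated with the expected parameters
  $rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'    # expect "completed"
  $rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.digest'   # expect "sha256"
  $rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.dhgroup'  # expect "ffdhe4096"

  # Tear down before the next digest/dhgroup/key combination
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
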
00:20:14.868 14:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:14.868 14:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:15.127 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.127 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:15.127 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.127 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.127 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.127 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:15.127 { 00:20:15.127 "cntlid": 29, 00:20:15.127 "qid": 0, 00:20:15.127 "state": "enabled", 00:20:15.127 "thread": "nvmf_tgt_poll_group_000", 00:20:15.127 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:15.127 "listen_address": { 00:20:15.127 "trtype": "TCP", 00:20:15.127 "adrfam": "IPv4", 00:20:15.127 "traddr": "10.0.0.2", 00:20:15.127 "trsvcid": "4420" 00:20:15.127 }, 00:20:15.127 "peer_address": { 00:20:15.127 "trtype": "TCP", 00:20:15.127 "adrfam": "IPv4", 00:20:15.127 "traddr": "10.0.0.1", 00:20:15.127 "trsvcid": "54910" 00:20:15.127 }, 00:20:15.127 "auth": { 00:20:15.127 "state": "completed", 00:20:15.127 "digest": "sha256", 00:20:15.127 "dhgroup": "ffdhe4096" 00:20:15.127 } 00:20:15.127 } 00:20:15.127 ]' 00:20:15.127 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:15.127 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:15.127 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:15.127 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:15.127 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:15.127 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:15.127 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.127 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.694 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmQ0YzhiNTA0N2FiMjk2OGQ1MzZlN2ZkNmViNGI3ZWRjNjJhYjA1MDcwYTRmMzIyyBZewQ==: --dhchap-ctrl-secret DHHC-1:01:Zjc2Y2UyNTBmYTNiMGRmNzEzZDQwYWEzMjAzMDcwYmbdJk/J: 00:20:15.694 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NmQ0YzhiNTA0N2FiMjk2OGQ1MzZlN2ZkNmViNGI3ZWRjNjJhYjA1MDcwYTRmMzIyyBZewQ==: 
--dhchap-ctrl-secret DHHC-1:01:Zjc2Y2UyNTBmYTNiMGRmNzEzZDQwYWEzMjAzMDcwYmbdJk/J: 00:20:16.630 14:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.630 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.630 14:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:16.630 14:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.630 14:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.631 14:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.631 14:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:16.631 14:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:16.631 14:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:16.631 14:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:20:16.631 14:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:16.631 14:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:16.631 14:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:16.631 14:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:16.631 14:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.631 14:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:16.631 14:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.631 14:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.631 14:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.631 14:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:16.889 14:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:16.889 14:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:17.147 00:20:17.147 14:36:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:17.147 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.147 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:17.406 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.406 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:17.406 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.406 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.406 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.406 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:17.406 { 00:20:17.406 "cntlid": 31, 00:20:17.406 "qid": 0, 00:20:17.406 "state": "enabled", 00:20:17.406 "thread": "nvmf_tgt_poll_group_000", 00:20:17.406 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:17.406 "listen_address": { 00:20:17.406 "trtype": "TCP", 00:20:17.406 "adrfam": "IPv4", 00:20:17.406 "traddr": "10.0.0.2", 00:20:17.406 "trsvcid": "4420" 00:20:17.406 }, 00:20:17.406 "peer_address": { 00:20:17.406 "trtype": "TCP", 00:20:17.406 "adrfam": "IPv4", 00:20:17.406 "traddr": "10.0.0.1", 00:20:17.406 "trsvcid": "54946" 00:20:17.406 }, 00:20:17.406 "auth": { 00:20:17.406 "state": "completed", 00:20:17.406 "digest": "sha256", 00:20:17.406 "dhgroup": "ffdhe4096" 00:20:17.406 } 00:20:17.406 } 00:20:17.406 ]' 00:20:17.406 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:17.406 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:17.406 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:17.664 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:17.664 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:17.664 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:17.664 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.664 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.922 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmI0YTZiMjg0Mzc2NTY5ZjIyODQ2OWVmYWRhOWZmYmI5Mjg1NWUwZGFlN2NhNWJhOTNkNWRjOTZlMTk4OTQ3MYxiaOU=: 00:20:17.922 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret 
DHHC-1:03:NmI0YTZiMjg0Mzc2NTY5ZjIyODQ2OWVmYWRhOWZmYmI5Mjg1NWUwZGFlN2NhNWJhOTNkNWRjOTZlMTk4OTQ3MYxiaOU=: 00:20:18.859 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:18.859 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.859 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:18.859 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.859 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.859 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.859 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:18.859 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:18.859 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:18.859 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:19.117 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:20:19.117 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:19.117 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:19.117 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:19.117 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:19.117 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:19.117 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.117 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.117 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.117 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.117 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.117 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.117 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.682 00:20:19.682 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:19.682 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.682 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:19.940 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.940 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.940 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.940 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.940 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.940 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:19.940 { 00:20:19.940 "cntlid": 33, 00:20:19.940 "qid": 0, 00:20:19.940 "state": "enabled", 00:20:19.940 "thread": "nvmf_tgt_poll_group_000", 00:20:19.940 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:19.940 "listen_address": { 00:20:19.940 "trtype": "TCP", 00:20:19.940 "adrfam": "IPv4", 00:20:19.940 "traddr": "10.0.0.2", 00:20:19.940 "trsvcid": "4420" 00:20:19.940 }, 00:20:19.940 "peer_address": { 00:20:19.940 "trtype": "TCP", 00:20:19.940 "adrfam": "IPv4", 00:20:19.940 "traddr": "10.0.0.1", 00:20:19.940 "trsvcid": "54980" 00:20:19.940 }, 00:20:19.940 "auth": { 00:20:19.940 "state": "completed", 00:20:19.940 "digest": "sha256", 00:20:19.940 "dhgroup": "ffdhe6144" 00:20:19.940 } 00:20:19.940 } 00:20:19.940 ]' 00:20:19.940 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:19.940 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:19.941 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:20.199 14:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:20.199 14:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:20.199 14:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:20.199 14:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:20.199 14:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.456 14:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGY4ZjExYzE4MjdjZjZkMWUyYzNlYTNkMmY2YjllODZhYjI1ZWQwNTgzY2M1MGM1q3J9vw==: --dhchap-ctrl-secret 
DHHC-1:03:ODY0ZTQwMjhlM2RmNTk4MDU4NTVjMTFmMmQ1NzIyOWFjMTVjZGE1YjQ4NzI0NGJmNTIxZTIyM2I1MzI4YmM2YlhOMuo=: 00:20:20.456 14:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZGY4ZjExYzE4MjdjZjZkMWUyYzNlYTNkMmY2YjllODZhYjI1ZWQwNTgzY2M1MGM1q3J9vw==: --dhchap-ctrl-secret DHHC-1:03:ODY0ZTQwMjhlM2RmNTk4MDU4NTVjMTFmMmQ1NzIyOWFjMTVjZGE1YjQ4NzI0NGJmNTIxZTIyM2I1MzI4YmM2YlhOMuo=: 00:20:21.391 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:21.391 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:21.391 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:21.391 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.391 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.391 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.391 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:21.391 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:21.391 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:21.649 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:20:21.649 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:21.649 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:21.649 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:21.649 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:21.649 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.649 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.649 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.649 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.649 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.649 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.649 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.649 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.217 00:20:22.217 14:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:22.217 14:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:22.217 14:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:22.476 14:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.476 14:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:22.476 14:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.476 14:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.476 14:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.476 14:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:22.476 { 00:20:22.476 "cntlid": 35, 00:20:22.476 "qid": 0, 00:20:22.476 "state": "enabled", 00:20:22.476 "thread": "nvmf_tgt_poll_group_000", 00:20:22.476 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:22.476 "listen_address": { 00:20:22.476 "trtype": "TCP", 00:20:22.476 "adrfam": "IPv4", 00:20:22.476 "traddr": "10.0.0.2", 00:20:22.476 "trsvcid": "4420" 00:20:22.476 }, 00:20:22.476 "peer_address": { 00:20:22.476 "trtype": "TCP", 00:20:22.476 "adrfam": "IPv4", 00:20:22.476 "traddr": "10.0.0.1", 00:20:22.476 "trsvcid": "55006" 00:20:22.476 }, 00:20:22.476 "auth": { 00:20:22.476 "state": "completed", 00:20:22.476 "digest": "sha256", 00:20:22.476 "dhgroup": "ffdhe6144" 00:20:22.476 } 00:20:22.476 } 00:20:22.476 ]' 00:20:22.476 14:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:22.476 14:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:22.476 14:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:22.476 14:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:22.476 14:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:22.734 14:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:22.734 14:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.734 14:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.994 14:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzZhYzU5MDA2MTFiMWQyMzk1MDAxNmZmMzVkZjA3ZjHc6Dc9: --dhchap-ctrl-secret DHHC-1:02:YmNlZTU4NDE3MWRiNGJlMDMzYjRlZjUxZjg4MGZlZTE2NzM3ZWQ1OTI4ODgwNDFiDtozkg==: 00:20:22.994 14:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NzZhYzU5MDA2MTFiMWQyMzk1MDAxNmZmMzVkZjA3ZjHc6Dc9: --dhchap-ctrl-secret DHHC-1:02:YmNlZTU4NDE3MWRiNGJlMDMzYjRlZjUxZjg4MGZlZTE2NzM3ZWQ1OTI4ODgwNDFiDtozkg==: 00:20:23.933 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.933 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.933 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:23.933 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.933 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.933 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.933 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:23.933 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:23.933 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:24.191 14:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:20:24.191 14:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:24.191 14:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:24.191 14:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:24.191 14:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:24.191 14:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:24.191 14:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:24.191 14:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.191 14:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.191 14:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.191 14:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:24.191 14:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:24.191 14:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:24.758 00:20:24.758 14:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:24.758 14:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:24.758 14:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.016 14:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.016 14:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.016 14:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.016 14:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.016 14:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.016 14:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:25.016 { 00:20:25.016 "cntlid": 37, 00:20:25.016 "qid": 0, 00:20:25.016 "state": "enabled", 00:20:25.016 "thread": "nvmf_tgt_poll_group_000", 00:20:25.016 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:25.016 "listen_address": { 00:20:25.016 "trtype": "TCP", 00:20:25.016 "adrfam": "IPv4", 00:20:25.016 "traddr": "10.0.0.2", 00:20:25.016 "trsvcid": "4420" 00:20:25.016 }, 00:20:25.016 "peer_address": { 00:20:25.016 "trtype": "TCP", 00:20:25.016 "adrfam": "IPv4", 00:20:25.016 "traddr": "10.0.0.1", 00:20:25.016 "trsvcid": "36258" 00:20:25.016 }, 00:20:25.016 "auth": { 00:20:25.016 "state": "completed", 00:20:25.016 "digest": "sha256", 00:20:25.016 "dhgroup": "ffdhe6144" 00:20:25.016 } 00:20:25.016 } 00:20:25.016 ]' 00:20:25.016 14:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:25.016 14:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:25.016 14:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:25.016 14:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:25.016 14:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:25.016 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.016 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:20:25.016 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.275 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmQ0YzhiNTA0N2FiMjk2OGQ1MzZlN2ZkNmViNGI3ZWRjNjJhYjA1MDcwYTRmMzIyyBZewQ==: --dhchap-ctrl-secret DHHC-1:01:Zjc2Y2UyNTBmYTNiMGRmNzEzZDQwYWEzMjAzMDcwYmbdJk/J: 00:20:25.275 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NmQ0YzhiNTA0N2FiMjk2OGQ1MzZlN2ZkNmViNGI3ZWRjNjJhYjA1MDcwYTRmMzIyyBZewQ==: --dhchap-ctrl-secret DHHC-1:01:Zjc2Y2UyNTBmYTNiMGRmNzEzZDQwYWEzMjAzMDcwYmbdJk/J: 00:20:26.652 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.652 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.652 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:26.652 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.652 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.652 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.652 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:26.652 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:26.652 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:26.652 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:20:26.652 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:26.652 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:26.652 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:26.652 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:26.652 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:26.652 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:26.652 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.652 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.652 14:36:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.652 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:26.652 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:26.652 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:27.221 00:20:27.221 14:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:27.221 14:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:27.221 14:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.479 14:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.479 14:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.479 14:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.479 14:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.479 14:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.479 14:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:27.479 { 00:20:27.479 "cntlid": 39, 00:20:27.479 "qid": 0, 00:20:27.479 "state": "enabled", 00:20:27.479 "thread": "nvmf_tgt_poll_group_000", 00:20:27.479 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:27.479 "listen_address": { 00:20:27.479 "trtype": "TCP", 00:20:27.479 "adrfam": "IPv4", 00:20:27.479 "traddr": "10.0.0.2", 00:20:27.479 "trsvcid": "4420" 00:20:27.479 }, 00:20:27.479 "peer_address": { 00:20:27.479 "trtype": "TCP", 00:20:27.479 "adrfam": "IPv4", 00:20:27.479 "traddr": "10.0.0.1", 00:20:27.479 "trsvcid": "36290" 00:20:27.479 }, 00:20:27.479 "auth": { 00:20:27.479 "state": "completed", 00:20:27.479 "digest": "sha256", 00:20:27.479 "dhgroup": "ffdhe6144" 00:20:27.479 } 00:20:27.479 } 00:20:27.479 ]' 00:20:27.479 14:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:27.479 14:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:27.479 14:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:27.479 14:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:27.479 14:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:27.741 14:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:20:27.741 14:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.741 14:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.010 14:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmI0YTZiMjg0Mzc2NTY5ZjIyODQ2OWVmYWRhOWZmYmI5Mjg1NWUwZGFlN2NhNWJhOTNkNWRjOTZlMTk4OTQ3MYxiaOU=: 00:20:28.010 14:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NmI0YTZiMjg0Mzc2NTY5ZjIyODQ2OWVmYWRhOWZmYmI5Mjg1NWUwZGFlN2NhNWJhOTNkNWRjOTZlMTk4OTQ3MYxiaOU=: 00:20:28.944 14:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.944 14:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:28.944 14:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.944 14:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.944 14:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.944 14:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:28.944 14:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:28.944 14:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:28.944 14:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:29.203 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:20:29.203 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:29.203 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:29.203 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:29.203 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:29.203 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.203 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.203 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:20:29.203 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.203 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.203 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.203 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.203 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:30.142 00:20:30.142 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:30.142 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:30.142 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.400 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.400 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.400 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.400 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.400 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.400 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:30.400 { 00:20:30.400 "cntlid": 41, 00:20:30.400 "qid": 0, 00:20:30.400 "state": "enabled", 00:20:30.400 "thread": "nvmf_tgt_poll_group_000", 00:20:30.400 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:30.400 "listen_address": { 00:20:30.400 "trtype": "TCP", 00:20:30.400 "adrfam": "IPv4", 00:20:30.400 "traddr": "10.0.0.2", 00:20:30.400 "trsvcid": "4420" 00:20:30.400 }, 00:20:30.400 "peer_address": { 00:20:30.400 "trtype": "TCP", 00:20:30.400 "adrfam": "IPv4", 00:20:30.400 "traddr": "10.0.0.1", 00:20:30.400 "trsvcid": "36318" 00:20:30.400 }, 00:20:30.400 "auth": { 00:20:30.400 "state": "completed", 00:20:30.400 "digest": "sha256", 00:20:30.400 "dhgroup": "ffdhe8192" 00:20:30.400 } 00:20:30.400 } 00:20:30.400 ]' 00:20:30.400 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:30.400 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:30.400 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:30.400 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:30.400 14:36:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:30.400 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.400 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.400 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.966 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGY4ZjExYzE4MjdjZjZkMWUyYzNlYTNkMmY2YjllODZhYjI1ZWQwNTgzY2M1MGM1q3J9vw==: --dhchap-ctrl-secret DHHC-1:03:ODY0ZTQwMjhlM2RmNTk4MDU4NTVjMTFmMmQ1NzIyOWFjMTVjZGE1YjQ4NzI0NGJmNTIxZTIyM2I1MzI4YmM2YlhOMuo=: 00:20:30.966 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZGY4ZjExYzE4MjdjZjZkMWUyYzNlYTNkMmY2YjllODZhYjI1ZWQwNTgzY2M1MGM1q3J9vw==: --dhchap-ctrl-secret DHHC-1:03:ODY0ZTQwMjhlM2RmNTk4MDU4NTVjMTFmMmQ1NzIyOWFjMTVjZGE1YjQ4NzI0NGJmNTIxZTIyM2I1MzI4YmM2YlhOMuo=: 00:20:31.941 14:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.941 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.941 14:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:31.941 14:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.941 14:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.941 14:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.941 14:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:31.941 14:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:31.941 14:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:32.220 14:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:20:32.220 14:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:32.220 14:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:32.220 14:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:32.220 14:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:32.220 14:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.220 14:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.220 14:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.220 14:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.220 14:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.220 14:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.220 14:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.220 14:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.788 00:20:33.046 14:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:33.046 14:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:33.046 14:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:33.304 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.304 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:33.304 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.304 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.304 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.304 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:33.304 { 00:20:33.304 "cntlid": 43, 00:20:33.304 "qid": 0, 00:20:33.304 "state": "enabled", 00:20:33.304 "thread": "nvmf_tgt_poll_group_000", 00:20:33.304 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:33.304 "listen_address": { 00:20:33.304 "trtype": "TCP", 00:20:33.304 "adrfam": "IPv4", 00:20:33.304 "traddr": "10.0.0.2", 00:20:33.304 "trsvcid": "4420" 00:20:33.304 }, 00:20:33.304 "peer_address": { 00:20:33.304 "trtype": "TCP", 00:20:33.304 "adrfam": "IPv4", 00:20:33.304 "traddr": "10.0.0.1", 00:20:33.304 "trsvcid": "36356" 00:20:33.304 }, 00:20:33.304 "auth": { 00:20:33.304 "state": "completed", 00:20:33.304 "digest": "sha256", 00:20:33.304 "dhgroup": "ffdhe8192" 00:20:33.304 } 00:20:33.304 } 00:20:33.304 ]' 00:20:33.304 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:33.304 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:20:33.304 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:33.304 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:33.304 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:33.304 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:33.304 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:33.304 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.562 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzZhYzU5MDA2MTFiMWQyMzk1MDAxNmZmMzVkZjA3ZjHc6Dc9: --dhchap-ctrl-secret DHHC-1:02:YmNlZTU4NDE3MWRiNGJlMDMzYjRlZjUxZjg4MGZlZTE2NzM3ZWQ1OTI4ODgwNDFiDtozkg==: 00:20:33.562 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NzZhYzU5MDA2MTFiMWQyMzk1MDAxNmZmMzVkZjA3ZjHc6Dc9: --dhchap-ctrl-secret DHHC-1:02:YmNlZTU4NDE3MWRiNGJlMDMzYjRlZjUxZjg4MGZlZTE2NzM3ZWQ1OTI4ODgwNDFiDtozkg==: 00:20:34.938 14:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.938 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.938 14:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:34.938 14:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.938 14:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.938 14:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.938 14:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:34.938 14:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:34.938 14:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:34.939 14:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:20:34.939 14:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:34.939 14:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:34.939 14:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:34.939 14:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:34.939 14:36:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:34.939 14:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:34.939 14:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.939 14:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.939 14:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.939 14:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:34.939 14:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:34.939 14:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:35.876 00:20:35.876 14:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:35.876 14:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:35.876 14:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.134 14:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.134 14:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.134 14:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.134 14:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.134 14:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.134 14:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:36.134 { 00:20:36.134 "cntlid": 45, 00:20:36.134 "qid": 0, 00:20:36.134 "state": "enabled", 00:20:36.134 "thread": "nvmf_tgt_poll_group_000", 00:20:36.134 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:36.134 "listen_address": { 00:20:36.134 "trtype": "TCP", 00:20:36.134 "adrfam": "IPv4", 00:20:36.134 "traddr": "10.0.0.2", 00:20:36.134 "trsvcid": "4420" 00:20:36.135 }, 00:20:36.135 "peer_address": { 00:20:36.135 "trtype": "TCP", 00:20:36.135 "adrfam": "IPv4", 00:20:36.135 "traddr": "10.0.0.1", 00:20:36.135 "trsvcid": "50194" 00:20:36.135 }, 00:20:36.135 "auth": { 00:20:36.135 "state": "completed", 00:20:36.135 "digest": "sha256", 00:20:36.135 "dhgroup": "ffdhe8192" 00:20:36.135 } 00:20:36.135 } 00:20:36.135 ]' 00:20:36.135 
14:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:36.135 14:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:36.135 14:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:36.135 14:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:36.135 14:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:36.135 14:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:36.135 14:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:36.135 14:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.393 14:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmQ0YzhiNTA0N2FiMjk2OGQ1MzZlN2ZkNmViNGI3ZWRjNjJhYjA1MDcwYTRmMzIyyBZewQ==: --dhchap-ctrl-secret DHHC-1:01:Zjc2Y2UyNTBmYTNiMGRmNzEzZDQwYWEzMjAzMDcwYmbdJk/J: 00:20:36.393 14:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NmQ0YzhiNTA0N2FiMjk2OGQ1MzZlN2ZkNmViNGI3ZWRjNjJhYjA1MDcwYTRmMzIyyBZewQ==: --dhchap-ctrl-secret DHHC-1:01:Zjc2Y2UyNTBmYTNiMGRmNzEzZDQwYWEzMjAzMDcwYmbdJk/J: 00:20:37.767 14:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:37.767 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:37.767 14:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:37.767 14:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.767 14:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.767 14:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.767 14:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:37.767 14:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:37.767 14:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:37.767 14:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:20:37.767 14:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:37.767 14:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:37.767 14:36:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:37.767 14:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:37.767 14:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:37.767 14:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:37.767 14:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.767 14:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.767 14:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.767 14:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:37.767 14:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:37.767 14:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:38.703 00:20:38.703 14:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:38.703 14:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:38.703 14:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:38.962 14:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.962 14:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:38.962 14:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.962 14:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.962 14:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.962 14:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:38.962 { 00:20:38.962 "cntlid": 47, 00:20:38.962 "qid": 0, 00:20:38.962 "state": "enabled", 00:20:38.962 "thread": "nvmf_tgt_poll_group_000", 00:20:38.962 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:38.962 "listen_address": { 00:20:38.962 "trtype": "TCP", 00:20:38.962 "adrfam": "IPv4", 00:20:38.962 "traddr": "10.0.0.2", 00:20:38.962 "trsvcid": "4420" 00:20:38.962 }, 00:20:38.962 "peer_address": { 00:20:38.962 "trtype": "TCP", 00:20:38.962 "adrfam": "IPv4", 00:20:38.962 "traddr": "10.0.0.1", 00:20:38.962 "trsvcid": "50224" 00:20:38.962 }, 00:20:38.962 "auth": { 00:20:38.962 "state": "completed", 00:20:38.962 
"digest": "sha256", 00:20:38.962 "dhgroup": "ffdhe8192" 00:20:38.962 } 00:20:38.962 } 00:20:38.962 ]' 00:20:38.962 14:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:38.962 14:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:38.962 14:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:38.962 14:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:38.962 14:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:39.220 14:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.220 14:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.220 14:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.480 14:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmI0YTZiMjg0Mzc2NTY5ZjIyODQ2OWVmYWRhOWZmYmI5Mjg1NWUwZGFlN2NhNWJhOTNkNWRjOTZlMTk4OTQ3MYxiaOU=: 00:20:39.480 14:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NmI0YTZiMjg0Mzc2NTY5ZjIyODQ2OWVmYWRhOWZmYmI5Mjg1NWUwZGFlN2NhNWJhOTNkNWRjOTZlMTk4OTQ3MYxiaOU=: 00:20:40.418 14:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.418 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.418 14:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:40.418 14:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.418 14:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.418 14:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.418 14:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:40.418 14:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:40.418 14:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:40.418 14:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:40.418 14:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:40.676 14:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:20:40.676 14:36:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:40.676 14:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:40.676 14:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:40.676 14:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:40.676 14:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.676 14:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.676 14:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.676 14:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.676 14:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.676 14:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.676 14:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.676 14:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.934 00:20:40.934 14:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:40.934 14:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:40.934 14:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.192 14:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.192 14:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.192 14:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.192 14:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.192 14:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.192 14:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:41.192 { 00:20:41.192 "cntlid": 49, 00:20:41.192 "qid": 0, 00:20:41.192 "state": "enabled", 00:20:41.192 "thread": "nvmf_tgt_poll_group_000", 00:20:41.192 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:41.192 "listen_address": { 00:20:41.192 "trtype": "TCP", 00:20:41.192 "adrfam": "IPv4", 
00:20:41.192 "traddr": "10.0.0.2", 00:20:41.192 "trsvcid": "4420" 00:20:41.192 }, 00:20:41.192 "peer_address": { 00:20:41.192 "trtype": "TCP", 00:20:41.192 "adrfam": "IPv4", 00:20:41.192 "traddr": "10.0.0.1", 00:20:41.192 "trsvcid": "50258" 00:20:41.193 }, 00:20:41.193 "auth": { 00:20:41.193 "state": "completed", 00:20:41.193 "digest": "sha384", 00:20:41.193 "dhgroup": "null" 00:20:41.193 } 00:20:41.193 } 00:20:41.193 ]' 00:20:41.193 14:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:41.193 14:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:41.193 14:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:41.451 14:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:41.451 14:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:41.451 14:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.451 14:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.451 14:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.709 14:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGY4ZjExYzE4MjdjZjZkMWUyYzNlYTNkMmY2YjllODZhYjI1ZWQwNTgzY2M1MGM1q3J9vw==: --dhchap-ctrl-secret DHHC-1:03:ODY0ZTQwMjhlM2RmNTk4MDU4NTVjMTFmMmQ1NzIyOWFjMTVjZGE1YjQ4NzI0NGJmNTIxZTIyM2I1MzI4YmM2YlhOMuo=: 00:20:41.709 14:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZGY4ZjExYzE4MjdjZjZkMWUyYzNlYTNkMmY2YjllODZhYjI1ZWQwNTgzY2M1MGM1q3J9vw==: --dhchap-ctrl-secret DHHC-1:03:ODY0ZTQwMjhlM2RmNTk4MDU4NTVjMTFmMmQ1NzIyOWFjMTVjZGE1YjQ4NzI0NGJmNTIxZTIyM2I1MzI4YmM2YlhOMuo=: 00:20:42.642 14:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.642 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:42.642 14:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:42.642 14:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.642 14:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.642 14:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.642 14:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:42.642 14:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:42.642 14:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:42.900 14:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:20:42.900 14:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:42.900 14:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:42.900 14:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:42.900 14:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:42.900 14:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:42.900 14:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.900 14:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.900 14:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.900 14:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.900 14:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.900 14:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.900 14:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.465 00:20:43.465 14:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:43.465 14:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:43.465 14:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.723 14:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.723 14:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:43.723 14:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.723 14:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.723 14:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.723 14:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:43.723 { 00:20:43.723 "cntlid": 51, 00:20:43.723 "qid": 0, 00:20:43.723 "state": "enabled", 
00:20:43.723 "thread": "nvmf_tgt_poll_group_000", 00:20:43.723 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:43.723 "listen_address": { 00:20:43.723 "trtype": "TCP", 00:20:43.723 "adrfam": "IPv4", 00:20:43.723 "traddr": "10.0.0.2", 00:20:43.723 "trsvcid": "4420" 00:20:43.723 }, 00:20:43.723 "peer_address": { 00:20:43.723 "trtype": "TCP", 00:20:43.723 "adrfam": "IPv4", 00:20:43.723 "traddr": "10.0.0.1", 00:20:43.723 "trsvcid": "52214" 00:20:43.723 }, 00:20:43.723 "auth": { 00:20:43.723 "state": "completed", 00:20:43.723 "digest": "sha384", 00:20:43.723 "dhgroup": "null" 00:20:43.723 } 00:20:43.723 } 00:20:43.723 ]' 00:20:43.723 14:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:43.723 14:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:43.723 14:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:43.723 14:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:43.723 14:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:43.723 14:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:43.723 14:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:43.723 14:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.980 14:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzZhYzU5MDA2MTFiMWQyMzk1MDAxNmZmMzVkZjA3ZjHc6Dc9: --dhchap-ctrl-secret DHHC-1:02:YmNlZTU4NDE3MWRiNGJlMDMzYjRlZjUxZjg4MGZlZTE2NzM3ZWQ1OTI4ODgwNDFiDtozkg==: 00:20:43.980 14:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NzZhYzU5MDA2MTFiMWQyMzk1MDAxNmZmMzVkZjA3ZjHc6Dc9: --dhchap-ctrl-secret DHHC-1:02:YmNlZTU4NDE3MWRiNGJlMDMzYjRlZjUxZjg4MGZlZTE2NzM3ZWQ1OTI4ODgwNDFiDtozkg==: 00:20:44.912 14:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.912 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:44.912 14:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:44.912 14:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.912 14:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.912 14:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.912 14:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:44.912 14:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:20:44.912 14:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:45.478 14:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:20:45.478 14:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:45.478 14:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:45.478 14:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:45.478 14:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:45.478 14:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.478 14:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.478 14:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.478 14:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.478 14:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.478 14:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.478 14:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.478 14:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.736 00:20:45.736 14:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:45.736 14:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:45.736 14:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.993 14:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.993 14:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.993 14:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.994 14:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.994 14:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.994 14:36:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:45.994 { 00:20:45.994 "cntlid": 53, 00:20:45.994 "qid": 0, 00:20:45.994 "state": "enabled", 00:20:45.994 "thread": "nvmf_tgt_poll_group_000", 00:20:45.994 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:45.994 "listen_address": { 00:20:45.994 "trtype": "TCP", 00:20:45.994 "adrfam": "IPv4", 00:20:45.994 "traddr": "10.0.0.2", 00:20:45.994 "trsvcid": "4420" 00:20:45.994 }, 00:20:45.994 "peer_address": { 00:20:45.994 "trtype": "TCP", 00:20:45.994 "adrfam": "IPv4", 00:20:45.994 "traddr": "10.0.0.1", 00:20:45.994 "trsvcid": "52238" 00:20:45.994 }, 00:20:45.994 "auth": { 00:20:45.994 "state": "completed", 00:20:45.994 "digest": "sha384", 00:20:45.994 "dhgroup": "null" 00:20:45.994 } 00:20:45.994 } 00:20:45.994 ]' 00:20:45.994 14:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:45.994 14:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:45.994 14:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:45.994 14:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:45.994 14:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:45.994 14:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.994 14:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.994 14:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.251 14:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmQ0YzhiNTA0N2FiMjk2OGQ1MzZlN2ZkNmViNGI3ZWRjNjJhYjA1MDcwYTRmMzIyyBZewQ==: --dhchap-ctrl-secret DHHC-1:01:Zjc2Y2UyNTBmYTNiMGRmNzEzZDQwYWEzMjAzMDcwYmbdJk/J: 00:20:46.251 14:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NmQ0YzhiNTA0N2FiMjk2OGQ1MzZlN2ZkNmViNGI3ZWRjNjJhYjA1MDcwYTRmMzIyyBZewQ==: --dhchap-ctrl-secret DHHC-1:01:Zjc2Y2UyNTBmYTNiMGRmNzEzZDQwYWEzMjAzMDcwYmbdJk/J: 00:20:47.184 14:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.184 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.184 14:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:47.184 14:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.184 14:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.442 14:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.442 14:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:20:47.442 14:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:47.442 14:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:47.700 14:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:20:47.700 14:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:47.700 14:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:47.700 14:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:47.700 14:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:47.700 14:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:47.700 14:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:47.700 14:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.700 14:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.700 14:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.700 14:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:47.700 14:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:47.700 14:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:47.958 00:20:47.958 14:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:47.958 14:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:47.958 14:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:48.215 14:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.215 14:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:48.215 14:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.215 14:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.215 14:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.215 14:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:48.215 { 00:20:48.215 "cntlid": 55, 00:20:48.215 "qid": 0, 00:20:48.215 "state": "enabled", 00:20:48.215 "thread": "nvmf_tgt_poll_group_000", 00:20:48.215 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:48.215 "listen_address": { 00:20:48.215 "trtype": "TCP", 00:20:48.215 "adrfam": "IPv4", 00:20:48.215 "traddr": "10.0.0.2", 00:20:48.215 "trsvcid": "4420" 00:20:48.215 }, 00:20:48.215 "peer_address": { 00:20:48.215 "trtype": "TCP", 00:20:48.215 "adrfam": "IPv4", 00:20:48.215 "traddr": "10.0.0.1", 00:20:48.215 "trsvcid": "52270" 00:20:48.215 }, 00:20:48.215 "auth": { 00:20:48.215 "state": "completed", 00:20:48.215 "digest": "sha384", 00:20:48.215 "dhgroup": "null" 00:20:48.215 } 00:20:48.215 } 00:20:48.215 ]' 00:20:48.215 14:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:48.215 14:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:48.215 14:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:48.215 14:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:48.215 14:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:48.215 14:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:48.215 14:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.215 14:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.780 14:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmI0YTZiMjg0Mzc2NTY5ZjIyODQ2OWVmYWRhOWZmYmI5Mjg1NWUwZGFlN2NhNWJhOTNkNWRjOTZlMTk4OTQ3MYxiaOU=: 00:20:48.780 14:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NmI0YTZiMjg0Mzc2NTY5ZjIyODQ2OWVmYWRhOWZmYmI5Mjg1NWUwZGFlN2NhNWJhOTNkNWRjOTZlMTk4OTQ3MYxiaOU=: 00:20:49.719 14:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.719 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.719 14:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:49.719 14:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.719 14:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.719 14:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.719 14:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:49.719 14:36:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:49.719 14:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:49.719 14:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:49.977 14:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:20:49.977 14:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:49.977 14:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:49.977 14:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:49.977 14:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:49.977 14:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.977 14:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:49.977 14:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.977 14:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.977 14:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.977 14:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:49.977 14:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:49.977 14:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:50.235 00:20:50.235 14:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:50.235 14:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.235 14:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:50.493 14:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.493 14:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.493 14:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:20:50.493 14:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.493 14:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.493 14:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:50.493 { 00:20:50.493 "cntlid": 57, 00:20:50.493 "qid": 0, 00:20:50.493 "state": "enabled", 00:20:50.493 "thread": "nvmf_tgt_poll_group_000", 00:20:50.493 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:50.493 "listen_address": { 00:20:50.494 "trtype": "TCP", 00:20:50.494 "adrfam": "IPv4", 00:20:50.494 "traddr": "10.0.0.2", 00:20:50.494 "trsvcid": "4420" 00:20:50.494 }, 00:20:50.494 "peer_address": { 00:20:50.494 "trtype": "TCP", 00:20:50.494 "adrfam": "IPv4", 00:20:50.494 "traddr": "10.0.0.1", 00:20:50.494 "trsvcid": "52282" 00:20:50.494 }, 00:20:50.494 "auth": { 00:20:50.494 "state": "completed", 00:20:50.494 "digest": "sha384", 00:20:50.494 "dhgroup": "ffdhe2048" 00:20:50.494 } 00:20:50.494 } 00:20:50.494 ]' 00:20:50.494 14:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:50.494 14:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:50.494 14:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:50.494 14:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:50.494 14:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:50.494 14:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.494 14:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.494 14:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.752 14:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGY4ZjExYzE4MjdjZjZkMWUyYzNlYTNkMmY2YjllODZhYjI1ZWQwNTgzY2M1MGM1q3J9vw==: --dhchap-ctrl-secret DHHC-1:03:ODY0ZTQwMjhlM2RmNTk4MDU4NTVjMTFmMmQ1NzIyOWFjMTVjZGE1YjQ4NzI0NGJmNTIxZTIyM2I1MzI4YmM2YlhOMuo=: 00:20:50.752 14:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZGY4ZjExYzE4MjdjZjZkMWUyYzNlYTNkMmY2YjllODZhYjI1ZWQwNTgzY2M1MGM1q3J9vw==: --dhchap-ctrl-secret DHHC-1:03:ODY0ZTQwMjhlM2RmNTk4MDU4NTVjMTFmMmQ1NzIyOWFjMTVjZGE1YjQ4NzI0NGJmNTIxZTIyM2I1MzI4YmM2YlhOMuo=: 00:20:52.126 14:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.126 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.126 14:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:52.126 14:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.126 14:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.126 14:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.126 14:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:52.126 14:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:52.126 14:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:52.126 14:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:20:52.126 14:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:52.127 14:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:52.127 14:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:52.127 14:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:52.127 14:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:52.127 14:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:52.127 14:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.127 14:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.127 14:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.127 14:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:52.127 14:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:52.127 14:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:52.384 00:20:52.384 14:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:52.384 14:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:52.384 14:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:52.642 14:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.642 14:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:52.642 14:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.642 14:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.642 14:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.642 14:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:52.642 { 00:20:52.642 "cntlid": 59, 00:20:52.642 "qid": 0, 00:20:52.642 "state": "enabled", 00:20:52.642 "thread": "nvmf_tgt_poll_group_000", 00:20:52.642 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:52.642 "listen_address": { 00:20:52.642 "trtype": "TCP", 00:20:52.642 "adrfam": "IPv4", 00:20:52.642 "traddr": "10.0.0.2", 00:20:52.642 "trsvcid": "4420" 00:20:52.642 }, 00:20:52.642 "peer_address": { 00:20:52.642 "trtype": "TCP", 00:20:52.642 "adrfam": "IPv4", 00:20:52.642 "traddr": "10.0.0.1", 00:20:52.642 "trsvcid": "52312" 00:20:52.642 }, 00:20:52.642 "auth": { 00:20:52.642 "state": "completed", 00:20:52.642 "digest": "sha384", 00:20:52.642 "dhgroup": "ffdhe2048" 00:20:52.642 } 00:20:52.642 } 00:20:52.642 ]' 00:20:52.642 14:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:52.900 14:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:52.900 14:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:52.900 14:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:52.900 14:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:52.900 14:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:52.900 14:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:52.900 14:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.159 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzZhYzU5MDA2MTFiMWQyMzk1MDAxNmZmMzVkZjA3ZjHc6Dc9: --dhchap-ctrl-secret DHHC-1:02:YmNlZTU4NDE3MWRiNGJlMDMzYjRlZjUxZjg4MGZlZTE2NzM3ZWQ1OTI4ODgwNDFiDtozkg==: 00:20:53.159 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NzZhYzU5MDA2MTFiMWQyMzk1MDAxNmZmMzVkZjA3ZjHc6Dc9: --dhchap-ctrl-secret DHHC-1:02:YmNlZTU4NDE3MWRiNGJlMDMzYjRlZjUxZjg4MGZlZTE2NzM3ZWQ1OTI4ODgwNDFiDtozkg==: 00:20:54.092 14:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:54.092 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:54.093 14:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:54.093 14:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.093 14:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.093 14:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.093 14:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:54.093 14:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:54.093 14:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:54.658 14:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:20:54.658 14:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:54.658 14:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:54.658 14:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:54.658 14:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:54.658 14:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:54.658 14:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:54.658 14:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.658 14:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.658 14:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.658 14:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:54.658 14:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:54.658 14:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:54.916 00:20:54.916 14:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:54.916 14:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:54.916 14:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.174 14:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.174 14:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.174 14:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.174 14:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.174 14:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.174 14:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:55.174 { 00:20:55.174 "cntlid": 61, 00:20:55.174 "qid": 0, 00:20:55.174 "state": "enabled", 00:20:55.174 "thread": "nvmf_tgt_poll_group_000", 00:20:55.174 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:55.174 "listen_address": { 00:20:55.174 "trtype": "TCP", 00:20:55.174 "adrfam": "IPv4", 00:20:55.174 "traddr": "10.0.0.2", 00:20:55.174 "trsvcid": "4420" 00:20:55.174 }, 00:20:55.174 "peer_address": { 00:20:55.174 "trtype": "TCP", 00:20:55.174 "adrfam": "IPv4", 00:20:55.174 "traddr": "10.0.0.1", 00:20:55.174 "trsvcid": "54190" 00:20:55.174 }, 00:20:55.174 "auth": { 00:20:55.174 "state": "completed", 00:20:55.174 "digest": "sha384", 00:20:55.174 "dhgroup": "ffdhe2048" 00:20:55.174 } 00:20:55.174 } 00:20:55.174 ]' 00:20:55.174 14:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:55.174 14:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:55.174 14:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:55.174 14:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:55.174 14:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:55.174 14:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:55.174 14:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:55.174 14:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:55.740 14:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmQ0YzhiNTA0N2FiMjk2OGQ1MzZlN2ZkNmViNGI3ZWRjNjJhYjA1MDcwYTRmMzIyyBZewQ==: --dhchap-ctrl-secret DHHC-1:01:Zjc2Y2UyNTBmYTNiMGRmNzEzZDQwYWEzMjAzMDcwYmbdJk/J: 00:20:55.740 14:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NmQ0YzhiNTA0N2FiMjk2OGQ1MzZlN2ZkNmViNGI3ZWRjNjJhYjA1MDcwYTRmMzIyyBZewQ==: --dhchap-ctrl-secret DHHC-1:01:Zjc2Y2UyNTBmYTNiMGRmNzEzZDQwYWEzMjAzMDcwYmbdJk/J: 00:20:56.674 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.674 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.674 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:56.674 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.674 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.674 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.674 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:56.674 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:56.674 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:56.932 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:20:56.932 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:56.932 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:56.932 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:56.932 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:56.932 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:56.932 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:56.932 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.932 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.932 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.932 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:56.932 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:56.932 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:57.190 00:20:57.190 14:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:57.190 14:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:20:57.190 14:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.448 14:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.448 14:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.448 14:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.448 14:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.448 14:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.448 14:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:57.448 { 00:20:57.448 "cntlid": 63, 00:20:57.448 "qid": 0, 00:20:57.448 "state": "enabled", 00:20:57.448 "thread": "nvmf_tgt_poll_group_000", 00:20:57.448 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:57.448 "listen_address": { 00:20:57.448 "trtype": "TCP", 00:20:57.448 "adrfam": "IPv4", 00:20:57.448 "traddr": "10.0.0.2", 00:20:57.448 "trsvcid": "4420" 00:20:57.448 }, 00:20:57.448 "peer_address": { 00:20:57.448 "trtype": "TCP", 00:20:57.448 "adrfam": "IPv4", 00:20:57.448 "traddr": "10.0.0.1", 00:20:57.448 "trsvcid": "54222" 00:20:57.448 }, 00:20:57.448 "auth": { 00:20:57.448 "state": "completed", 00:20:57.448 "digest": "sha384", 00:20:57.448 "dhgroup": "ffdhe2048" 00:20:57.448 } 00:20:57.448 } 00:20:57.448 ]' 00:20:57.448 14:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:57.448 14:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:57.448 14:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:57.708 14:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:57.708 14:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:57.708 14:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.708 14:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.708 14:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:57.965 14:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmI0YTZiMjg0Mzc2NTY5ZjIyODQ2OWVmYWRhOWZmYmI5Mjg1NWUwZGFlN2NhNWJhOTNkNWRjOTZlMTk4OTQ3MYxiaOU=: 00:20:57.965 14:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NmI0YTZiMjg0Mzc2NTY5ZjIyODQ2OWVmYWRhOWZmYmI5Mjg1NWUwZGFlN2NhNWJhOTNkNWRjOTZlMTk4OTQ3MYxiaOU=: 00:20:58.899 14:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:20:58.899 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:58.899 14:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:58.899 14:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.899 14:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.899 14:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.899 14:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:58.899 14:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:58.899 14:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:58.899 14:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:59.157 14:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:20:59.157 14:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:59.157 14:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:59.157 14:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:59.157 14:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:59.157 14:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:59.157 14:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.157 14:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.157 14:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.157 14:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.157 14:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.157 14:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.157 14:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.722 
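[editor's note] The trace immediately above is one full setup pass of connect_authenticate for sha384/ffdhe3072 with key0. A minimal sketch of the same RPC sequence, assuming the target (default RPC socket) and the host application (-s /var/tmp/host.sock) started earlier in the run are still up, that keys named key0/ckey0 were registered earlier in auth.sh (not shown in this excerpt), and with the long rpc.py workspace path shortened to scripts/rpc.py:

    # host side (-s /var/tmp/host.sock): pin the digest/dhgroup for this iteration
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

    # target side (default RPC socket): allow the host NQN with its key pair
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # host side again: attach a controller; DH-HMAC-CHAP runs during the CONNECT
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

Every RPC name and flag here appears verbatim in the trace above; only the layout and comments are added.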
00:20:59.722 14:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:59.722 14:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:59.722 14:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.987 14:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.987 14:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.987 14:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.987 14:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.987 14:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.987 14:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:59.987 { 00:20:59.987 "cntlid": 65, 00:20:59.987 "qid": 0, 00:20:59.987 "state": "enabled", 00:20:59.987 "thread": "nvmf_tgt_poll_group_000", 00:20:59.987 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:59.987 "listen_address": { 00:20:59.988 "trtype": "TCP", 00:20:59.988 "adrfam": "IPv4", 00:20:59.988 "traddr": "10.0.0.2", 00:20:59.988 "trsvcid": "4420" 00:20:59.988 }, 00:20:59.988 "peer_address": { 00:20:59.988 "trtype": "TCP", 00:20:59.988 "adrfam": "IPv4", 00:20:59.988 "traddr": "10.0.0.1", 00:20:59.988 "trsvcid": "54250" 00:20:59.988 }, 00:20:59.988 "auth": { 00:20:59.988 "state": "completed", 00:20:59.988 "digest": "sha384", 00:20:59.988 "dhgroup": "ffdhe3072" 00:20:59.988 } 00:20:59.988 } 00:20:59.988 ]' 00:20:59.988 14:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:59.988 14:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:59.988 14:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:59.988 14:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:59.988 14:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:59.988 14:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:59.988 14:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.988 14:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.246 14:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGY4ZjExYzE4MjdjZjZkMWUyYzNlYTNkMmY2YjllODZhYjI1ZWQwNTgzY2M1MGM1q3J9vw==: --dhchap-ctrl-secret DHHC-1:03:ODY0ZTQwMjhlM2RmNTk4MDU4NTVjMTFmMmQ1NzIyOWFjMTVjZGE1YjQ4NzI0NGJmNTIxZTIyM2I1MzI4YmM2YlhOMuo=: 00:21:00.246 14:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZGY4ZjExYzE4MjdjZjZkMWUyYzNlYTNkMmY2YjllODZhYjI1ZWQwNTgzY2M1MGM1q3J9vw==: --dhchap-ctrl-secret DHHC-1:03:ODY0ZTQwMjhlM2RmNTk4MDU4NTVjMTFmMmQ1NzIyOWFjMTVjZGE1YjQ4NzI0NGJmNTIxZTIyM2I1MzI4YmM2YlhOMuo=: 00:21:01.619 14:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.619 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.619 14:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:01.619 14:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.619 14:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.619 14:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.619 14:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:01.619 14:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:01.619 14:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:01.619 14:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:21:01.619 14:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:01.619 14:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:01.619 14:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:01.619 14:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:01.619 14:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.619 14:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.619 14:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.619 14:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.619 14:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.619 14:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.619 14:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.619 14:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:02.185 00:21:02.185 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:02.185 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.185 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:02.444 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.444 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.444 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.444 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.444 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.444 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:02.444 { 00:21:02.444 "cntlid": 67, 00:21:02.444 "qid": 0, 00:21:02.444 "state": "enabled", 00:21:02.444 "thread": "nvmf_tgt_poll_group_000", 00:21:02.444 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:02.444 "listen_address": { 00:21:02.444 "trtype": "TCP", 00:21:02.444 "adrfam": "IPv4", 00:21:02.444 "traddr": "10.0.0.2", 00:21:02.444 "trsvcid": "4420" 00:21:02.444 }, 00:21:02.444 "peer_address": { 00:21:02.444 "trtype": "TCP", 00:21:02.444 "adrfam": "IPv4", 00:21:02.444 "traddr": "10.0.0.1", 00:21:02.444 "trsvcid": "54264" 00:21:02.444 }, 00:21:02.444 "auth": { 00:21:02.444 "state": "completed", 00:21:02.444 "digest": "sha384", 00:21:02.444 "dhgroup": "ffdhe3072" 00:21:02.444 } 00:21:02.444 } 00:21:02.444 ]' 00:21:02.444 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:02.444 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:02.444 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:02.444 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:02.444 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:02.444 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.444 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.444 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.728 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzZhYzU5MDA2MTFiMWQyMzk1MDAxNmZmMzVkZjA3ZjHc6Dc9: --dhchap-ctrl-secret 
DHHC-1:02:YmNlZTU4NDE3MWRiNGJlMDMzYjRlZjUxZjg4MGZlZTE2NzM3ZWQ1OTI4ODgwNDFiDtozkg==: 00:21:02.728 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NzZhYzU5MDA2MTFiMWQyMzk1MDAxNmZmMzVkZjA3ZjHc6Dc9: --dhchap-ctrl-secret DHHC-1:02:YmNlZTU4NDE3MWRiNGJlMDMzYjRlZjUxZjg4MGZlZTE2NzM3ZWQ1OTI4ODgwNDFiDtozkg==: 00:21:03.683 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.683 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.683 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:03.683 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.683 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.683 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.683 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:03.683 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:03.683 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:03.941 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:21:03.941 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:03.941 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:03.941 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:03.941 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:03.942 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:03.942 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:03.942 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.942 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.942 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.942 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:03.942 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:03.942 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:04.507 00:21:04.507 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:04.507 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:04.507 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.765 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.765 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.765 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.765 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.765 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.765 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:04.765 { 00:21:04.765 "cntlid": 69, 00:21:04.765 "qid": 0, 00:21:04.765 "state": "enabled", 00:21:04.765 "thread": "nvmf_tgt_poll_group_000", 00:21:04.765 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:04.765 "listen_address": { 00:21:04.765 "trtype": "TCP", 00:21:04.765 "adrfam": "IPv4", 00:21:04.765 "traddr": "10.0.0.2", 00:21:04.765 "trsvcid": "4420" 00:21:04.765 }, 00:21:04.765 "peer_address": { 00:21:04.765 "trtype": "TCP", 00:21:04.765 "adrfam": "IPv4", 00:21:04.765 "traddr": "10.0.0.1", 00:21:04.765 "trsvcid": "36764" 00:21:04.765 }, 00:21:04.765 "auth": { 00:21:04.765 "state": "completed", 00:21:04.765 "digest": "sha384", 00:21:04.765 "dhgroup": "ffdhe3072" 00:21:04.765 } 00:21:04.765 } 00:21:04.765 ]' 00:21:04.765 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:04.765 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:04.765 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:04.766 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:04.766 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:04.766 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.766 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.766 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:21:05.024 14:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmQ0YzhiNTA0N2FiMjk2OGQ1MzZlN2ZkNmViNGI3ZWRjNjJhYjA1MDcwYTRmMzIyyBZewQ==: --dhchap-ctrl-secret DHHC-1:01:Zjc2Y2UyNTBmYTNiMGRmNzEzZDQwYWEzMjAzMDcwYmbdJk/J: 00:21:05.024 14:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NmQ0YzhiNTA0N2FiMjk2OGQ1MzZlN2ZkNmViNGI3ZWRjNjJhYjA1MDcwYTRmMzIyyBZewQ==: --dhchap-ctrl-secret DHHC-1:01:Zjc2Y2UyNTBmYTNiMGRmNzEzZDQwYWEzMjAzMDcwYmbdJk/J: 00:21:05.958 14:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:05.958 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:05.958 14:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:05.958 14:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.958 14:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.958 14:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.958 14:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:05.958 14:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:05.958 14:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:06.216 14:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:21:06.216 14:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:06.216 14:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:06.216 14:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:06.216 14:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:06.216 14:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.216 14:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:06.216 14:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.216 14:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.216 14:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.216 14:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
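[editor's note] The key3 pass above uses --dhchap-key key3 without a --dhchap-ctrlr-key because connect_authenticate builds that argument with bash's ${var:+...} expansion (target/auth.sh@68) and the ckeys entry for index 3 is empty. A small standalone illustration of the idiom, with hypothetical array contents:

    # hypothetical contents; index 3 intentionally left empty, as in this test pass
    ckeys=("ckey0" "ckey1" "ckey2" "")
    keyid=3
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo --dhchap-key "key$keyid" "${ckey[@]}"
    # with keyid=3 this prints only "--dhchap-key key3"; with keyid 0..2 it also
    # emits "--dhchap-ctrlr-key ckeyN", matching the add_host/attach calls above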
00:21:06.216 14:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:06.216 14:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:06.781 00:21:06.781 14:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:06.781 14:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:06.781 14:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.040 14:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.040 14:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:07.040 14:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.040 14:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.040 14:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.040 14:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:07.040 { 00:21:07.040 "cntlid": 71, 00:21:07.040 "qid": 0, 00:21:07.040 "state": "enabled", 00:21:07.040 "thread": "nvmf_tgt_poll_group_000", 00:21:07.040 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:07.040 "listen_address": { 00:21:07.040 "trtype": "TCP", 00:21:07.040 "adrfam": "IPv4", 00:21:07.040 "traddr": "10.0.0.2", 00:21:07.040 "trsvcid": "4420" 00:21:07.040 }, 00:21:07.040 "peer_address": { 00:21:07.040 "trtype": "TCP", 00:21:07.040 "adrfam": "IPv4", 00:21:07.040 "traddr": "10.0.0.1", 00:21:07.040 "trsvcid": "36808" 00:21:07.040 }, 00:21:07.040 "auth": { 00:21:07.040 "state": "completed", 00:21:07.040 "digest": "sha384", 00:21:07.040 "dhgroup": "ffdhe3072" 00:21:07.040 } 00:21:07.040 } 00:21:07.040 ]' 00:21:07.040 14:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:07.040 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:07.040 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:07.040 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:07.040 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:07.040 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.040 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.040 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.606 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmI0YTZiMjg0Mzc2NTY5ZjIyODQ2OWVmYWRhOWZmYmI5Mjg1NWUwZGFlN2NhNWJhOTNkNWRjOTZlMTk4OTQ3MYxiaOU=: 00:21:07.606 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NmI0YTZiMjg0Mzc2NTY5ZjIyODQ2OWVmYWRhOWZmYmI5Mjg1NWUwZGFlN2NhNWJhOTNkNWRjOTZlMTk4OTQ3MYxiaOU=: 00:21:08.539 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.539 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.539 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:08.539 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.539 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.539 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.539 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:08.539 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:08.539 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:08.539 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:08.798 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:21:08.798 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:08.798 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:08.798 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:08.798 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:08.798 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:08.798 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.798 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.798 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.798 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
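Between attach and detach the script cross-checks, on the target side, what was actually negotiated for the new qpair. A minimal sketch of that verification step for the ffdhe4096/sha384 run traced below (stashing the JSON in a file for readability; the script itself keeps it in a shell variable via rpc_cmd):

    # target side: dump the subsystem's active qpairs and check the negotiated auth parameters
    scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 > qpairs.json
    [[ "$(jq -r '.[0].auth.digest'  qpairs.json)" == sha384    ]]
    [[ "$(jq -r '.[0].auth.dhgroup' qpairs.json)" == ffdhe4096 ]]
    [[ "$(jq -r '.[0].auth.state'   qpairs.json)" == completed ]]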
00:21:08.798 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.798 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.798 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:09.364 00:21:09.364 14:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:09.364 14:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:09.364 14:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.622 14:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.622 14:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.622 14:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.622 14:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.622 14:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.622 14:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:09.622 { 00:21:09.622 "cntlid": 73, 00:21:09.622 "qid": 0, 00:21:09.622 "state": "enabled", 00:21:09.622 "thread": "nvmf_tgt_poll_group_000", 00:21:09.622 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:09.622 "listen_address": { 00:21:09.622 "trtype": "TCP", 00:21:09.622 "adrfam": "IPv4", 00:21:09.622 "traddr": "10.0.0.2", 00:21:09.622 "trsvcid": "4420" 00:21:09.622 }, 00:21:09.622 "peer_address": { 00:21:09.622 "trtype": "TCP", 00:21:09.622 "adrfam": "IPv4", 00:21:09.622 "traddr": "10.0.0.1", 00:21:09.622 "trsvcid": "36818" 00:21:09.622 }, 00:21:09.622 "auth": { 00:21:09.622 "state": "completed", 00:21:09.622 "digest": "sha384", 00:21:09.622 "dhgroup": "ffdhe4096" 00:21:09.622 } 00:21:09.622 } 00:21:09.622 ]' 00:21:09.622 14:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:09.622 14:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:09.622 14:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:09.622 14:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:09.622 14:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:09.622 14:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.622 
14:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.622 14:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.880 14:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGY4ZjExYzE4MjdjZjZkMWUyYzNlYTNkMmY2YjllODZhYjI1ZWQwNTgzY2M1MGM1q3J9vw==: --dhchap-ctrl-secret DHHC-1:03:ODY0ZTQwMjhlM2RmNTk4MDU4NTVjMTFmMmQ1NzIyOWFjMTVjZGE1YjQ4NzI0NGJmNTIxZTIyM2I1MzI4YmM2YlhOMuo=: 00:21:09.880 14:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZGY4ZjExYzE4MjdjZjZkMWUyYzNlYTNkMmY2YjllODZhYjI1ZWQwNTgzY2M1MGM1q3J9vw==: --dhchap-ctrl-secret DHHC-1:03:ODY0ZTQwMjhlM2RmNTk4MDU4NTVjMTFmMmQ1NzIyOWFjMTVjZGE1YjQ4NzI0NGJmNTIxZTIyM2I1MzI4YmM2YlhOMuo=: 00:21:10.814 14:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.814 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.814 14:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:10.814 14:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.814 14:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.814 14:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.814 14:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:10.814 14:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:10.814 14:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:11.072 14:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:21:11.072 14:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:11.072 14:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:11.072 14:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:11.072 14:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:11.072 14:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.072 14:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.072 14:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.072 14:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.072 14:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.072 14:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.072 14:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.330 14:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.589 00:21:11.589 14:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:11.589 14:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.589 14:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:11.847 14:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.847 14:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.847 14:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.847 14:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.847 14:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.847 14:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:11.847 { 00:21:11.847 "cntlid": 75, 00:21:11.847 "qid": 0, 00:21:11.848 "state": "enabled", 00:21:11.848 "thread": "nvmf_tgt_poll_group_000", 00:21:11.848 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:11.848 "listen_address": { 00:21:11.848 "trtype": "TCP", 00:21:11.848 "adrfam": "IPv4", 00:21:11.848 "traddr": "10.0.0.2", 00:21:11.848 "trsvcid": "4420" 00:21:11.848 }, 00:21:11.848 "peer_address": { 00:21:11.848 "trtype": "TCP", 00:21:11.848 "adrfam": "IPv4", 00:21:11.848 "traddr": "10.0.0.1", 00:21:11.848 "trsvcid": "36838" 00:21:11.848 }, 00:21:11.848 "auth": { 00:21:11.848 "state": "completed", 00:21:11.848 "digest": "sha384", 00:21:11.848 "dhgroup": "ffdhe4096" 00:21:11.848 } 00:21:11.848 } 00:21:11.848 ]' 00:21:11.848 14:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:11.848 14:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:11.848 14:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:11.848 14:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:21:11.848 14:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:12.105 14:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:12.105 14:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:12.105 14:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.363 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzZhYzU5MDA2MTFiMWQyMzk1MDAxNmZmMzVkZjA3ZjHc6Dc9: --dhchap-ctrl-secret DHHC-1:02:YmNlZTU4NDE3MWRiNGJlMDMzYjRlZjUxZjg4MGZlZTE2NzM3ZWQ1OTI4ODgwNDFiDtozkg==: 00:21:12.363 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NzZhYzU5MDA2MTFiMWQyMzk1MDAxNmZmMzVkZjA3ZjHc6Dc9: --dhchap-ctrl-secret DHHC-1:02:YmNlZTU4NDE3MWRiNGJlMDMzYjRlZjUxZjg4MGZlZTE2NzM3ZWQ1OTI4ODgwNDFiDtozkg==: 00:21:13.298 14:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.298 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.298 14:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:13.298 14:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.298 14:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.298 14:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.298 14:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:13.298 14:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:13.298 14:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:13.556 14:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:21:13.556 14:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:13.556 14:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:13.556 14:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:13.556 14:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:13.556 14:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.557 14:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:13.557 14:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.557 14:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.557 14:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.557 14:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:13.557 14:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:13.557 14:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:14.123 00:21:14.123 14:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:14.123 14:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:14.123 14:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:14.381 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.381 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:14.381 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.381 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.381 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.381 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:14.381 { 00:21:14.381 "cntlid": 77, 00:21:14.381 "qid": 0, 00:21:14.381 "state": "enabled", 00:21:14.381 "thread": "nvmf_tgt_poll_group_000", 00:21:14.381 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:14.381 "listen_address": { 00:21:14.381 "trtype": "TCP", 00:21:14.381 "adrfam": "IPv4", 00:21:14.381 "traddr": "10.0.0.2", 00:21:14.381 "trsvcid": "4420" 00:21:14.381 }, 00:21:14.381 "peer_address": { 00:21:14.381 "trtype": "TCP", 00:21:14.381 "adrfam": "IPv4", 00:21:14.381 "traddr": "10.0.0.1", 00:21:14.381 "trsvcid": "56140" 00:21:14.381 }, 00:21:14.381 "auth": { 00:21:14.381 "state": "completed", 00:21:14.381 "digest": "sha384", 00:21:14.381 "dhgroup": "ffdhe4096" 00:21:14.381 } 00:21:14.381 } 00:21:14.381 ]' 00:21:14.381 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:14.381 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:14.381 14:37:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:14.381 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:14.381 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:14.381 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:14.381 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:14.381 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.639 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmQ0YzhiNTA0N2FiMjk2OGQ1MzZlN2ZkNmViNGI3ZWRjNjJhYjA1MDcwYTRmMzIyyBZewQ==: --dhchap-ctrl-secret DHHC-1:01:Zjc2Y2UyNTBmYTNiMGRmNzEzZDQwYWEzMjAzMDcwYmbdJk/J: 00:21:14.639 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NmQ0YzhiNTA0N2FiMjk2OGQ1MzZlN2ZkNmViNGI3ZWRjNjJhYjA1MDcwYTRmMzIyyBZewQ==: --dhchap-ctrl-secret DHHC-1:01:Zjc2Y2UyNTBmYTNiMGRmNzEzZDQwYWEzMjAzMDcwYmbdJk/J: 00:21:15.574 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.574 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.574 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:15.574 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.574 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.574 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.574 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:15.574 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:15.574 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:16.139 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:21:16.139 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:16.139 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:16.139 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:16.139 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:16.139 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:16.139 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:16.139 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.139 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.139 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.139 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:16.139 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:16.139 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:16.397 00:21:16.397 14:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:16.397 14:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:16.397 14:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.655 14:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.655 14:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.655 14:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.655 14:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.655 14:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.655 14:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:16.655 { 00:21:16.655 "cntlid": 79, 00:21:16.655 "qid": 0, 00:21:16.655 "state": "enabled", 00:21:16.655 "thread": "nvmf_tgt_poll_group_000", 00:21:16.655 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:16.655 "listen_address": { 00:21:16.655 "trtype": "TCP", 00:21:16.655 "adrfam": "IPv4", 00:21:16.655 "traddr": "10.0.0.2", 00:21:16.655 "trsvcid": "4420" 00:21:16.655 }, 00:21:16.655 "peer_address": { 00:21:16.655 "trtype": "TCP", 00:21:16.655 "adrfam": "IPv4", 00:21:16.655 "traddr": "10.0.0.1", 00:21:16.655 "trsvcid": "56164" 00:21:16.655 }, 00:21:16.655 "auth": { 00:21:16.655 "state": "completed", 00:21:16.655 "digest": "sha384", 00:21:16.655 "dhgroup": "ffdhe4096" 00:21:16.655 } 00:21:16.655 } 00:21:16.655 ]' 00:21:16.655 14:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:16.655 14:37:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:16.655 14:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:16.655 14:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:16.655 14:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:16.913 14:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.913 14:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.913 14:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:17.171 14:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmI0YTZiMjg0Mzc2NTY5ZjIyODQ2OWVmYWRhOWZmYmI5Mjg1NWUwZGFlN2NhNWJhOTNkNWRjOTZlMTk4OTQ3MYxiaOU=: 00:21:17.171 14:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NmI0YTZiMjg0Mzc2NTY5ZjIyODQ2OWVmYWRhOWZmYmI5Mjg1NWUwZGFlN2NhNWJhOTNkNWRjOTZlMTk4OTQ3MYxiaOU=: 00:21:18.105 14:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:18.105 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:18.105 14:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:18.105 14:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.105 14:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.105 14:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.105 14:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:18.105 14:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:18.105 14:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:18.105 14:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:18.363 14:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:21:18.363 14:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:18.363 14:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:18.363 14:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:18.363 14:37:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:18.363 14:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:18.363 14:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:18.363 14:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.363 14:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.363 14:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.363 14:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:18.363 14:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:18.363 14:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:18.930 00:21:18.930 14:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:18.930 14:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:18.930 14:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.188 14:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.188 14:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:19.188 14:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.188 14:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.188 14:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.188 14:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:19.188 { 00:21:19.188 "cntlid": 81, 00:21:19.188 "qid": 0, 00:21:19.188 "state": "enabled", 00:21:19.188 "thread": "nvmf_tgt_poll_group_000", 00:21:19.188 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:19.188 "listen_address": { 00:21:19.188 "trtype": "TCP", 00:21:19.188 "adrfam": "IPv4", 00:21:19.188 "traddr": "10.0.0.2", 00:21:19.188 "trsvcid": "4420" 00:21:19.188 }, 00:21:19.188 "peer_address": { 00:21:19.188 "trtype": "TCP", 00:21:19.188 "adrfam": "IPv4", 00:21:19.188 "traddr": "10.0.0.1", 00:21:19.188 "trsvcid": "56190" 00:21:19.188 }, 00:21:19.188 "auth": { 00:21:19.188 "state": "completed", 00:21:19.188 "digest": 
"sha384", 00:21:19.188 "dhgroup": "ffdhe6144" 00:21:19.188 } 00:21:19.188 } 00:21:19.188 ]' 00:21:19.188 14:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:19.188 14:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:19.188 14:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:19.188 14:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:19.188 14:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:19.446 14:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:19.446 14:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.446 14:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:19.704 14:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGY4ZjExYzE4MjdjZjZkMWUyYzNlYTNkMmY2YjllODZhYjI1ZWQwNTgzY2M1MGM1q3J9vw==: --dhchap-ctrl-secret DHHC-1:03:ODY0ZTQwMjhlM2RmNTk4MDU4NTVjMTFmMmQ1NzIyOWFjMTVjZGE1YjQ4NzI0NGJmNTIxZTIyM2I1MzI4YmM2YlhOMuo=: 00:21:19.704 14:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZGY4ZjExYzE4MjdjZjZkMWUyYzNlYTNkMmY2YjllODZhYjI1ZWQwNTgzY2M1MGM1q3J9vw==: --dhchap-ctrl-secret DHHC-1:03:ODY0ZTQwMjhlM2RmNTk4MDU4NTVjMTFmMmQ1NzIyOWFjMTVjZGE1YjQ4NzI0NGJmNTIxZTIyM2I1MzI4YmM2YlhOMuo=: 00:21:20.651 14:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:20.651 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:20.651 14:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:20.651 14:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.651 14:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.651 14:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.651 14:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:20.651 14:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:20.651 14:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:20.909 14:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:21:20.909 14:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:20.909 14:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:20.909 14:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:20.909 14:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:20.909 14:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:20.909 14:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:20.909 14:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.909 14:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.909 14:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.909 14:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:20.909 14:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:20.909 14:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.477 00:21:21.749 14:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:21.750 14:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:21.750 14:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:22.015 14:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.015 14:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:22.015 14:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.015 14:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.015 14:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.015 14:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:22.015 { 00:21:22.015 "cntlid": 83, 00:21:22.015 "qid": 0, 00:21:22.015 "state": "enabled", 00:21:22.015 "thread": "nvmf_tgt_poll_group_000", 00:21:22.015 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:22.015 "listen_address": { 00:21:22.015 "trtype": "TCP", 00:21:22.015 "adrfam": "IPv4", 00:21:22.015 "traddr": "10.0.0.2", 00:21:22.015 
"trsvcid": "4420" 00:21:22.015 }, 00:21:22.015 "peer_address": { 00:21:22.015 "trtype": "TCP", 00:21:22.015 "adrfam": "IPv4", 00:21:22.015 "traddr": "10.0.0.1", 00:21:22.015 "trsvcid": "56196" 00:21:22.015 }, 00:21:22.015 "auth": { 00:21:22.015 "state": "completed", 00:21:22.015 "digest": "sha384", 00:21:22.015 "dhgroup": "ffdhe6144" 00:21:22.015 } 00:21:22.015 } 00:21:22.015 ]' 00:21:22.015 14:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:22.015 14:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:22.015 14:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:22.015 14:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:22.015 14:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:22.015 14:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:22.015 14:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.015 14:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:22.273 14:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzZhYzU5MDA2MTFiMWQyMzk1MDAxNmZmMzVkZjA3ZjHc6Dc9: --dhchap-ctrl-secret DHHC-1:02:YmNlZTU4NDE3MWRiNGJlMDMzYjRlZjUxZjg4MGZlZTE2NzM3ZWQ1OTI4ODgwNDFiDtozkg==: 00:21:22.273 14:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NzZhYzU5MDA2MTFiMWQyMzk1MDAxNmZmMzVkZjA3ZjHc6Dc9: --dhchap-ctrl-secret DHHC-1:02:YmNlZTU4NDE3MWRiNGJlMDMzYjRlZjUxZjg4MGZlZTE2NzM3ZWQ1OTI4ODgwNDFiDtozkg==: 00:21:23.210 14:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.210 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.210 14:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:23.210 14:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.210 14:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.469 14:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.469 14:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:23.469 14:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:23.469 14:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:23.727 
14:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:21:23.727 14:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:23.727 14:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:23.727 14:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:23.727 14:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:23.727 14:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:23.727 14:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:23.727 14:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.727 14:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.727 14:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.727 14:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:23.727 14:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:23.727 14:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:24.293 00:21:24.293 14:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:24.293 14:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.293 14:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:24.551 14:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.551 14:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.552 14:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.552 14:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.552 14:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.552 14:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:24.552 { 00:21:24.552 "cntlid": 85, 00:21:24.552 "qid": 0, 00:21:24.552 "state": "enabled", 00:21:24.552 "thread": "nvmf_tgt_poll_group_000", 00:21:24.552 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:24.552 "listen_address": { 00:21:24.552 "trtype": "TCP", 00:21:24.552 "adrfam": "IPv4", 00:21:24.552 "traddr": "10.0.0.2", 00:21:24.552 "trsvcid": "4420" 00:21:24.552 }, 00:21:24.552 "peer_address": { 00:21:24.552 "trtype": "TCP", 00:21:24.552 "adrfam": "IPv4", 00:21:24.552 "traddr": "10.0.0.1", 00:21:24.552 "trsvcid": "36402" 00:21:24.552 }, 00:21:24.552 "auth": { 00:21:24.552 "state": "completed", 00:21:24.552 "digest": "sha384", 00:21:24.552 "dhgroup": "ffdhe6144" 00:21:24.552 } 00:21:24.552 } 00:21:24.552 ]' 00:21:24.552 14:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:24.552 14:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:24.552 14:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:24.552 14:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:24.552 14:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:24.552 14:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:24.552 14:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.552 14:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:24.836 14:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmQ0YzhiNTA0N2FiMjk2OGQ1MzZlN2ZkNmViNGI3ZWRjNjJhYjA1MDcwYTRmMzIyyBZewQ==: --dhchap-ctrl-secret DHHC-1:01:Zjc2Y2UyNTBmYTNiMGRmNzEzZDQwYWEzMjAzMDcwYmbdJk/J: 00:21:24.836 14:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NmQ0YzhiNTA0N2FiMjk2OGQ1MzZlN2ZkNmViNGI3ZWRjNjJhYjA1MDcwYTRmMzIyyBZewQ==: --dhchap-ctrl-secret DHHC-1:01:Zjc2Y2UyNTBmYTNiMGRmNzEzZDQwYWEzMjAzMDcwYmbdJk/J: 00:21:25.810 14:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.810 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.810 14:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:25.810 14:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.810 14:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.810 14:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.810 14:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:25.810 14:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:25.810 14:37:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:26.068 14:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:21:26.068 14:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:26.068 14:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:26.068 14:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:26.068 14:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:26.068 14:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:26.068 14:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:26.068 14:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.068 14:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.068 14:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.068 14:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:26.068 14:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:26.068 14:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:26.636 00:21:26.636 14:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:26.636 14:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:26.636 14:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.204 14:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.204 14:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:27.204 14:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.204 14:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.204 14:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.204 14:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:27.204 { 00:21:27.204 "cntlid": 87, 
00:21:27.204 "qid": 0, 00:21:27.204 "state": "enabled", 00:21:27.204 "thread": "nvmf_tgt_poll_group_000", 00:21:27.204 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:27.204 "listen_address": { 00:21:27.204 "trtype": "TCP", 00:21:27.204 "adrfam": "IPv4", 00:21:27.204 "traddr": "10.0.0.2", 00:21:27.204 "trsvcid": "4420" 00:21:27.204 }, 00:21:27.204 "peer_address": { 00:21:27.204 "trtype": "TCP", 00:21:27.204 "adrfam": "IPv4", 00:21:27.204 "traddr": "10.0.0.1", 00:21:27.204 "trsvcid": "36424" 00:21:27.204 }, 00:21:27.204 "auth": { 00:21:27.204 "state": "completed", 00:21:27.204 "digest": "sha384", 00:21:27.204 "dhgroup": "ffdhe6144" 00:21:27.204 } 00:21:27.204 } 00:21:27.204 ]' 00:21:27.204 14:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:27.204 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:27.204 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:27.204 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:27.204 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:27.204 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.204 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.204 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.462 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmI0YTZiMjg0Mzc2NTY5ZjIyODQ2OWVmYWRhOWZmYmI5Mjg1NWUwZGFlN2NhNWJhOTNkNWRjOTZlMTk4OTQ3MYxiaOU=: 00:21:27.462 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NmI0YTZiMjg0Mzc2NTY5ZjIyODQ2OWVmYWRhOWZmYmI5Mjg1NWUwZGFlN2NhNWJhOTNkNWRjOTZlMTk4OTQ3MYxiaOU=: 00:21:28.408 14:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.408 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.409 14:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:28.409 14:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.409 14:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.409 14:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.409 14:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:28.409 14:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:28.409 14:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:28.409 14:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:28.667 14:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:21:28.667 14:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:28.667 14:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:28.667 14:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:28.667 14:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:28.667 14:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.667 14:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.667 14:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.667 14:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.667 14:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.667 14:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.667 14:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.667 14:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:29.603 00:21:29.603 14:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:29.603 14:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:29.603 14:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.861 14:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.861 14:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.861 14:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.861 14:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.861 14:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.861 14:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:29.861 { 00:21:29.861 "cntlid": 89, 00:21:29.861 "qid": 0, 00:21:29.861 "state": "enabled", 00:21:29.861 "thread": "nvmf_tgt_poll_group_000", 00:21:29.861 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:29.861 "listen_address": { 00:21:29.861 "trtype": "TCP", 00:21:29.861 "adrfam": "IPv4", 00:21:29.861 "traddr": "10.0.0.2", 00:21:29.861 "trsvcid": "4420" 00:21:29.861 }, 00:21:29.861 "peer_address": { 00:21:29.861 "trtype": "TCP", 00:21:29.861 "adrfam": "IPv4", 00:21:29.861 "traddr": "10.0.0.1", 00:21:29.861 "trsvcid": "36458" 00:21:29.861 }, 00:21:29.861 "auth": { 00:21:29.861 "state": "completed", 00:21:29.861 "digest": "sha384", 00:21:29.861 "dhgroup": "ffdhe8192" 00:21:29.861 } 00:21:29.861 } 00:21:29.861 ]' 00:21:29.861 14:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:29.861 14:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:29.861 14:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:30.119 14:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:30.119 14:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:30.119 14:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:30.119 14:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:30.119 14:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:30.377 14:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGY4ZjExYzE4MjdjZjZkMWUyYzNlYTNkMmY2YjllODZhYjI1ZWQwNTgzY2M1MGM1q3J9vw==: --dhchap-ctrl-secret DHHC-1:03:ODY0ZTQwMjhlM2RmNTk4MDU4NTVjMTFmMmQ1NzIyOWFjMTVjZGE1YjQ4NzI0NGJmNTIxZTIyM2I1MzI4YmM2YlhOMuo=: 00:21:30.377 14:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZGY4ZjExYzE4MjdjZjZkMWUyYzNlYTNkMmY2YjllODZhYjI1ZWQwNTgzY2M1MGM1q3J9vw==: --dhchap-ctrl-secret DHHC-1:03:ODY0ZTQwMjhlM2RmNTk4MDU4NTVjMTFmMmQ1NzIyOWFjMTVjZGE1YjQ4NzI0NGJmNTIxZTIyM2I1MzI4YmM2YlhOMuo=: 00:21:31.313 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:31.313 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:31.313 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:31.313 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.313 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.313 14:37:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.313 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:31.313 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:31.313 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:31.570 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:21:31.570 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:31.570 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:31.570 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:31.570 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:31.570 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:31.570 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.570 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.570 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.570 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.570 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.570 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.570 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:32.504 00:21:32.504 14:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:32.504 14:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:32.504 14:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:32.763 14:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.763 14:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:21:32.763 14:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.763 14:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.021 14:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.021 14:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:33.021 { 00:21:33.021 "cntlid": 91, 00:21:33.021 "qid": 0, 00:21:33.021 "state": "enabled", 00:21:33.021 "thread": "nvmf_tgt_poll_group_000", 00:21:33.021 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:33.021 "listen_address": { 00:21:33.021 "trtype": "TCP", 00:21:33.021 "adrfam": "IPv4", 00:21:33.021 "traddr": "10.0.0.2", 00:21:33.021 "trsvcid": "4420" 00:21:33.021 }, 00:21:33.021 "peer_address": { 00:21:33.021 "trtype": "TCP", 00:21:33.021 "adrfam": "IPv4", 00:21:33.021 "traddr": "10.0.0.1", 00:21:33.021 "trsvcid": "36478" 00:21:33.021 }, 00:21:33.021 "auth": { 00:21:33.021 "state": "completed", 00:21:33.021 "digest": "sha384", 00:21:33.021 "dhgroup": "ffdhe8192" 00:21:33.021 } 00:21:33.021 } 00:21:33.021 ]' 00:21:33.021 14:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:33.021 14:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:33.021 14:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:33.021 14:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:33.021 14:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:33.021 14:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:33.021 14:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.022 14:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:33.280 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzZhYzU5MDA2MTFiMWQyMzk1MDAxNmZmMzVkZjA3ZjHc6Dc9: --dhchap-ctrl-secret DHHC-1:02:YmNlZTU4NDE3MWRiNGJlMDMzYjRlZjUxZjg4MGZlZTE2NzM3ZWQ1OTI4ODgwNDFiDtozkg==: 00:21:33.280 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NzZhYzU5MDA2MTFiMWQyMzk1MDAxNmZmMzVkZjA3ZjHc6Dc9: --dhchap-ctrl-secret DHHC-1:02:YmNlZTU4NDE3MWRiNGJlMDMzYjRlZjUxZjg4MGZlZTE2NzM3ZWQ1OTI4ODgwNDFiDtozkg==: 00:21:34.216 14:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:34.216 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:34.216 14:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:34.216 14:37:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.216 14:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.216 14:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.216 14:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:34.216 14:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:34.216 14:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:34.475 14:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:21:34.475 14:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:34.475 14:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:34.475 14:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:34.475 14:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:34.475 14:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:34.475 14:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:34.475 14:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.475 14:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.475 14:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.475 14:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:34.475 14:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:34.475 14:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:35.411 00:21:35.411 14:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:35.411 14:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:35.411 14:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:35.671 14:37:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.671 14:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:35.671 14:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.671 14:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.929 14:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.929 14:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:35.929 { 00:21:35.929 "cntlid": 93, 00:21:35.929 "qid": 0, 00:21:35.929 "state": "enabled", 00:21:35.929 "thread": "nvmf_tgt_poll_group_000", 00:21:35.929 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:35.929 "listen_address": { 00:21:35.929 "trtype": "TCP", 00:21:35.929 "adrfam": "IPv4", 00:21:35.929 "traddr": "10.0.0.2", 00:21:35.929 "trsvcid": "4420" 00:21:35.929 }, 00:21:35.929 "peer_address": { 00:21:35.929 "trtype": "TCP", 00:21:35.929 "adrfam": "IPv4", 00:21:35.929 "traddr": "10.0.0.1", 00:21:35.929 "trsvcid": "46504" 00:21:35.929 }, 00:21:35.929 "auth": { 00:21:35.929 "state": "completed", 00:21:35.929 "digest": "sha384", 00:21:35.929 "dhgroup": "ffdhe8192" 00:21:35.929 } 00:21:35.929 } 00:21:35.929 ]' 00:21:35.929 14:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:35.929 14:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:35.929 14:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:35.929 14:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:35.929 14:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:35.929 14:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:35.929 14:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.929 14:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:36.189 14:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmQ0YzhiNTA0N2FiMjk2OGQ1MzZlN2ZkNmViNGI3ZWRjNjJhYjA1MDcwYTRmMzIyyBZewQ==: --dhchap-ctrl-secret DHHC-1:01:Zjc2Y2UyNTBmYTNiMGRmNzEzZDQwYWEzMjAzMDcwYmbdJk/J: 00:21:36.189 14:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NmQ0YzhiNTA0N2FiMjk2OGQ1MzZlN2ZkNmViNGI3ZWRjNjJhYjA1MDcwYTRmMzIyyBZewQ==: --dhchap-ctrl-secret DHHC-1:01:Zjc2Y2UyNTBmYTNiMGRmNzEzZDQwYWEzMjAzMDcwYmbdJk/J: 00:21:37.564 14:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:37.564 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:37.564 14:37:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:37.564 14:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.564 14:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.564 14:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.564 14:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:37.564 14:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:37.564 14:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:37.564 14:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:21:37.564 14:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:37.564 14:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:37.564 14:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:37.564 14:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:37.564 14:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:37.564 14:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:37.564 14:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.564 14:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.564 14:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.564 14:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:37.564 14:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:37.564 14:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:38.500 00:21:38.500 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:38.500 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:38.500 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.758 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.758 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:38.758 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.758 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.758 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.758 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:38.758 { 00:21:38.758 "cntlid": 95, 00:21:38.758 "qid": 0, 00:21:38.758 "state": "enabled", 00:21:38.758 "thread": "nvmf_tgt_poll_group_000", 00:21:38.758 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:38.758 "listen_address": { 00:21:38.758 "trtype": "TCP", 00:21:38.758 "adrfam": "IPv4", 00:21:38.758 "traddr": "10.0.0.2", 00:21:38.758 "trsvcid": "4420" 00:21:38.758 }, 00:21:38.758 "peer_address": { 00:21:38.758 "trtype": "TCP", 00:21:38.758 "adrfam": "IPv4", 00:21:38.758 "traddr": "10.0.0.1", 00:21:38.758 "trsvcid": "46540" 00:21:38.758 }, 00:21:38.758 "auth": { 00:21:38.758 "state": "completed", 00:21:38.758 "digest": "sha384", 00:21:38.758 "dhgroup": "ffdhe8192" 00:21:38.758 } 00:21:38.758 } 00:21:38.758 ]' 00:21:38.758 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:38.758 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:38.758 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:38.758 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:38.758 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:39.016 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:39.016 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:39.016 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:39.276 14:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmI0YTZiMjg0Mzc2NTY5ZjIyODQ2OWVmYWRhOWZmYmI5Mjg1NWUwZGFlN2NhNWJhOTNkNWRjOTZlMTk4OTQ3MYxiaOU=: 00:21:39.276 14:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NmI0YTZiMjg0Mzc2NTY5ZjIyODQ2OWVmYWRhOWZmYmI5Mjg1NWUwZGFlN2NhNWJhOTNkNWRjOTZlMTk4OTQ3MYxiaOU=: 00:21:40.213 14:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:40.213 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:40.213 14:37:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:40.213 14:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.213 14:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.213 14:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.213 14:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:40.213 14:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:40.213 14:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:40.213 14:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:40.213 14:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:40.471 14:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:21:40.471 14:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:40.471 14:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:40.471 14:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:40.471 14:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:40.471 14:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:40.471 14:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:40.471 14:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.471 14:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.471 14:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.471 14:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:40.471 14:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:40.471 14:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:40.730 00:21:40.730 
14:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:40.730 14:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:40.730 14:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:40.988 14:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.988 14:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:40.988 14:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.988 14:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.988 14:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.988 14:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:40.988 { 00:21:40.988 "cntlid": 97, 00:21:40.988 "qid": 0, 00:21:40.988 "state": "enabled", 00:21:40.988 "thread": "nvmf_tgt_poll_group_000", 00:21:40.988 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:40.988 "listen_address": { 00:21:40.988 "trtype": "TCP", 00:21:40.988 "adrfam": "IPv4", 00:21:40.988 "traddr": "10.0.0.2", 00:21:40.988 "trsvcid": "4420" 00:21:40.988 }, 00:21:40.988 "peer_address": { 00:21:40.988 "trtype": "TCP", 00:21:40.988 "adrfam": "IPv4", 00:21:40.988 "traddr": "10.0.0.1", 00:21:40.988 "trsvcid": "46548" 00:21:40.988 }, 00:21:40.988 "auth": { 00:21:40.988 "state": "completed", 00:21:40.988 "digest": "sha512", 00:21:40.988 "dhgroup": "null" 00:21:40.988 } 00:21:40.988 } 00:21:40.988 ]' 00:21:40.988 14:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:41.246 14:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:41.246 14:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:41.246 14:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:41.247 14:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:41.247 14:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:41.247 14:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:41.247 14:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:41.505 14:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGY4ZjExYzE4MjdjZjZkMWUyYzNlYTNkMmY2YjllODZhYjI1ZWQwNTgzY2M1MGM1q3J9vw==: --dhchap-ctrl-secret DHHC-1:03:ODY0ZTQwMjhlM2RmNTk4MDU4NTVjMTFmMmQ1NzIyOWFjMTVjZGE1YjQ4NzI0NGJmNTIxZTIyM2I1MzI4YmM2YlhOMuo=: 00:21:41.505 14:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZGY4ZjExYzE4MjdjZjZkMWUyYzNlYTNkMmY2YjllODZhYjI1ZWQwNTgzY2M1MGM1q3J9vw==: --dhchap-ctrl-secret DHHC-1:03:ODY0ZTQwMjhlM2RmNTk4MDU4NTVjMTFmMmQ1NzIyOWFjMTVjZGE1YjQ4NzI0NGJmNTIxZTIyM2I1MzI4YmM2YlhOMuo=: 00:21:42.442 14:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:42.442 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:42.442 14:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:42.442 14:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.442 14:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.442 14:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.442 14:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:42.442 14:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:42.442 14:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:42.702 14:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:21:42.702 14:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:42.702 14:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:42.702 14:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:42.702 14:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:42.702 14:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:42.702 14:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:42.702 14:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.702 14:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.962 14:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.962 14:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:42.962 14:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:42.962 14:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.222 00:21:43.222 14:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:43.222 14:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:43.222 14:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:43.481 14:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.481 14:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:43.481 14:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.481 14:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.481 14:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.481 14:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:43.481 { 00:21:43.481 "cntlid": 99, 00:21:43.481 "qid": 0, 00:21:43.481 "state": "enabled", 00:21:43.481 "thread": "nvmf_tgt_poll_group_000", 00:21:43.481 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:43.481 "listen_address": { 00:21:43.481 "trtype": "TCP", 00:21:43.481 "adrfam": "IPv4", 00:21:43.481 "traddr": "10.0.0.2", 00:21:43.481 "trsvcid": "4420" 00:21:43.481 }, 00:21:43.481 "peer_address": { 00:21:43.481 "trtype": "TCP", 00:21:43.481 "adrfam": "IPv4", 00:21:43.481 "traddr": "10.0.0.1", 00:21:43.481 "trsvcid": "46562" 00:21:43.481 }, 00:21:43.481 "auth": { 00:21:43.481 "state": "completed", 00:21:43.481 "digest": "sha512", 00:21:43.481 "dhgroup": "null" 00:21:43.481 } 00:21:43.481 } 00:21:43.481 ]' 00:21:43.481 14:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:43.481 14:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:43.481 14:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:43.481 14:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:43.481 14:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:43.481 14:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:43.481 14:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:43.481 14:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:43.739 14:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzZhYzU5MDA2MTFiMWQyMzk1MDAxNmZmMzVkZjA3ZjHc6Dc9: --dhchap-ctrl-secret DHHC-1:02:YmNlZTU4NDE3MWRiNGJlMDMzYjRlZjUxZjg4MGZlZTE2NzM3ZWQ1OTI4ODgwNDFiDtozkg==: 00:21:43.739 14:37:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NzZhYzU5MDA2MTFiMWQyMzk1MDAxNmZmMzVkZjA3ZjHc6Dc9: --dhchap-ctrl-secret DHHC-1:02:YmNlZTU4NDE3MWRiNGJlMDMzYjRlZjUxZjg4MGZlZTE2NzM3ZWQ1OTI4ODgwNDFiDtozkg==: 00:21:44.674 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:44.674 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:44.674 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:44.674 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.674 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.932 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.932 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:44.932 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:44.932 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:45.191 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:21:45.191 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:45.191 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:45.191 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:45.191 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:45.191 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:45.191 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:45.191 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.191 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.191 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.191 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:45.191 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
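Every pass through the loops above has the same three-step shape: the host-side bdev_nvme layer is pinned to a single digest/DH-group pair, the target subsystem is told which DH-CHAP key (and, when present, bidirectional controller key) to expect from this host NQN, and a controller is attached so the DH-HMAC-CHAP handshake actually runs against that combination. The sketch below is a hypothetical condensation of one such round using the same RPCs that appear in the trace; the tgt_rpc/host_rpc wrappers, the assumption that the target app listens on the default RPC socket, and the pre-registered key names are illustrative placeholders, not the test suite's own helpers.

# One connect_authenticate-style round, reduced to its three RPC steps
# (values mirror the sha512/null/key2 round above; the key2/ckey2 names are
# assumed to have been registered with both apps earlier in the test).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
tgt_rpc()  { "$SPDK/scripts/rpc.py" "$@"; }                       # target app, default RPC socket (assumed)
host_rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/host.sock "$@"; } # host-side bdev_nvme app, as in the log

digest=sha512 dhgroup=null key=key2 ckey=ckey2
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

# 1. Restrict the initiator to exactly one digest / DH group combination.
host_rpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# 2. Tell the target which key(s) this host must authenticate with.
tgt_rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "$key" --dhchap-ctrlr-key "$ckey"

# 3. Attach a controller; the DH-HMAC-CHAP exchange happens during this connect.
host_rpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" -b nvme0 \
        --dhchap-key "$key" --dhchap-ctrlr-key "$ckey"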
00:21:45.191 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:45.449 00:21:45.449 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:45.449 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:45.449 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:45.707 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.707 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:45.707 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.707 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.707 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.707 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:45.707 { 00:21:45.707 "cntlid": 101, 00:21:45.707 "qid": 0, 00:21:45.707 "state": "enabled", 00:21:45.707 "thread": "nvmf_tgt_poll_group_000", 00:21:45.707 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:45.707 "listen_address": { 00:21:45.707 "trtype": "TCP", 00:21:45.707 "adrfam": "IPv4", 00:21:45.707 "traddr": "10.0.0.2", 00:21:45.707 "trsvcid": "4420" 00:21:45.707 }, 00:21:45.707 "peer_address": { 00:21:45.707 "trtype": "TCP", 00:21:45.707 "adrfam": "IPv4", 00:21:45.707 "traddr": "10.0.0.1", 00:21:45.707 "trsvcid": "42788" 00:21:45.707 }, 00:21:45.707 "auth": { 00:21:45.707 "state": "completed", 00:21:45.707 "digest": "sha512", 00:21:45.707 "dhgroup": "null" 00:21:45.707 } 00:21:45.707 } 00:21:45.707 ]' 00:21:45.707 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:45.707 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:45.707 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:45.707 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:45.707 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:45.707 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:45.707 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:45.707 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:46.276 14:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:NmQ0YzhiNTA0N2FiMjk2OGQ1MzZlN2ZkNmViNGI3ZWRjNjJhYjA1MDcwYTRmMzIyyBZewQ==: --dhchap-ctrl-secret DHHC-1:01:Zjc2Y2UyNTBmYTNiMGRmNzEzZDQwYWEzMjAzMDcwYmbdJk/J: 00:21:46.276 14:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NmQ0YzhiNTA0N2FiMjk2OGQ1MzZlN2ZkNmViNGI3ZWRjNjJhYjA1MDcwYTRmMzIyyBZewQ==: --dhchap-ctrl-secret DHHC-1:01:Zjc2Y2UyNTBmYTNiMGRmNzEzZDQwYWEzMjAzMDcwYmbdJk/J: 00:21:47.213 14:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:47.213 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:47.213 14:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:47.213 14:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.213 14:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.213 14:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.213 14:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:47.213 14:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:47.213 14:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:47.489 14:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:21:47.489 14:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:47.489 14:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:47.489 14:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:47.489 14:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:47.489 14:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:47.489 14:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:47.489 14:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.489 14:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.489 14:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.489 14:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:47.489 14:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:47.489 14:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:47.768 00:21:47.769 14:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:47.769 14:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:47.769 14:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:48.034 14:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:48.034 14:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:48.034 14:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.034 14:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.034 14:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.034 14:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:48.034 { 00:21:48.034 "cntlid": 103, 00:21:48.034 "qid": 0, 00:21:48.034 "state": "enabled", 00:21:48.034 "thread": "nvmf_tgt_poll_group_000", 00:21:48.034 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:48.034 "listen_address": { 00:21:48.034 "trtype": "TCP", 00:21:48.034 "adrfam": "IPv4", 00:21:48.034 "traddr": "10.0.0.2", 00:21:48.034 "trsvcid": "4420" 00:21:48.034 }, 00:21:48.034 "peer_address": { 00:21:48.034 "trtype": "TCP", 00:21:48.034 "adrfam": "IPv4", 00:21:48.034 "traddr": "10.0.0.1", 00:21:48.034 "trsvcid": "42808" 00:21:48.034 }, 00:21:48.034 "auth": { 00:21:48.034 "state": "completed", 00:21:48.034 "digest": "sha512", 00:21:48.034 "dhgroup": "null" 00:21:48.034 } 00:21:48.034 } 00:21:48.034 ]' 00:21:48.034 14:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:48.034 14:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:48.034 14:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:48.034 14:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:48.034 14:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:48.291 14:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:48.291 14:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:48.291 14:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:48.548 14:37:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmI0YTZiMjg0Mzc2NTY5ZjIyODQ2OWVmYWRhOWZmYmI5Mjg1NWUwZGFlN2NhNWJhOTNkNWRjOTZlMTk4OTQ3MYxiaOU=: 00:21:48.548 14:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NmI0YTZiMjg0Mzc2NTY5ZjIyODQ2OWVmYWRhOWZmYmI5Mjg1NWUwZGFlN2NhNWJhOTNkNWRjOTZlMTk4OTQ3MYxiaOU=: 00:21:49.483 14:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:49.483 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:49.483 14:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:49.483 14:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.483 14:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.483 14:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.483 14:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:49.483 14:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:49.483 14:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:49.483 14:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:49.743 14:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:21:49.743 14:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:49.743 14:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:49.743 14:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:49.743 14:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:49.743 14:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:49.743 14:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:49.743 14:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.743 14:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.743 14:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.743 14:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
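A bdev_connect like the one just above expands into the bdev_nvme_attach_controller RPC shown next, and each round is then verified the same way: the host must report exactly the controller that was attached, and the target's view of the new qpair must show a completed DH-HMAC-CHAP negotiation with the digest and DH group configured for that round. A minimal stand-alone sketch of that check follows; the wrapper names are the same hypothetical placeholders as in the earlier sketch, and the expected values are lifted from this sha512/ffdhe2048/key0 round.

# Verification step, as repeated after every attach in the trace.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
tgt_rpc()  { "$SPDK/scripts/rpc.py" "$@"; }
host_rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/host.sock "$@"; }

# Host side: exactly one controller, named nvme0, must exist.
[[ "$(host_rpc bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]

# Target side: the qpair's auth object must show the negotiated parameters.
qpairs=$(tgt_rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ "$(jq -r '.[0].auth.digest'  <<< "$qpairs")" == sha512 ]]
[[ "$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")" == ffdhe2048 ]]
[[ "$(jq -r '.[0].auth.state'   <<< "$qpairs")" == completed ]]

# Tear down before the next digest/dhgroup/key combination is tested.
host_rpc bdev_nvme_detach_controller nvme0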
00:21:49.743 14:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:49.743 14:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:50.311 00:21:50.311 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:50.311 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:50.311 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:50.569 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.569 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:50.569 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.569 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.569 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.569 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:50.569 { 00:21:50.569 "cntlid": 105, 00:21:50.569 "qid": 0, 00:21:50.569 "state": "enabled", 00:21:50.569 "thread": "nvmf_tgt_poll_group_000", 00:21:50.569 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:50.569 "listen_address": { 00:21:50.569 "trtype": "TCP", 00:21:50.569 "adrfam": "IPv4", 00:21:50.569 "traddr": "10.0.0.2", 00:21:50.569 "trsvcid": "4420" 00:21:50.569 }, 00:21:50.569 "peer_address": { 00:21:50.569 "trtype": "TCP", 00:21:50.569 "adrfam": "IPv4", 00:21:50.569 "traddr": "10.0.0.1", 00:21:50.569 "trsvcid": "42820" 00:21:50.569 }, 00:21:50.569 "auth": { 00:21:50.569 "state": "completed", 00:21:50.569 "digest": "sha512", 00:21:50.569 "dhgroup": "ffdhe2048" 00:21:50.569 } 00:21:50.569 } 00:21:50.569 ]' 00:21:50.569 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:50.569 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:50.569 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:50.569 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:50.569 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:50.569 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:50.569 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:50.569 14:37:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:50.828 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGY4ZjExYzE4MjdjZjZkMWUyYzNlYTNkMmY2YjllODZhYjI1ZWQwNTgzY2M1MGM1q3J9vw==: --dhchap-ctrl-secret DHHC-1:03:ODY0ZTQwMjhlM2RmNTk4MDU4NTVjMTFmMmQ1NzIyOWFjMTVjZGE1YjQ4NzI0NGJmNTIxZTIyM2I1MzI4YmM2YlhOMuo=: 00:21:50.828 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZGY4ZjExYzE4MjdjZjZkMWUyYzNlYTNkMmY2YjllODZhYjI1ZWQwNTgzY2M1MGM1q3J9vw==: --dhchap-ctrl-secret DHHC-1:03:ODY0ZTQwMjhlM2RmNTk4MDU4NTVjMTFmMmQ1NzIyOWFjMTVjZGE1YjQ4NzI0NGJmNTIxZTIyM2I1MzI4YmM2YlhOMuo=: 00:21:51.830 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:51.830 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:51.830 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:51.830 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.830 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.830 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.830 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:51.830 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:51.830 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:52.088 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:21:52.088 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:52.088 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:52.088 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:52.088 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:52.088 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:52.088 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:52.088 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.088 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:52.088 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.088 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:52.088 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:52.088 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:52.656 00:21:52.656 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:52.656 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:52.656 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.915 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.915 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:52.915 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.915 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.915 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.915 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:52.915 { 00:21:52.915 "cntlid": 107, 00:21:52.915 "qid": 0, 00:21:52.915 "state": "enabled", 00:21:52.915 "thread": "nvmf_tgt_poll_group_000", 00:21:52.915 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:52.915 "listen_address": { 00:21:52.915 "trtype": "TCP", 00:21:52.915 "adrfam": "IPv4", 00:21:52.915 "traddr": "10.0.0.2", 00:21:52.915 "trsvcid": "4420" 00:21:52.915 }, 00:21:52.915 "peer_address": { 00:21:52.915 "trtype": "TCP", 00:21:52.915 "adrfam": "IPv4", 00:21:52.915 "traddr": "10.0.0.1", 00:21:52.915 "trsvcid": "42844" 00:21:52.915 }, 00:21:52.915 "auth": { 00:21:52.915 "state": "completed", 00:21:52.915 "digest": "sha512", 00:21:52.915 "dhgroup": "ffdhe2048" 00:21:52.915 } 00:21:52.915 } 00:21:52.915 ]' 00:21:52.915 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:52.915 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:52.915 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:52.915 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:52.915 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:21:52.915 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:52.915 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:52.915 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:53.175 14:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzZhYzU5MDA2MTFiMWQyMzk1MDAxNmZmMzVkZjA3ZjHc6Dc9: --dhchap-ctrl-secret DHHC-1:02:YmNlZTU4NDE3MWRiNGJlMDMzYjRlZjUxZjg4MGZlZTE2NzM3ZWQ1OTI4ODgwNDFiDtozkg==: 00:21:53.175 14:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NzZhYzU5MDA2MTFiMWQyMzk1MDAxNmZmMzVkZjA3ZjHc6Dc9: --dhchap-ctrl-secret DHHC-1:02:YmNlZTU4NDE3MWRiNGJlMDMzYjRlZjUxZjg4MGZlZTE2NzM3ZWQ1OTI4ODgwNDFiDtozkg==: 00:21:54.111 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:54.370 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:54.370 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:54.370 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.370 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.370 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.370 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:54.370 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:54.370 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:54.628 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:21:54.628 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:54.628 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:54.628 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:54.628 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:54.628 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:54.628 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
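After each attach, the test does not rely on the connect merely succeeding; it reads the qpair back from the target and checks the negotiated authentication parameters, which is what the nvmf_subsystem_get_qpairs output and the jq probes above are doing. Condensed into a standalone check for these ffdhe2048 rounds (same RPC and jq filters as in the trace; the auth.sh wrapper functions are bypassed here):

# Query the subsystem's qpairs on the target and assert the negotiated auth
# parameters: digest, DH group, and that the authentication state completed.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]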
00:21:54.628 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.628 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.628 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.628 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:54.628 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:54.628 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:54.888 00:21:54.888 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:54.888 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:54.888 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:55.147 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.147 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:55.147 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.147 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.147 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.147 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:55.147 { 00:21:55.147 "cntlid": 109, 00:21:55.147 "qid": 0, 00:21:55.147 "state": "enabled", 00:21:55.147 "thread": "nvmf_tgt_poll_group_000", 00:21:55.147 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:55.147 "listen_address": { 00:21:55.147 "trtype": "TCP", 00:21:55.147 "adrfam": "IPv4", 00:21:55.147 "traddr": "10.0.0.2", 00:21:55.147 "trsvcid": "4420" 00:21:55.147 }, 00:21:55.147 "peer_address": { 00:21:55.147 "trtype": "TCP", 00:21:55.147 "adrfam": "IPv4", 00:21:55.147 "traddr": "10.0.0.1", 00:21:55.147 "trsvcid": "42636" 00:21:55.147 }, 00:21:55.147 "auth": { 00:21:55.147 "state": "completed", 00:21:55.147 "digest": "sha512", 00:21:55.147 "dhgroup": "ffdhe2048" 00:21:55.147 } 00:21:55.147 } 00:21:55.147 ]' 00:21:55.147 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:55.147 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:55.147 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:55.147 14:37:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:55.147 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:55.405 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:55.405 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:55.405 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:55.662 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmQ0YzhiNTA0N2FiMjk2OGQ1MzZlN2ZkNmViNGI3ZWRjNjJhYjA1MDcwYTRmMzIyyBZewQ==: --dhchap-ctrl-secret DHHC-1:01:Zjc2Y2UyNTBmYTNiMGRmNzEzZDQwYWEzMjAzMDcwYmbdJk/J: 00:21:55.662 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NmQ0YzhiNTA0N2FiMjk2OGQ1MzZlN2ZkNmViNGI3ZWRjNjJhYjA1MDcwYTRmMzIyyBZewQ==: --dhchap-ctrl-secret DHHC-1:01:Zjc2Y2UyNTBmYTNiMGRmNzEzZDQwYWEzMjAzMDcwYmbdJk/J: 00:21:56.597 14:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:56.597 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:56.597 14:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:56.597 14:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.597 14:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.597 14:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.597 14:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:56.597 14:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:56.597 14:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:56.855 14:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:21:56.855 14:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:56.855 14:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:56.855 14:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:56.855 14:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:56.855 14:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:56.855 14:37:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:56.855 14:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.855 14:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.855 14:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.855 14:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:56.855 14:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:56.855 14:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:57.124 00:21:57.124 14:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:57.124 14:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:57.124 14:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:57.384 14:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.384 14:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:57.385 14:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.385 14:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.385 14:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.385 14:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:57.385 { 00:21:57.385 "cntlid": 111, 00:21:57.385 "qid": 0, 00:21:57.385 "state": "enabled", 00:21:57.385 "thread": "nvmf_tgt_poll_group_000", 00:21:57.385 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:57.385 "listen_address": { 00:21:57.385 "trtype": "TCP", 00:21:57.385 "adrfam": "IPv4", 00:21:57.385 "traddr": "10.0.0.2", 00:21:57.385 "trsvcid": "4420" 00:21:57.385 }, 00:21:57.385 "peer_address": { 00:21:57.385 "trtype": "TCP", 00:21:57.385 "adrfam": "IPv4", 00:21:57.385 "traddr": "10.0.0.1", 00:21:57.385 "trsvcid": "42676" 00:21:57.385 }, 00:21:57.385 "auth": { 00:21:57.385 "state": "completed", 00:21:57.385 "digest": "sha512", 00:21:57.385 "dhgroup": "ffdhe2048" 00:21:57.385 } 00:21:57.385 } 00:21:57.385 ]' 00:21:57.385 14:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:57.385 14:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:57.385 
14:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:57.385 14:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:57.385 14:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:57.643 14:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:57.643 14:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:57.643 14:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:57.901 14:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmI0YTZiMjg0Mzc2NTY5ZjIyODQ2OWVmYWRhOWZmYmI5Mjg1NWUwZGFlN2NhNWJhOTNkNWRjOTZlMTk4OTQ3MYxiaOU=: 00:21:57.901 14:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NmI0YTZiMjg0Mzc2NTY5ZjIyODQ2OWVmYWRhOWZmYmI5Mjg1NWUwZGFlN2NhNWJhOTNkNWRjOTZlMTk4OTQ3MYxiaOU=: 00:21:58.835 14:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:58.835 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:58.835 14:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:58.835 14:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.835 14:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.835 14:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.835 14:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:58.835 14:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:58.835 14:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:58.835 14:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:59.094 14:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:21:59.094 14:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:59.094 14:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:59.094 14:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:59.094 14:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:59.094 14:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:59.094 14:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:59.094 14:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.094 14:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.094 14:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.094 14:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:59.094 14:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:59.094 14:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:59.660 00:21:59.660 14:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:59.660 14:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:59.660 14:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:59.918 14:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.918 14:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:59.918 14:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.918 14:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.918 14:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.918 14:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:59.918 { 00:21:59.918 "cntlid": 113, 00:21:59.918 "qid": 0, 00:21:59.918 "state": "enabled", 00:21:59.918 "thread": "nvmf_tgt_poll_group_000", 00:21:59.918 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:59.918 "listen_address": { 00:21:59.918 "trtype": "TCP", 00:21:59.918 "adrfam": "IPv4", 00:21:59.918 "traddr": "10.0.0.2", 00:21:59.918 "trsvcid": "4420" 00:21:59.918 }, 00:21:59.918 "peer_address": { 00:21:59.918 "trtype": "TCP", 00:21:59.918 "adrfam": "IPv4", 00:21:59.918 "traddr": "10.0.0.1", 00:21:59.918 "trsvcid": "42712" 00:21:59.918 }, 00:21:59.918 "auth": { 00:21:59.918 "state": "completed", 00:21:59.918 "digest": "sha512", 00:21:59.918 "dhgroup": "ffdhe3072" 00:21:59.918 } 00:21:59.918 } 00:21:59.918 ]' 00:21:59.918 14:37:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:59.918 14:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:59.918 14:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:59.918 14:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:59.918 14:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:59.918 14:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:59.918 14:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:59.918 14:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:00.176 14:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGY4ZjExYzE4MjdjZjZkMWUyYzNlYTNkMmY2YjllODZhYjI1ZWQwNTgzY2M1MGM1q3J9vw==: --dhchap-ctrl-secret DHHC-1:03:ODY0ZTQwMjhlM2RmNTk4MDU4NTVjMTFmMmQ1NzIyOWFjMTVjZGE1YjQ4NzI0NGJmNTIxZTIyM2I1MzI4YmM2YlhOMuo=: 00:22:00.176 14:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZGY4ZjExYzE4MjdjZjZkMWUyYzNlYTNkMmY2YjllODZhYjI1ZWQwNTgzY2M1MGM1q3J9vw==: --dhchap-ctrl-secret DHHC-1:03:ODY0ZTQwMjhlM2RmNTk4MDU4NTVjMTFmMmQ1NzIyOWFjMTVjZGE1YjQ4NzI0NGJmNTIxZTIyM2I1MzI4YmM2YlhOMuo=: 00:22:01.111 14:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:01.111 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:01.111 14:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:01.111 14:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.111 14:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.371 14:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.371 14:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:01.371 14:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:01.371 14:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:01.630 14:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:22:01.630 14:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:01.630 14:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:22:01.630 14:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:01.630 14:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:01.630 14:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:01.630 14:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:01.630 14:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.630 14:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.630 14:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.630 14:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:01.630 14:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:01.630 14:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:01.888 00:22:01.888 14:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:01.888 14:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:01.888 14:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:02.146 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.146 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:02.146 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.146 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.146 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.146 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:02.146 { 00:22:02.146 "cntlid": 115, 00:22:02.146 "qid": 0, 00:22:02.146 "state": "enabled", 00:22:02.146 "thread": "nvmf_tgt_poll_group_000", 00:22:02.146 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:02.146 "listen_address": { 00:22:02.146 "trtype": "TCP", 00:22:02.146 "adrfam": "IPv4", 00:22:02.146 "traddr": "10.0.0.2", 00:22:02.146 "trsvcid": "4420" 00:22:02.146 }, 00:22:02.146 "peer_address": { 00:22:02.146 "trtype": "TCP", 00:22:02.146 "adrfam": "IPv4", 
00:22:02.146 "traddr": "10.0.0.1", 00:22:02.146 "trsvcid": "42736" 00:22:02.146 }, 00:22:02.146 "auth": { 00:22:02.146 "state": "completed", 00:22:02.146 "digest": "sha512", 00:22:02.146 "dhgroup": "ffdhe3072" 00:22:02.146 } 00:22:02.146 } 00:22:02.146 ]' 00:22:02.146 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:02.146 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:02.146 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:02.146 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:02.146 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:02.404 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:02.404 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:02.404 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:02.662 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzZhYzU5MDA2MTFiMWQyMzk1MDAxNmZmMzVkZjA3ZjHc6Dc9: --dhchap-ctrl-secret DHHC-1:02:YmNlZTU4NDE3MWRiNGJlMDMzYjRlZjUxZjg4MGZlZTE2NzM3ZWQ1OTI4ODgwNDFiDtozkg==: 00:22:02.662 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NzZhYzU5MDA2MTFiMWQyMzk1MDAxNmZmMzVkZjA3ZjHc6Dc9: --dhchap-ctrl-secret DHHC-1:02:YmNlZTU4NDE3MWRiNGJlMDMzYjRlZjUxZjg4MGZlZTE2NzM3ZWQ1OTI4ODgwNDFiDtozkg==: 00:22:03.599 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:03.600 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:03.600 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:03.600 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.600 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.600 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.600 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:03.600 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:03.600 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:03.858 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:22:03.858 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:03.858 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:03.858 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:03.858 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:03.858 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:03.858 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:03.858 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.858 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.858 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.858 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:03.858 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:03.858 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:04.427 00:22:04.427 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:04.427 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:04.427 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:04.686 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:04.686 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:04.686 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.686 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.686 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.686 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:04.686 { 00:22:04.686 "cntlid": 117, 00:22:04.686 "qid": 0, 00:22:04.686 "state": "enabled", 00:22:04.686 "thread": "nvmf_tgt_poll_group_000", 00:22:04.686 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:04.686 "listen_address": { 00:22:04.686 "trtype": "TCP", 
00:22:04.686 "adrfam": "IPv4", 00:22:04.686 "traddr": "10.0.0.2", 00:22:04.686 "trsvcid": "4420" 00:22:04.686 }, 00:22:04.686 "peer_address": { 00:22:04.686 "trtype": "TCP", 00:22:04.686 "adrfam": "IPv4", 00:22:04.686 "traddr": "10.0.0.1", 00:22:04.686 "trsvcid": "60860" 00:22:04.686 }, 00:22:04.686 "auth": { 00:22:04.686 "state": "completed", 00:22:04.686 "digest": "sha512", 00:22:04.686 "dhgroup": "ffdhe3072" 00:22:04.686 } 00:22:04.686 } 00:22:04.686 ]' 00:22:04.686 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:04.686 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:04.686 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:04.686 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:04.686 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:04.686 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:04.686 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:04.686 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:04.944 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmQ0YzhiNTA0N2FiMjk2OGQ1MzZlN2ZkNmViNGI3ZWRjNjJhYjA1MDcwYTRmMzIyyBZewQ==: --dhchap-ctrl-secret DHHC-1:01:Zjc2Y2UyNTBmYTNiMGRmNzEzZDQwYWEzMjAzMDcwYmbdJk/J: 00:22:04.944 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NmQ0YzhiNTA0N2FiMjk2OGQ1MzZlN2ZkNmViNGI3ZWRjNjJhYjA1MDcwYTRmMzIyyBZewQ==: --dhchap-ctrl-secret DHHC-1:01:Zjc2Y2UyNTBmYTNiMGRmNzEzZDQwYWEzMjAzMDcwYmbdJk/J: 00:22:05.880 14:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:05.880 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:05.880 14:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:05.880 14:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.880 14:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.880 14:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.880 14:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:05.880 14:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:05.880 14:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:06.448 14:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:22:06.448 14:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:06.448 14:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:06.448 14:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:06.448 14:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:06.448 14:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:06.448 14:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:06.448 14:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.448 14:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.448 14:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.448 14:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:06.448 14:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:06.448 14:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:06.707 00:22:06.707 14:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:06.707 14:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:06.707 14:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:06.965 14:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.965 14:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:06.965 14:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.965 14:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.965 14:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.965 14:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:06.965 { 00:22:06.965 "cntlid": 119, 00:22:06.965 "qid": 0, 00:22:06.965 "state": "enabled", 00:22:06.965 "thread": "nvmf_tgt_poll_group_000", 00:22:06.965 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:06.965 "listen_address": { 00:22:06.965 "trtype": "TCP", 00:22:06.965 "adrfam": "IPv4", 00:22:06.965 "traddr": "10.0.0.2", 00:22:06.965 "trsvcid": "4420" 00:22:06.965 }, 00:22:06.965 "peer_address": { 00:22:06.965 "trtype": "TCP", 00:22:06.965 "adrfam": "IPv4", 00:22:06.965 "traddr": "10.0.0.1", 00:22:06.965 "trsvcid": "60888" 00:22:06.965 }, 00:22:06.965 "auth": { 00:22:06.965 "state": "completed", 00:22:06.965 "digest": "sha512", 00:22:06.965 "dhgroup": "ffdhe3072" 00:22:06.965 } 00:22:06.965 } 00:22:06.965 ]' 00:22:06.965 14:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:06.965 14:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:06.965 14:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:06.965 14:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:06.965 14:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:07.222 14:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:07.222 14:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:07.222 14:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:07.480 14:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmI0YTZiMjg0Mzc2NTY5ZjIyODQ2OWVmYWRhOWZmYmI5Mjg1NWUwZGFlN2NhNWJhOTNkNWRjOTZlMTk4OTQ3MYxiaOU=: 00:22:07.480 14:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NmI0YTZiMjg0Mzc2NTY5ZjIyODQ2OWVmYWRhOWZmYmI5Mjg1NWUwZGFlN2NhNWJhOTNkNWRjOTZlMTk4OTQ3MYxiaOU=: 00:22:08.433 14:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:08.433 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:08.433 14:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:08.433 14:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.433 14:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.433 14:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.433 14:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:08.433 14:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:08.433 14:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:08.433 14:38:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:08.692 14:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:22:08.692 14:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:08.692 14:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:08.692 14:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:08.692 14:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:08.692 14:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:08.692 14:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:08.692 14:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.692 14:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.692 14:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.692 14:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:08.692 14:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:08.692 14:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:08.950 00:22:08.950 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:08.950 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:08.950 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:09.517 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:09.517 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:09.517 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.517 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.517 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.517 14:38:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:09.517 { 00:22:09.517 "cntlid": 121, 00:22:09.517 "qid": 0, 00:22:09.517 "state": "enabled", 00:22:09.517 "thread": "nvmf_tgt_poll_group_000", 00:22:09.517 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:09.517 "listen_address": { 00:22:09.517 "trtype": "TCP", 00:22:09.517 "adrfam": "IPv4", 00:22:09.517 "traddr": "10.0.0.2", 00:22:09.517 "trsvcid": "4420" 00:22:09.517 }, 00:22:09.517 "peer_address": { 00:22:09.517 "trtype": "TCP", 00:22:09.517 "adrfam": "IPv4", 00:22:09.517 "traddr": "10.0.0.1", 00:22:09.517 "trsvcid": "60914" 00:22:09.517 }, 00:22:09.517 "auth": { 00:22:09.517 "state": "completed", 00:22:09.517 "digest": "sha512", 00:22:09.517 "dhgroup": "ffdhe4096" 00:22:09.517 } 00:22:09.517 } 00:22:09.517 ]' 00:22:09.517 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:09.517 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:09.518 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:09.518 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:09.518 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:09.518 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:09.518 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:09.518 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:09.777 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGY4ZjExYzE4MjdjZjZkMWUyYzNlYTNkMmY2YjllODZhYjI1ZWQwNTgzY2M1MGM1q3J9vw==: --dhchap-ctrl-secret DHHC-1:03:ODY0ZTQwMjhlM2RmNTk4MDU4NTVjMTFmMmQ1NzIyOWFjMTVjZGE1YjQ4NzI0NGJmNTIxZTIyM2I1MzI4YmM2YlhOMuo=: 00:22:09.777 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZGY4ZjExYzE4MjdjZjZkMWUyYzNlYTNkMmY2YjllODZhYjI1ZWQwNTgzY2M1MGM1q3J9vw==: --dhchap-ctrl-secret DHHC-1:03:ODY0ZTQwMjhlM2RmNTk4MDU4NTVjMTFmMmQ1NzIyOWFjMTVjZGE1YjQ4NzI0NGJmNTIxZTIyM2I1MzI4YmM2YlhOMuo=: 00:22:10.714 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:10.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:10.714 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:10.714 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.714 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.714 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:22:10.714 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:10.714 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:10.714 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:10.972 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:22:10.972 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:10.972 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:10.972 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:10.972 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:10.972 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:10.972 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:10.972 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.972 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.972 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.972 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:10.972 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:10.972 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:11.540 00:22:11.540 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:11.540 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:11.540 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:11.798 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.798 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:11.798 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.798 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.798 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.798 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:11.798 { 00:22:11.798 "cntlid": 123, 00:22:11.798 "qid": 0, 00:22:11.798 "state": "enabled", 00:22:11.798 "thread": "nvmf_tgt_poll_group_000", 00:22:11.798 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:11.798 "listen_address": { 00:22:11.798 "trtype": "TCP", 00:22:11.798 "adrfam": "IPv4", 00:22:11.798 "traddr": "10.0.0.2", 00:22:11.798 "trsvcid": "4420" 00:22:11.798 }, 00:22:11.798 "peer_address": { 00:22:11.798 "trtype": "TCP", 00:22:11.798 "adrfam": "IPv4", 00:22:11.798 "traddr": "10.0.0.1", 00:22:11.798 "trsvcid": "60942" 00:22:11.798 }, 00:22:11.798 "auth": { 00:22:11.798 "state": "completed", 00:22:11.798 "digest": "sha512", 00:22:11.798 "dhgroup": "ffdhe4096" 00:22:11.798 } 00:22:11.798 } 00:22:11.798 ]' 00:22:11.798 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:11.798 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:11.798 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:11.798 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:11.798 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:11.798 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:11.798 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:11.798 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:12.056 14:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzZhYzU5MDA2MTFiMWQyMzk1MDAxNmZmMzVkZjA3ZjHc6Dc9: --dhchap-ctrl-secret DHHC-1:02:YmNlZTU4NDE3MWRiNGJlMDMzYjRlZjUxZjg4MGZlZTE2NzM3ZWQ1OTI4ODgwNDFiDtozkg==: 00:22:12.056 14:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NzZhYzU5MDA2MTFiMWQyMzk1MDAxNmZmMzVkZjA3ZjHc6Dc9: --dhchap-ctrl-secret DHHC-1:02:YmNlZTU4NDE3MWRiNGJlMDMzYjRlZjUxZjg4MGZlZTE2NzM3ZWQ1OTI4ODgwNDFiDtozkg==: 00:22:12.990 14:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:12.990 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:12.990 14:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:12.990 14:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.990 14:38:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.990 14:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.990 14:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:12.990 14:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:12.990 14:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:13.248 14:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:22:13.248 14:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:13.248 14:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:13.248 14:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:13.248 14:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:13.248 14:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:13.248 14:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:13.248 14:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.248 14:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.248 14:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.248 14:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:13.248 14:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:13.248 14:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:13.813 00:22:13.813 14:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:13.813 14:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:13.813 14:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:14.072 14:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.072 14:38:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:14.072 14:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.072 14:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.072 14:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.072 14:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:14.072 { 00:22:14.072 "cntlid": 125, 00:22:14.072 "qid": 0, 00:22:14.072 "state": "enabled", 00:22:14.072 "thread": "nvmf_tgt_poll_group_000", 00:22:14.072 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:14.072 "listen_address": { 00:22:14.072 "trtype": "TCP", 00:22:14.072 "adrfam": "IPv4", 00:22:14.072 "traddr": "10.0.0.2", 00:22:14.072 "trsvcid": "4420" 00:22:14.072 }, 00:22:14.072 "peer_address": { 00:22:14.072 "trtype": "TCP", 00:22:14.072 "adrfam": "IPv4", 00:22:14.072 "traddr": "10.0.0.1", 00:22:14.072 "trsvcid": "58952" 00:22:14.072 }, 00:22:14.072 "auth": { 00:22:14.072 "state": "completed", 00:22:14.072 "digest": "sha512", 00:22:14.072 "dhgroup": "ffdhe4096" 00:22:14.072 } 00:22:14.072 } 00:22:14.072 ]' 00:22:14.072 14:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:14.072 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:14.072 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:14.072 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:14.072 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:14.072 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:14.072 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:14.072 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:14.331 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmQ0YzhiNTA0N2FiMjk2OGQ1MzZlN2ZkNmViNGI3ZWRjNjJhYjA1MDcwYTRmMzIyyBZewQ==: --dhchap-ctrl-secret DHHC-1:01:Zjc2Y2UyNTBmYTNiMGRmNzEzZDQwYWEzMjAzMDcwYmbdJk/J: 00:22:14.331 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NmQ0YzhiNTA0N2FiMjk2OGQ1MzZlN2ZkNmViNGI3ZWRjNjJhYjA1MDcwYTRmMzIyyBZewQ==: --dhchap-ctrl-secret DHHC-1:01:Zjc2Y2UyNTBmYTNiMGRmNzEzZDQwYWEzMjAzMDcwYmbdJk/J: 00:22:15.267 14:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:15.267 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:15.267 14:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:15.267 14:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.267 14:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.525 14:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.525 14:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:15.525 14:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:15.525 14:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:15.816 14:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:22:15.816 14:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:15.816 14:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:15.816 14:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:15.816 14:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:15.816 14:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:15.816 14:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:15.816 14:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.816 14:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.816 14:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.816 14:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:15.816 14:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:15.816 14:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:16.098 00:22:16.098 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:16.098 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:16.098 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:16.357 14:38:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.357 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:16.357 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.357 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.357 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.357 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:16.357 { 00:22:16.357 "cntlid": 127, 00:22:16.357 "qid": 0, 00:22:16.357 "state": "enabled", 00:22:16.357 "thread": "nvmf_tgt_poll_group_000", 00:22:16.357 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:16.357 "listen_address": { 00:22:16.357 "trtype": "TCP", 00:22:16.357 "adrfam": "IPv4", 00:22:16.357 "traddr": "10.0.0.2", 00:22:16.357 "trsvcid": "4420" 00:22:16.357 }, 00:22:16.357 "peer_address": { 00:22:16.357 "trtype": "TCP", 00:22:16.357 "adrfam": "IPv4", 00:22:16.357 "traddr": "10.0.0.1", 00:22:16.357 "trsvcid": "58974" 00:22:16.357 }, 00:22:16.357 "auth": { 00:22:16.357 "state": "completed", 00:22:16.357 "digest": "sha512", 00:22:16.357 "dhgroup": "ffdhe4096" 00:22:16.357 } 00:22:16.357 } 00:22:16.357 ]' 00:22:16.357 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:16.357 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:16.357 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:16.357 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:16.357 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:16.615 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:16.615 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:16.615 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:16.873 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmI0YTZiMjg0Mzc2NTY5ZjIyODQ2OWVmYWRhOWZmYmI5Mjg1NWUwZGFlN2NhNWJhOTNkNWRjOTZlMTk4OTQ3MYxiaOU=: 00:22:16.873 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NmI0YTZiMjg0Mzc2NTY5ZjIyODQ2OWVmYWRhOWZmYmI5Mjg1NWUwZGFlN2NhNWJhOTNkNWRjOTZlMTk4OTQ3MYxiaOU=: 00:22:17.809 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:17.809 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:17.809 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:17.809 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.809 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.809 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.809 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:17.809 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:17.809 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:17.809 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:18.067 14:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:22:18.067 14:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:18.067 14:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:18.067 14:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:18.067 14:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:18.067 14:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:18.067 14:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:18.067 14:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.067 14:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.067 14:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.067 14:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:18.067 14:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:18.067 14:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:18.635 00:22:18.635 14:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:18.635 14:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:18.635 
14:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:18.893 14:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.893 14:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:18.893 14:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.893 14:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.893 14:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.893 14:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:18.893 { 00:22:18.893 "cntlid": 129, 00:22:18.893 "qid": 0, 00:22:18.893 "state": "enabled", 00:22:18.893 "thread": "nvmf_tgt_poll_group_000", 00:22:18.893 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:18.893 "listen_address": { 00:22:18.893 "trtype": "TCP", 00:22:18.893 "adrfam": "IPv4", 00:22:18.893 "traddr": "10.0.0.2", 00:22:18.893 "trsvcid": "4420" 00:22:18.893 }, 00:22:18.893 "peer_address": { 00:22:18.893 "trtype": "TCP", 00:22:18.893 "adrfam": "IPv4", 00:22:18.893 "traddr": "10.0.0.1", 00:22:18.893 "trsvcid": "58988" 00:22:18.893 }, 00:22:18.893 "auth": { 00:22:18.893 "state": "completed", 00:22:18.893 "digest": "sha512", 00:22:18.893 "dhgroup": "ffdhe6144" 00:22:18.893 } 00:22:18.893 } 00:22:18.893 ]' 00:22:18.893 14:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:18.893 14:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:18.893 14:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:19.152 14:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:19.152 14:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:19.152 14:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:19.152 14:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:19.152 14:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:19.410 14:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGY4ZjExYzE4MjdjZjZkMWUyYzNlYTNkMmY2YjllODZhYjI1ZWQwNTgzY2M1MGM1q3J9vw==: --dhchap-ctrl-secret DHHC-1:03:ODY0ZTQwMjhlM2RmNTk4MDU4NTVjMTFmMmQ1NzIyOWFjMTVjZGE1YjQ4NzI0NGJmNTIxZTIyM2I1MzI4YmM2YlhOMuo=: 00:22:19.410 14:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZGY4ZjExYzE4MjdjZjZkMWUyYzNlYTNkMmY2YjllODZhYjI1ZWQwNTgzY2M1MGM1q3J9vw==: --dhchap-ctrl-secret 
DHHC-1:03:ODY0ZTQwMjhlM2RmNTk4MDU4NTVjMTFmMmQ1NzIyOWFjMTVjZGE1YjQ4NzI0NGJmNTIxZTIyM2I1MzI4YmM2YlhOMuo=: 00:22:20.347 14:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:20.347 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:20.347 14:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:20.347 14:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.347 14:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.347 14:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.347 14:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:20.347 14:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:20.347 14:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:20.605 14:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:22:20.605 14:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:20.605 14:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:20.605 14:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:20.605 14:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:20.605 14:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:20.605 14:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:20.605 14:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.605 14:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.605 14:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.605 14:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:20.605 14:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:20.605 14:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:21.539 00:22:21.539 14:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:21.539 14:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:21.539 14:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:21.797 14:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.797 14:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:21.797 14:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.797 14:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.797 14:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.797 14:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:21.797 { 00:22:21.797 "cntlid": 131, 00:22:21.797 "qid": 0, 00:22:21.797 "state": "enabled", 00:22:21.797 "thread": "nvmf_tgt_poll_group_000", 00:22:21.797 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:21.797 "listen_address": { 00:22:21.797 "trtype": "TCP", 00:22:21.797 "adrfam": "IPv4", 00:22:21.797 "traddr": "10.0.0.2", 00:22:21.797 "trsvcid": "4420" 00:22:21.797 }, 00:22:21.797 "peer_address": { 00:22:21.797 "trtype": "TCP", 00:22:21.797 "adrfam": "IPv4", 00:22:21.797 "traddr": "10.0.0.1", 00:22:21.797 "trsvcid": "59012" 00:22:21.797 }, 00:22:21.797 "auth": { 00:22:21.797 "state": "completed", 00:22:21.797 "digest": "sha512", 00:22:21.797 "dhgroup": "ffdhe6144" 00:22:21.797 } 00:22:21.797 } 00:22:21.797 ]' 00:22:21.797 14:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:21.797 14:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:21.797 14:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:21.797 14:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:21.797 14:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:21.797 14:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:21.797 14:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:21.797 14:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:22.055 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzZhYzU5MDA2MTFiMWQyMzk1MDAxNmZmMzVkZjA3ZjHc6Dc9: --dhchap-ctrl-secret DHHC-1:02:YmNlZTU4NDE3MWRiNGJlMDMzYjRlZjUxZjg4MGZlZTE2NzM3ZWQ1OTI4ODgwNDFiDtozkg==: 00:22:22.055 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NzZhYzU5MDA2MTFiMWQyMzk1MDAxNmZmMzVkZjA3ZjHc6Dc9: --dhchap-ctrl-secret DHHC-1:02:YmNlZTU4NDE3MWRiNGJlMDMzYjRlZjUxZjg4MGZlZTE2NzM3ZWQ1OTI4ODgwNDFiDtozkg==: 00:22:22.989 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:22.989 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:22.989 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:22.989 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.989 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.989 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.989 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:22.989 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:22.989 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:23.247 14:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:22:23.247 14:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:23.247 14:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:23.247 14:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:23.247 14:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:23.247 14:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:23.247 14:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:23.247 14:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.247 14:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.247 14:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.247 14:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:23.247 14:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:23.247 14:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:23.813 00:22:23.813 14:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:23.813 14:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:23.813 14:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:24.071 14:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.071 14:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:24.071 14:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.071 14:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.071 14:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.071 14:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:24.071 { 00:22:24.071 "cntlid": 133, 00:22:24.071 "qid": 0, 00:22:24.071 "state": "enabled", 00:22:24.071 "thread": "nvmf_tgt_poll_group_000", 00:22:24.071 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:24.071 "listen_address": { 00:22:24.071 "trtype": "TCP", 00:22:24.071 "adrfam": "IPv4", 00:22:24.071 "traddr": "10.0.0.2", 00:22:24.071 "trsvcid": "4420" 00:22:24.071 }, 00:22:24.071 "peer_address": { 00:22:24.071 "trtype": "TCP", 00:22:24.071 "adrfam": "IPv4", 00:22:24.071 "traddr": "10.0.0.1", 00:22:24.071 "trsvcid": "58684" 00:22:24.071 }, 00:22:24.071 "auth": { 00:22:24.071 "state": "completed", 00:22:24.071 "digest": "sha512", 00:22:24.071 "dhgroup": "ffdhe6144" 00:22:24.071 } 00:22:24.071 } 00:22:24.071 ]' 00:22:24.071 14:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:24.329 14:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:24.329 14:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:24.329 14:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:24.329 14:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:24.329 14:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:24.329 14:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:24.329 14:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:24.587 14:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmQ0YzhiNTA0N2FiMjk2OGQ1MzZlN2ZkNmViNGI3ZWRjNjJhYjA1MDcwYTRmMzIyyBZewQ==: --dhchap-ctrl-secret 
DHHC-1:01:Zjc2Y2UyNTBmYTNiMGRmNzEzZDQwYWEzMjAzMDcwYmbdJk/J: 00:22:24.587 14:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NmQ0YzhiNTA0N2FiMjk2OGQ1MzZlN2ZkNmViNGI3ZWRjNjJhYjA1MDcwYTRmMzIyyBZewQ==: --dhchap-ctrl-secret DHHC-1:01:Zjc2Y2UyNTBmYTNiMGRmNzEzZDQwYWEzMjAzMDcwYmbdJk/J: 00:22:25.520 14:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:25.520 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:25.520 14:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:25.520 14:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.520 14:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.520 14:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.520 14:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:25.520 14:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:25.520 14:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:26.086 14:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:22:26.086 14:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:26.086 14:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:26.086 14:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:26.086 14:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:26.086 14:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:26.086 14:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:26.086 14:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.086 14:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.086 14:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.086 14:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:26.086 14:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:22:26.086 14:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:26.652 00:22:26.652 14:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:26.652 14:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:26.652 14:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:26.652 14:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.652 14:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:26.652 14:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.652 14:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.652 14:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.652 14:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:26.652 { 00:22:26.652 "cntlid": 135, 00:22:26.652 "qid": 0, 00:22:26.652 "state": "enabled", 00:22:26.652 "thread": "nvmf_tgt_poll_group_000", 00:22:26.652 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:26.652 "listen_address": { 00:22:26.652 "trtype": "TCP", 00:22:26.652 "adrfam": "IPv4", 00:22:26.652 "traddr": "10.0.0.2", 00:22:26.652 "trsvcid": "4420" 00:22:26.652 }, 00:22:26.652 "peer_address": { 00:22:26.652 "trtype": "TCP", 00:22:26.652 "adrfam": "IPv4", 00:22:26.652 "traddr": "10.0.0.1", 00:22:26.652 "trsvcid": "58708" 00:22:26.652 }, 00:22:26.652 "auth": { 00:22:26.652 "state": "completed", 00:22:26.652 "digest": "sha512", 00:22:26.652 "dhgroup": "ffdhe6144" 00:22:26.652 } 00:22:26.652 } 00:22:26.652 ]' 00:22:26.652 14:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:26.910 14:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:26.910 14:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:26.910 14:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:26.910 14:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:26.910 14:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:26.910 14:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:26.910 14:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:27.168 14:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NmI0YTZiMjg0Mzc2NTY5ZjIyODQ2OWVmYWRhOWZmYmI5Mjg1NWUwZGFlN2NhNWJhOTNkNWRjOTZlMTk4OTQ3MYxiaOU=: 00:22:27.168 14:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NmI0YTZiMjg0Mzc2NTY5ZjIyODQ2OWVmYWRhOWZmYmI5Mjg1NWUwZGFlN2NhNWJhOTNkNWRjOTZlMTk4OTQ3MYxiaOU=: 00:22:28.101 14:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:28.101 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:28.101 14:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:28.101 14:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.101 14:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.101 14:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.101 14:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:28.101 14:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:28.101 14:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:28.101 14:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:28.359 14:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:22:28.359 14:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:28.359 14:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:28.359 14:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:28.359 14:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:28.359 14:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:28.359 14:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:28.359 14:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.359 14:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.359 14:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.359 14:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:28.359 14:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:28.359 14:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:29.299 00:22:29.299 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:29.299 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:29.299 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:29.557 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:29.557 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:29.557 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.557 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.557 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.557 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:29.557 { 00:22:29.557 "cntlid": 137, 00:22:29.557 "qid": 0, 00:22:29.557 "state": "enabled", 00:22:29.557 "thread": "nvmf_tgt_poll_group_000", 00:22:29.557 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:29.557 "listen_address": { 00:22:29.557 "trtype": "TCP", 00:22:29.557 "adrfam": "IPv4", 00:22:29.557 "traddr": "10.0.0.2", 00:22:29.557 "trsvcid": "4420" 00:22:29.557 }, 00:22:29.557 "peer_address": { 00:22:29.557 "trtype": "TCP", 00:22:29.557 "adrfam": "IPv4", 00:22:29.557 "traddr": "10.0.0.1", 00:22:29.557 "trsvcid": "58730" 00:22:29.557 }, 00:22:29.557 "auth": { 00:22:29.557 "state": "completed", 00:22:29.557 "digest": "sha512", 00:22:29.557 "dhgroup": "ffdhe8192" 00:22:29.557 } 00:22:29.557 } 00:22:29.557 ]' 00:22:29.557 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:29.557 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:29.557 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:29.557 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:29.557 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:29.557 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:29.557 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:29.557 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:29.815 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGY4ZjExYzE4MjdjZjZkMWUyYzNlYTNkMmY2YjllODZhYjI1ZWQwNTgzY2M1MGM1q3J9vw==: --dhchap-ctrl-secret DHHC-1:03:ODY0ZTQwMjhlM2RmNTk4MDU4NTVjMTFmMmQ1NzIyOWFjMTVjZGE1YjQ4NzI0NGJmNTIxZTIyM2I1MzI4YmM2YlhOMuo=: 00:22:29.815 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZGY4ZjExYzE4MjdjZjZkMWUyYzNlYTNkMmY2YjllODZhYjI1ZWQwNTgzY2M1MGM1q3J9vw==: --dhchap-ctrl-secret DHHC-1:03:ODY0ZTQwMjhlM2RmNTk4MDU4NTVjMTFmMmQ1NzIyOWFjMTVjZGE1YjQ4NzI0NGJmNTIxZTIyM2I1MzI4YmM2YlhOMuo=: 00:22:31.188 14:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:31.188 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:31.188 14:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:31.188 14:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.188 14:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.188 14:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.188 14:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:31.188 14:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:31.188 14:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:31.188 14:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:22:31.188 14:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:31.188 14:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:31.188 14:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:31.188 14:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:31.188 14:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:31.188 14:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:31.188 14:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.188 14:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.188 14:38:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.188 14:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:31.188 14:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:31.188 14:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:32.122 00:22:32.122 14:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:32.122 14:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:32.122 14:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:32.380 14:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.380 14:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:32.380 14:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.380 14:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.380 14:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.380 14:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:32.380 { 00:22:32.380 "cntlid": 139, 00:22:32.380 "qid": 0, 00:22:32.380 "state": "enabled", 00:22:32.380 "thread": "nvmf_tgt_poll_group_000", 00:22:32.380 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:32.380 "listen_address": { 00:22:32.380 "trtype": "TCP", 00:22:32.380 "adrfam": "IPv4", 00:22:32.380 "traddr": "10.0.0.2", 00:22:32.380 "trsvcid": "4420" 00:22:32.380 }, 00:22:32.380 "peer_address": { 00:22:32.380 "trtype": "TCP", 00:22:32.380 "adrfam": "IPv4", 00:22:32.380 "traddr": "10.0.0.1", 00:22:32.380 "trsvcid": "58746" 00:22:32.380 }, 00:22:32.380 "auth": { 00:22:32.380 "state": "completed", 00:22:32.380 "digest": "sha512", 00:22:32.380 "dhgroup": "ffdhe8192" 00:22:32.380 } 00:22:32.380 } 00:22:32.380 ]' 00:22:32.380 14:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:32.380 14:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:32.380 14:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:32.380 14:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:32.380 14:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:32.638 14:38:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:32.638 14:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:32.638 14:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:32.897 14:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzZhYzU5MDA2MTFiMWQyMzk1MDAxNmZmMzVkZjA3ZjHc6Dc9: --dhchap-ctrl-secret DHHC-1:02:YmNlZTU4NDE3MWRiNGJlMDMzYjRlZjUxZjg4MGZlZTE2NzM3ZWQ1OTI4ODgwNDFiDtozkg==: 00:22:32.897 14:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NzZhYzU5MDA2MTFiMWQyMzk1MDAxNmZmMzVkZjA3ZjHc6Dc9: --dhchap-ctrl-secret DHHC-1:02:YmNlZTU4NDE3MWRiNGJlMDMzYjRlZjUxZjg4MGZlZTE2NzM3ZWQ1OTI4ODgwNDFiDtozkg==: 00:22:33.835 14:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:33.835 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:33.835 14:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:33.835 14:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.835 14:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.835 14:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.835 14:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:33.835 14:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:33.835 14:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:34.094 14:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:22:34.094 14:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:34.094 14:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:34.094 14:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:34.094 14:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:34.094 14:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:34.094 14:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:34.094 14:38:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.094 14:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.094 14:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.094 14:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:34.094 14:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:34.094 14:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:35.033 00:22:35.033 14:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:35.033 14:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:35.033 14:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:35.291 14:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:35.291 14:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:35.291 14:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.291 14:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.291 14:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.291 14:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:35.291 { 00:22:35.291 "cntlid": 141, 00:22:35.291 "qid": 0, 00:22:35.291 "state": "enabled", 00:22:35.291 "thread": "nvmf_tgt_poll_group_000", 00:22:35.291 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:35.291 "listen_address": { 00:22:35.291 "trtype": "TCP", 00:22:35.291 "adrfam": "IPv4", 00:22:35.291 "traddr": "10.0.0.2", 00:22:35.291 "trsvcid": "4420" 00:22:35.291 }, 00:22:35.291 "peer_address": { 00:22:35.291 "trtype": "TCP", 00:22:35.291 "adrfam": "IPv4", 00:22:35.291 "traddr": "10.0.0.1", 00:22:35.291 "trsvcid": "39178" 00:22:35.291 }, 00:22:35.291 "auth": { 00:22:35.291 "state": "completed", 00:22:35.291 "digest": "sha512", 00:22:35.291 "dhgroup": "ffdhe8192" 00:22:35.291 } 00:22:35.291 } 00:22:35.291 ]' 00:22:35.291 14:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:35.291 14:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:35.291 14:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:35.291 14:38:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:35.291 14:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:35.291 14:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:35.291 14:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:35.291 14:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:35.860 14:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmQ0YzhiNTA0N2FiMjk2OGQ1MzZlN2ZkNmViNGI3ZWRjNjJhYjA1MDcwYTRmMzIyyBZewQ==: --dhchap-ctrl-secret DHHC-1:01:Zjc2Y2UyNTBmYTNiMGRmNzEzZDQwYWEzMjAzMDcwYmbdJk/J: 00:22:35.860 14:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NmQ0YzhiNTA0N2FiMjk2OGQ1MzZlN2ZkNmViNGI3ZWRjNjJhYjA1MDcwYTRmMzIyyBZewQ==: --dhchap-ctrl-secret DHHC-1:01:Zjc2Y2UyNTBmYTNiMGRmNzEzZDQwYWEzMjAzMDcwYmbdJk/J: 00:22:36.797 14:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:36.798 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:36.798 14:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:36.798 14:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.798 14:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.798 14:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.798 14:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:36.798 14:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:36.798 14:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:37.056 14:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:22:37.056 14:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:37.056 14:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:37.056 14:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:37.056 14:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:37.056 14:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:37.056 14:38:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:37.056 14:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.056 14:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.056 14:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.056 14:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:37.056 14:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:37.056 14:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:37.995 00:22:37.995 14:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:37.995 14:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:37.995 14:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:38.253 14:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:38.253 14:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:38.253 14:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.253 14:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.253 14:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.253 14:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:38.253 { 00:22:38.253 "cntlid": 143, 00:22:38.253 "qid": 0, 00:22:38.253 "state": "enabled", 00:22:38.253 "thread": "nvmf_tgt_poll_group_000", 00:22:38.253 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:38.253 "listen_address": { 00:22:38.253 "trtype": "TCP", 00:22:38.253 "adrfam": "IPv4", 00:22:38.253 "traddr": "10.0.0.2", 00:22:38.253 "trsvcid": "4420" 00:22:38.253 }, 00:22:38.253 "peer_address": { 00:22:38.253 "trtype": "TCP", 00:22:38.253 "adrfam": "IPv4", 00:22:38.253 "traddr": "10.0.0.1", 00:22:38.253 "trsvcid": "39194" 00:22:38.253 }, 00:22:38.253 "auth": { 00:22:38.253 "state": "completed", 00:22:38.253 "digest": "sha512", 00:22:38.253 "dhgroup": "ffdhe8192" 00:22:38.253 } 00:22:38.253 } 00:22:38.253 ]' 00:22:38.253 14:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:38.253 14:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:38.253 
14:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:38.253 14:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:38.253 14:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:38.253 14:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:38.253 14:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:38.253 14:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:38.511 14:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmI0YTZiMjg0Mzc2NTY5ZjIyODQ2OWVmYWRhOWZmYmI5Mjg1NWUwZGFlN2NhNWJhOTNkNWRjOTZlMTk4OTQ3MYxiaOU=: 00:22:38.511 14:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NmI0YTZiMjg0Mzc2NTY5ZjIyODQ2OWVmYWRhOWZmYmI5Mjg1NWUwZGFlN2NhNWJhOTNkNWRjOTZlMTk4OTQ3MYxiaOU=: 00:22:39.448 14:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:39.706 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:39.706 14:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:39.706 14:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.706 14:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.706 14:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.706 14:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:39.706 14:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:22:39.706 14:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:39.706 14:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:39.706 14:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:39.706 14:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:39.964 14:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:22:39.964 14:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:39.964 14:38:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:39.964 14:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:39.964 14:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:39.964 14:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:39.964 14:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:39.964 14:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.964 14:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.964 14:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.965 14:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:39.965 14:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:39.965 14:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:40.903 00:22:40.903 14:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:40.903 14:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:40.903 14:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:41.161 14:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:41.161 14:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:41.161 14:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.161 14:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.161 14:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.161 14:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:41.161 { 00:22:41.161 "cntlid": 145, 00:22:41.161 "qid": 0, 00:22:41.161 "state": "enabled", 00:22:41.161 "thread": "nvmf_tgt_poll_group_000", 00:22:41.161 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:41.161 "listen_address": { 00:22:41.161 "trtype": "TCP", 00:22:41.161 "adrfam": "IPv4", 00:22:41.161 "traddr": "10.0.0.2", 00:22:41.161 "trsvcid": "4420" 00:22:41.161 }, 00:22:41.161 "peer_address": { 00:22:41.161 
"trtype": "TCP", 00:22:41.161 "adrfam": "IPv4", 00:22:41.161 "traddr": "10.0.0.1", 00:22:41.161 "trsvcid": "39232" 00:22:41.161 }, 00:22:41.161 "auth": { 00:22:41.161 "state": "completed", 00:22:41.161 "digest": "sha512", 00:22:41.161 "dhgroup": "ffdhe8192" 00:22:41.161 } 00:22:41.161 } 00:22:41.161 ]' 00:22:41.161 14:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:41.161 14:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:41.161 14:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:41.161 14:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:41.161 14:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:41.161 14:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:41.161 14:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:41.161 14:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:41.761 14:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGY4ZjExYzE4MjdjZjZkMWUyYzNlYTNkMmY2YjllODZhYjI1ZWQwNTgzY2M1MGM1q3J9vw==: --dhchap-ctrl-secret DHHC-1:03:ODY0ZTQwMjhlM2RmNTk4MDU4NTVjMTFmMmQ1NzIyOWFjMTVjZGE1YjQ4NzI0NGJmNTIxZTIyM2I1MzI4YmM2YlhOMuo=: 00:22:41.761 14:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZGY4ZjExYzE4MjdjZjZkMWUyYzNlYTNkMmY2YjllODZhYjI1ZWQwNTgzY2M1MGM1q3J9vw==: --dhchap-ctrl-secret DHHC-1:03:ODY0ZTQwMjhlM2RmNTk4MDU4NTVjMTFmMmQ1NzIyOWFjMTVjZGE1YjQ4NzI0NGJmNTIxZTIyM2I1MzI4YmM2YlhOMuo=: 00:22:42.697 14:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:42.697 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:42.697 14:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:42.697 14:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.697 14:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.697 14:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.697 14:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:42.697 14:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.697 14:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.697 14:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.697 14:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:22:42.697 14:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:42.697 14:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:22:42.697 14:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:42.697 14:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:42.697 14:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:42.697 14:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:42.697 14:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:22:42.697 14:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:42.697 14:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:43.633 request: 00:22:43.633 { 00:22:43.633 "name": "nvme0", 00:22:43.633 "trtype": "tcp", 00:22:43.633 "traddr": "10.0.0.2", 00:22:43.633 "adrfam": "ipv4", 00:22:43.633 "trsvcid": "4420", 00:22:43.633 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:43.633 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:43.633 "prchk_reftag": false, 00:22:43.633 "prchk_guard": false, 00:22:43.633 "hdgst": false, 00:22:43.633 "ddgst": false, 00:22:43.633 "dhchap_key": "key2", 00:22:43.633 "allow_unrecognized_csi": false, 00:22:43.633 "method": "bdev_nvme_attach_controller", 00:22:43.633 "req_id": 1 00:22:43.633 } 00:22:43.633 Got JSON-RPC error response 00:22:43.633 response: 00:22:43.633 { 00:22:43.633 "code": -5, 00:22:43.633 "message": "Input/output error" 00:22:43.633 } 00:22:43.633 14:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:43.633 14:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:43.633 14:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:43.633 14:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:43.633 14:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:43.633 14:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.633 14:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.633 14:38:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.633 14:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:43.633 14:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.633 14:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.633 14:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.633 14:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:43.633 14:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:43.633 14:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:43.633 14:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:43.633 14:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:43.633 14:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:43.633 14:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:43.633 14:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:43.633 14:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:43.633 14:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:44.575 request: 00:22:44.575 { 00:22:44.575 "name": "nvme0", 00:22:44.575 "trtype": "tcp", 00:22:44.575 "traddr": "10.0.0.2", 00:22:44.575 "adrfam": "ipv4", 00:22:44.575 "trsvcid": "4420", 00:22:44.575 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:44.575 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:44.575 "prchk_reftag": false, 00:22:44.575 "prchk_guard": false, 00:22:44.575 "hdgst": false, 00:22:44.575 "ddgst": false, 00:22:44.575 "dhchap_key": "key1", 00:22:44.575 "dhchap_ctrlr_key": "ckey2", 00:22:44.575 "allow_unrecognized_csi": false, 00:22:44.575 "method": "bdev_nvme_attach_controller", 00:22:44.575 "req_id": 1 00:22:44.575 } 00:22:44.575 Got JSON-RPC error response 00:22:44.575 response: 00:22:44.575 { 00:22:44.575 "code": -5, 00:22:44.575 "message": "Input/output error" 00:22:44.575 } 00:22:44.575 14:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:44.575 14:38:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:44.575 14:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:44.575 14:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:44.575 14:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:44.575 14:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.575 14:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.575 14:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.575 14:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:44.575 14:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.575 14:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.575 14:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.575 14:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:44.575 14:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:44.575 14:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:44.575 14:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:44.575 14:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:44.575 14:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:44.575 14:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:44.575 14:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:44.575 14:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:44.575 14:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:45.144 request: 00:22:45.144 { 00:22:45.144 "name": "nvme0", 00:22:45.144 "trtype": "tcp", 00:22:45.144 "traddr": "10.0.0.2", 00:22:45.144 "adrfam": "ipv4", 00:22:45.144 "trsvcid": "4420", 00:22:45.144 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:45.144 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:45.144 "prchk_reftag": false, 00:22:45.144 "prchk_guard": false, 00:22:45.144 "hdgst": false, 00:22:45.144 "ddgst": false, 00:22:45.144 "dhchap_key": "key1", 00:22:45.144 "dhchap_ctrlr_key": "ckey1", 00:22:45.144 "allow_unrecognized_csi": false, 00:22:45.144 "method": "bdev_nvme_attach_controller", 00:22:45.144 "req_id": 1 00:22:45.144 } 00:22:45.144 Got JSON-RPC error response 00:22:45.144 response: 00:22:45.144 { 00:22:45.144 "code": -5, 00:22:45.144 "message": "Input/output error" 00:22:45.144 } 00:22:45.144 14:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:45.144 14:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:45.144 14:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:45.144 14:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:45.144 14:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:45.144 14:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.144 14:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.404 14:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.404 14:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 1374972 00:22:45.404 14:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1374972 ']' 00:22:45.405 14:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1374972 00:22:45.405 14:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:22:45.405 14:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:45.405 14:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1374972 00:22:45.405 14:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:45.405 14:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:45.405 14:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1374972' 00:22:45.405 killing process with pid 1374972 00:22:45.405 14:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1374972 00:22:45.405 14:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1374972 00:22:45.664 14:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:45.664 14:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:22:45.664 14:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:45.664 14:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:45.664 14:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # nvmfpid=1398379 00:22:45.664 14:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:45.664 14:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # waitforlisten 1398379 00:22:45.664 14:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1398379 ']' 00:22:45.664 14:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:45.664 14:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:45.664 14:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:45.664 14:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:45.664 14:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.923 14:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:45.923 14:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:22:45.923 14:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:22:45.923 14:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:45.923 14:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.923 14:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:45.923 14:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:45.923 14:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 1398379 00:22:45.923 14:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1398379 ']' 00:22:45.923 14:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:45.923 14:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:45.923 14:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:45.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
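At this point the target has been relaunched with --wait-for-rpc -L nvmf_auth, and the next step in the trace is to reload the DH-HMAC-CHAP secrets through the keyring before re-adding the host. A condensed, target-side sketch of that setup, using only the key files and RPCs that appear verbatim in the trace below; the socket path is the one reported by waitforlisten, and the exact /tmp/spdk.key-* names are specific to this run:

```bash
#!/usr/bin/env bash
# Target-side RPC endpoint reported by waitforlisten in this run (assumption:
# the rpc_cmd helper in the trace forwards to this socket).
RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

# Register each generated secret with the keyring: key<N> is the host key,
# ckey<N> (when present) is the bidirectional controller key.
$RPC keyring_file_add_key key0  /tmp/spdk.key-null.6D5
$RPC keyring_file_add_key ckey0 /tmp/spdk.key-sha512.8Rn
$RPC keyring_file_add_key key1  /tmp/spdk.key-sha256.iQt
$RPC keyring_file_add_key ckey1 /tmp/spdk.key-sha384.gkB
$RPC keyring_file_add_key key2  /tmp/spdk.key-sha384.IbM
$RPC keyring_file_add_key ckey2 /tmp/spdk.key-sha256.J81
$RPC keyring_file_add_key key3  /tmp/spdk.key-sha512.JEp   # key3 has no ckey

# Re-authorize the host against cnode0 with key3 only (unidirectional).
$RPC nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --dhchap-key key3
```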
00:22:45.923 14:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:45.923 14:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.182 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:46.182 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:22:46.182 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:22:46.182 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.182 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.182 null0 00:22:46.182 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.182 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:46.182 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.6D5 00:22:46.183 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.183 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.183 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.183 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.8Rn ]] 00:22:46.183 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.8Rn 00:22:46.183 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.183 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.183 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.183 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:46.183 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.iQt 00:22:46.183 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.183 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.183 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.183 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.gkB ]] 00:22:46.183 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.gkB 00:22:46.183 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.183 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.183 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.183 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:46.183 14:38:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.IbM 00:22:46.183 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.183 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.441 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.441 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.J81 ]] 00:22:46.441 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.J81 00:22:46.441 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.441 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.441 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.441 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:46.441 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.JEp 00:22:46.441 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.442 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.442 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.442 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:22:46.442 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:22:46.442 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:46.442 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:46.442 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:46.442 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:46.442 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:46.442 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:46.442 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.442 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.442 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.442 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:46.442 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
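The host-side half of the same check runs against the bdev_nvme RPC server on /var/tmp/host.sock: first a successful attach with key3, then, further down in the trace, the host's allowed digests are restricted to sha256 and the same attach is expected to fail with the -5 "Input/output error" shown in the JSON-RPC dump. A minimal sketch of those host-side calls only, with the hostrpc wrapper expanded to the rpc.py invocation used throughout this run (intermediate remove_host/add_host steps from the trace are omitted):

```bash
#!/usr/bin/env bash
# Host-side RPC endpoint used by the hostrpc wrapper in target/auth.sh.
HOSTRPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock"
HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55"

# Positive path: attach with key3, which the target accepts for cnode0,
# then detach again.
$HOSTRPC bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
$HOSTRPC bdev_nvme_detach_controller nvme0

# Negative path: with the host limited to sha256 digests the test expects
# the attach to fail; the trace records the -5 "Input/output error" reply.
$HOSTRPC bdev_nvme_set_options --dhchap-digests sha256
$HOSTRPC bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
```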
00:22:46.442 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:47.821 nvme0n1 00:22:47.821 14:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:47.821 14:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:47.821 14:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:48.079 14:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:48.079 14:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:48.079 14:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.079 14:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.079 14:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.079 14:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:48.079 { 00:22:48.079 "cntlid": 1, 00:22:48.079 "qid": 0, 00:22:48.079 "state": "enabled", 00:22:48.079 "thread": "nvmf_tgt_poll_group_000", 00:22:48.079 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:48.079 "listen_address": { 00:22:48.079 "trtype": "TCP", 00:22:48.079 "adrfam": "IPv4", 00:22:48.079 "traddr": "10.0.0.2", 00:22:48.079 "trsvcid": "4420" 00:22:48.079 }, 00:22:48.079 "peer_address": { 00:22:48.079 "trtype": "TCP", 00:22:48.079 "adrfam": "IPv4", 00:22:48.079 "traddr": "10.0.0.1", 00:22:48.079 "trsvcid": "50150" 00:22:48.079 }, 00:22:48.079 "auth": { 00:22:48.079 "state": "completed", 00:22:48.079 "digest": "sha512", 00:22:48.079 "dhgroup": "ffdhe8192" 00:22:48.079 } 00:22:48.079 } 00:22:48.079 ]' 00:22:48.079 14:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:48.079 14:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:48.079 14:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:48.337 14:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:48.337 14:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:48.337 14:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:48.337 14:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:48.337 14:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:48.595 14:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NmI0YTZiMjg0Mzc2NTY5ZjIyODQ2OWVmYWRhOWZmYmI5Mjg1NWUwZGFlN2NhNWJhOTNkNWRjOTZlMTk4OTQ3MYxiaOU=: 00:22:48.595 14:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NmI0YTZiMjg0Mzc2NTY5ZjIyODQ2OWVmYWRhOWZmYmI5Mjg1NWUwZGFlN2NhNWJhOTNkNWRjOTZlMTk4OTQ3MYxiaOU=: 00:22:49.532 14:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:49.532 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:49.532 14:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:49.532 14:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.532 14:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.532 14:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.533 14:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:49.533 14:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.533 14:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.533 14:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.533 14:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:49.533 14:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:49.791 14:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:49.791 14:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:49.791 14:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:49.791 14:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:49.791 14:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:49.791 14:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:49.791 14:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:49.791 14:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:49.791 14:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:49.791 14:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:50.047 request: 00:22:50.047 { 00:22:50.047 "name": "nvme0", 00:22:50.047 "trtype": "tcp", 00:22:50.047 "traddr": "10.0.0.2", 00:22:50.047 "adrfam": "ipv4", 00:22:50.047 "trsvcid": "4420", 00:22:50.047 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:50.047 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:50.047 "prchk_reftag": false, 00:22:50.047 "prchk_guard": false, 00:22:50.047 "hdgst": false, 00:22:50.047 "ddgst": false, 00:22:50.047 "dhchap_key": "key3", 00:22:50.047 "allow_unrecognized_csi": false, 00:22:50.047 "method": "bdev_nvme_attach_controller", 00:22:50.047 "req_id": 1 00:22:50.047 } 00:22:50.047 Got JSON-RPC error response 00:22:50.047 response: 00:22:50.047 { 00:22:50.047 "code": -5, 00:22:50.047 "message": "Input/output error" 00:22:50.047 } 00:22:50.047 14:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:50.047 14:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:50.047 14:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:50.047 14:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:50.048 14:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:22:50.048 14:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:22:50.048 14:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:50.048 14:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:50.306 14:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:50.306 14:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:50.306 14:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:50.563 14:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:50.563 14:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:50.563 14:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:50.563 14:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:50.563 14:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:50.563 14:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:50.563 14:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:50.821 request: 00:22:50.821 { 00:22:50.821 "name": "nvme0", 00:22:50.821 "trtype": "tcp", 00:22:50.821 "traddr": "10.0.0.2", 00:22:50.821 "adrfam": "ipv4", 00:22:50.821 "trsvcid": "4420", 00:22:50.821 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:50.821 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:50.821 "prchk_reftag": false, 00:22:50.821 "prchk_guard": false, 00:22:50.821 "hdgst": false, 00:22:50.821 "ddgst": false, 00:22:50.821 "dhchap_key": "key3", 00:22:50.821 "allow_unrecognized_csi": false, 00:22:50.821 "method": "bdev_nvme_attach_controller", 00:22:50.821 "req_id": 1 00:22:50.821 } 00:22:50.821 Got JSON-RPC error response 00:22:50.821 response: 00:22:50.821 { 00:22:50.821 "code": -5, 00:22:50.821 "message": "Input/output error" 00:22:50.821 } 00:22:50.822 14:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:50.822 14:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:50.822 14:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:50.822 14:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:50.822 14:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:50.822 14:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:22:50.822 14:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:50.822 14:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:50.822 14:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:50.822 14:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:51.080 14:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:51.080 14:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.080 14:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.080 14:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.080 14:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:51.080 14:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.080 14:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.080 14:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.080 14:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:51.080 14:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:51.080 14:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:51.080 14:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:51.080 14:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:51.080 14:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:51.080 14:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:51.080 14:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:51.080 14:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:51.080 14:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:51.647 request: 00:22:51.647 { 00:22:51.647 "name": "nvme0", 00:22:51.647 "trtype": "tcp", 00:22:51.647 "traddr": "10.0.0.2", 00:22:51.647 "adrfam": "ipv4", 00:22:51.647 "trsvcid": "4420", 00:22:51.647 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:51.647 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:51.647 "prchk_reftag": false, 00:22:51.647 "prchk_guard": false, 00:22:51.647 "hdgst": false, 00:22:51.647 "ddgst": false, 00:22:51.647 "dhchap_key": "key0", 00:22:51.647 "dhchap_ctrlr_key": "key1", 00:22:51.647 "allow_unrecognized_csi": false, 00:22:51.647 "method": "bdev_nvme_attach_controller", 00:22:51.647 "req_id": 1 00:22:51.647 } 00:22:51.647 Got JSON-RPC error response 00:22:51.648 response: 00:22:51.648 { 00:22:51.648 "code": -5, 00:22:51.648 "message": "Input/output error" 00:22:51.648 } 00:22:51.648 14:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:51.648 14:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:51.648 14:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:51.648 14:38:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:51.648 14:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:22:51.648 14:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:51.648 14:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:52.216 nvme0n1 00:22:52.216 14:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:22:52.216 14:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:22:52.216 14:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:52.475 14:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:52.475 14:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:52.475 14:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:52.733 14:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:52.733 14:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.733 14:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.733 14:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.733 14:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:52.733 14:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:52.733 14:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:54.110 nvme0n1 00:22:54.110 14:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:22:54.110 14:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:22:54.110 14:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:54.368 14:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:54.368 14:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:54.368 14:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.368 14:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.368 14:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.368 14:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:22:54.368 14:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:22:54.368 14:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:54.626 14:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:54.626 14:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NmQ0YzhiNTA0N2FiMjk2OGQ1MzZlN2ZkNmViNGI3ZWRjNjJhYjA1MDcwYTRmMzIyyBZewQ==: --dhchap-ctrl-secret DHHC-1:03:NmI0YTZiMjg0Mzc2NTY5ZjIyODQ2OWVmYWRhOWZmYmI5Mjg1NWUwZGFlN2NhNWJhOTNkNWRjOTZlMTk4OTQ3MYxiaOU=: 00:22:54.626 14:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NmQ0YzhiNTA0N2FiMjk2OGQ1MzZlN2ZkNmViNGI3ZWRjNjJhYjA1MDcwYTRmMzIyyBZewQ==: --dhchap-ctrl-secret DHHC-1:03:NmI0YTZiMjg0Mzc2NTY5ZjIyODQ2OWVmYWRhOWZmYmI5Mjg1NWUwZGFlN2NhNWJhOTNkNWRjOTZlMTk4OTQ3MYxiaOU=: 00:22:55.562 14:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:22:55.563 14:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:22:55.563 14:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:22:55.563 14:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:22:55.563 14:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:22:55.563 14:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:22:55.563 14:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:22:55.563 14:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:55.563 14:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:55.821 14:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:22:55.821 14:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:55.821 14:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:22:55.821 14:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:55.821 14:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:55.821 14:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:55.821 14:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:55.821 14:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:55.821 14:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:55.821 14:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:56.754 request: 00:22:56.754 { 00:22:56.754 "name": "nvme0", 00:22:56.754 "trtype": "tcp", 00:22:56.754 "traddr": "10.0.0.2", 00:22:56.754 "adrfam": "ipv4", 00:22:56.754 "trsvcid": "4420", 00:22:56.754 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:56.754 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:56.754 "prchk_reftag": false, 00:22:56.754 "prchk_guard": false, 00:22:56.754 "hdgst": false, 00:22:56.754 "ddgst": false, 00:22:56.754 "dhchap_key": "key1", 00:22:56.754 "allow_unrecognized_csi": false, 00:22:56.754 "method": "bdev_nvme_attach_controller", 00:22:56.754 "req_id": 1 00:22:56.754 } 00:22:56.754 Got JSON-RPC error response 00:22:56.754 response: 00:22:56.754 { 00:22:56.754 "code": -5, 00:22:56.754 "message": "Input/output error" 00:22:56.754 } 00:22:56.754 14:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:56.754 14:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:56.754 14:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:56.754 14:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:56.754 14:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:56.754 14:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:56.754 14:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:58.135 nvme0n1 00:22:58.395 14:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:22:58.395 14:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:22:58.395 14:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:58.653 14:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:58.653 14:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:58.653 14:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:58.912 14:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:58.912 14:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.912 14:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.912 14:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.912 14:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:22:58.912 14:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:58.912 14:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:59.170 nvme0n1 00:22:59.170 14:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:22:59.170 14:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:22:59.170 14:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:59.428 14:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:59.428 14:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:59.428 14:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:59.686 14:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:59.686 14:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.686 14:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.945 14:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.945 14:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:NzZhYzU5MDA2MTFiMWQyMzk1MDAxNmZmMzVkZjA3ZjHc6Dc9: '' 2s 00:22:59.945 14:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:59.945 14:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:59.945 14:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:NzZhYzU5MDA2MTFiMWQyMzk1MDAxNmZmMzVkZjA3ZjHc6Dc9: 00:22:59.945 14:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:22:59.945 14:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:59.945 14:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:59.945 14:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:NzZhYzU5MDA2MTFiMWQyMzk1MDAxNmZmMzVkZjA3ZjHc6Dc9: ]] 00:22:59.945 14:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:NzZhYzU5MDA2MTFiMWQyMzk1MDAxNmZmMzVkZjA3ZjHc6Dc9: 00:22:59.945 14:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:22:59.945 14:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:59.945 14:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:23:01.851 14:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:23:01.851 14:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:23:01.851 14:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:23:01.851 14:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:23:01.851 14:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:23:01.851 14:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:23:01.851 14:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:23:01.851 14:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key2 00:23:01.851 14:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.851 14:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.851 14:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.851 14:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:NmQ0YzhiNTA0N2FiMjk2OGQ1MzZlN2ZkNmViNGI3ZWRjNjJhYjA1MDcwYTRmMzIyyBZewQ==: 2s 00:23:01.851 14:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:23:01.851 14:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:23:01.851 14:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:23:01.851 14:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NmQ0YzhiNTA0N2FiMjk2OGQ1MzZlN2ZkNmViNGI3ZWRjNjJhYjA1MDcwYTRmMzIyyBZewQ==: 00:23:01.851 14:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:23:01.851 14:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:23:01.851 14:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:23:01.851 14:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NmQ0YzhiNTA0N2FiMjk2OGQ1MzZlN2ZkNmViNGI3ZWRjNjJhYjA1MDcwYTRmMzIyyBZewQ==: ]] 00:23:01.851 14:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NmQ0YzhiNTA0N2FiMjk2OGQ1MzZlN2ZkNmViNGI3ZWRjNjJhYjA1MDcwYTRmMzIyyBZewQ==: 00:23:01.851 14:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:23:01.851 14:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:23:03.755 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:23:03.755 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:23:03.755 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:23:03.755 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:23:03.755 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:23:03.755 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:23:03.755 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:23:03.755 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:04.013 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:04.013 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:04.013 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.013 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.013 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.013 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:04.013 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:04.013 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:05.392 nvme0n1 00:23:05.392 14:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:05.392 14:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.392 14:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.392 14:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.392 14:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:05.392 14:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:06.330 14:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:23:06.330 14:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:23:06.330 14:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:06.588 14:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:06.588 14:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:06.588 14:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.588 14:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.588 14:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.588 14:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:23:06.588 14:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:23:06.846 14:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:23:06.846 14:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:23:06.846 14:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:23:07.104 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:07.104 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:07.104 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.104 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.104 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.104 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:07.104 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:23:07.104 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:07.104 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:23:07.104 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:07.104 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:23:07.104 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:07.104 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:07.104 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:08.041 request: 00:23:08.041 { 00:23:08.041 "name": "nvme0", 00:23:08.041 "dhchap_key": "key1", 00:23:08.041 "dhchap_ctrlr_key": "key3", 00:23:08.041 "method": "bdev_nvme_set_keys", 00:23:08.041 "req_id": 1 00:23:08.041 } 00:23:08.041 Got JSON-RPC error response 00:23:08.041 response: 00:23:08.041 { 00:23:08.041 "code": -13, 00:23:08.041 "message": "Permission denied" 00:23:08.041 } 00:23:08.041 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:23:08.041 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:08.041 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:08.041 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:08.041 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:23:08.041 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:08.041 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:23:08.298 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:23:08.299 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:23:09.295 14:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:23:09.295 14:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:23:09.295 14:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:09.554 14:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:23:09.554 14:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:09.554 14:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.554 14:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.554 14:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.554 14:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:09.554 14:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:09.554 14:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:11.458 nvme0n1 00:23:11.458 14:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:11.458 14:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.458 14:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.458 14:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.458 14:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:11.458 14:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:23:11.458 14:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:11.458 14:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 
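The back-and-forth traced in this stretch is the re-keying check: the target's host entry is updated with nvmf_subsystem_set_keys, the live host controller is re-keyed with bdev_nvme_set_keys, and deliberately mismatched combinations are expected to fail with JSON-RPC error -13 ("Permission denied"). Stripped of the xtrace plumbing, one round looks roughly like the sketch below; nvme0 and the key slots are the ones this run happens to use.

  # target: rotate the keys associated with this host NQN
  scripts/rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --dhchap-key key2 --dhchap-ctrlr-key key3

  # host: re-key the attached controller to match; a mismatched pair (e.g. key1/key3,
  # as tried earlier in the trace) is rejected and surfaces as code -13, "Permission denied"
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
      --dhchap-key key2 --dhchap-ctrlr-key key3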
00:23:11.458 14:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:11.458 14:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:23:11.458 14:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:11.458 14:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:11.458 14:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:12.025 request: 00:23:12.025 { 00:23:12.025 "name": "nvme0", 00:23:12.025 "dhchap_key": "key2", 00:23:12.025 "dhchap_ctrlr_key": "key0", 00:23:12.025 "method": "bdev_nvme_set_keys", 00:23:12.025 "req_id": 1 00:23:12.025 } 00:23:12.025 Got JSON-RPC error response 00:23:12.025 response: 00:23:12.025 { 00:23:12.025 "code": -13, 00:23:12.025 "message": "Permission denied" 00:23:12.025 } 00:23:12.025 14:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:23:12.025 14:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:12.025 14:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:12.025 14:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:12.025 14:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:23:12.025 14:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:12.025 14:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:23:12.284 14:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:23:12.284 14:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:23:13.223 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:23:13.223 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:23:13.223 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:13.792 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:23:13.792 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:23:13.792 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:23:13.792 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1375078 00:23:13.792 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1375078 ']' 00:23:13.792 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1375078 00:23:13.792 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:23:13.792 
14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:13.792 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1375078 00:23:13.792 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:13.792 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:13.792 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1375078' 00:23:13.792 killing process with pid 1375078 00:23:13.792 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1375078 00:23:13.792 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1375078 00:23:14.051 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:23:14.051 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:23:14.051 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:23:14.051 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:14.051 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:23:14.051 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:14.051 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:14.051 rmmod nvme_tcp 00:23:14.051 rmmod nvme_fabrics 00:23:14.051 rmmod nvme_keyring 00:23:14.051 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:14.051 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:23:14.051 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:23:14.051 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@513 -- # '[' -n 1398379 ']' 00:23:14.051 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # killprocess 1398379 00:23:14.051 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1398379 ']' 00:23:14.051 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1398379 00:23:14.051 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:23:14.051 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:14.051 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1398379 00:23:14.311 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:14.311 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:14.311 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1398379' 00:23:14.311 killing process with pid 1398379 00:23:14.311 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1398379 00:23:14.311 14:39:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1398379 00:23:14.311 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:23:14.311 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:23:14.311 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:23:14.311 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:23:14.311 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # iptables-save 00:23:14.311 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:23:14.311 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # iptables-restore 00:23:14.571 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:14.571 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:14.571 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:14.571 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:14.571 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:16.476 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:16.476 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.6D5 /tmp/spdk.key-sha256.iQt /tmp/spdk.key-sha384.IbM /tmp/spdk.key-sha512.JEp /tmp/spdk.key-sha512.8Rn /tmp/spdk.key-sha384.gkB /tmp/spdk.key-sha256.J81 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:23:16.476 00:23:16.476 real 3m40.951s 00:23:16.476 user 8m36.760s 00:23:16.476 sys 0m27.708s 00:23:16.476 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:16.476 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:16.476 ************************************ 00:23:16.476 END TEST nvmf_auth_target 00:23:16.476 ************************************ 00:23:16.476 14:39:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:23:16.476 14:39:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:16.476 14:39:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:23:16.476 14:39:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:16.476 14:39:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:16.476 ************************************ 00:23:16.476 START TEST nvmf_bdevio_no_huge 00:23:16.476 ************************************ 00:23:16.476 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:16.476 * Looking for test storage... 
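Before the bdevio_no_huge run gets going, the nvmf_auth_target cleanup just above reduces to a handful of commands once the xtrace noise is removed. This is a condensed sketch rather than the literal cleanup body: the pids, the cvl_0_1 interface name and the key files are specific to this run, and the rm uses a glob where the log lists the generated files explicitly.

  kill 1375078                 # host-side SPDK app (reactor_1) started for the auth test
  kill 1398379                 # nvmf target (reactor_0)
  modprobe -v -r nvme-tcp      # unload the kernel initiator modules pulled in by 'nvme connect'
  modprobe -v -r nvme-fabrics
  ip -4 addr flush cvl_0_1     # drop the test address from the second test interface
  rm -f /tmp/spdk.key-*        # remove the generated DH-HMAC-CHAP secret files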
00:23:16.476 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:16.476 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:16.476 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lcov --version 00:23:16.476 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:16.734 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:16.734 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:16.734 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:16.734 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:16.734 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:23:16.734 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:23:16.735 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:23:16.735 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:23:16.735 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:23:16.735 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:23:16.735 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:23:16.735 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:16.735 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:23:16.735 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:23:16.735 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:16.735 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:16.735 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:23:16.735 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:23:16.735 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:16.735 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:23:16.735 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:23:16.735 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:23:16.735 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:23:16.735 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:16.735 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:23:16.735 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:23:16.735 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:16.735 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:16.735 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:23:16.735 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:16.735 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:16.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:16.735 --rc genhtml_branch_coverage=1 00:23:16.735 --rc genhtml_function_coverage=1 00:23:16.735 --rc genhtml_legend=1 00:23:16.735 --rc geninfo_all_blocks=1 00:23:16.735 --rc geninfo_unexecuted_blocks=1 00:23:16.735 00:23:16.735 ' 00:23:16.735 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:16.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:16.735 --rc genhtml_branch_coverage=1 00:23:16.735 --rc genhtml_function_coverage=1 00:23:16.735 --rc genhtml_legend=1 00:23:16.735 --rc geninfo_all_blocks=1 00:23:16.735 --rc geninfo_unexecuted_blocks=1 00:23:16.735 00:23:16.735 ' 00:23:16.735 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:16.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:16.735 --rc genhtml_branch_coverage=1 00:23:16.735 --rc genhtml_function_coverage=1 00:23:16.735 --rc genhtml_legend=1 00:23:16.735 --rc geninfo_all_blocks=1 00:23:16.735 --rc geninfo_unexecuted_blocks=1 00:23:16.735 00:23:16.735 ' 00:23:16.735 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:16.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:16.735 --rc genhtml_branch_coverage=1 00:23:16.735 --rc genhtml_function_coverage=1 00:23:16.735 --rc genhtml_legend=1 00:23:16.735 --rc geninfo_all_blocks=1 00:23:16.735 --rc geninfo_unexecuted_blocks=1 00:23:16.735 00:23:16.735 ' 00:23:16.735 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:16.735 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:23:16.735 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:16.735 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:16.735 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:16.735 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:16.735 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:16.735 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:16.735 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:16.735 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:16.735 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:16.735 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:16.735 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:16.735 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:16.735 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:16.735 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:16.735 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:16.735 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:16.735 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:16.735 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:23:16.735 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:16.735 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:16.735 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:16.735 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.735 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.735 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.735 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:23:16.735 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.735 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:23:16.735 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:16.735 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:16.735 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:16.735 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:16.735 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:16.735 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:23:16.735 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:16.735 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:16.735 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:16.735 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:16.735 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:16.735 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:16.735 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:23:16.735 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:23:16.735 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:16.735 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # prepare_net_devs 00:23:16.735 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@434 -- # local -g is_hw=no 00:23:16.736 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # remove_spdk_ns 00:23:16.736 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:16.736 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:16.736 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:16.736 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:23:16.736 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:23:16.736 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:23:16.736 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:18.638 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:18.638 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:23:18.638 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:18.638 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:18.638 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:18.638 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:18.638 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:18.638 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:23:18.638 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:18.638 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:23:18.638 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:23:18.638 
14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:23:18.638 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:23:18.638 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:23:18.638 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:23:18.638 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:18.638 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:18.638 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:18.638 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:18.638 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:18.638 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:18.638 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:18.638 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:18.638 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:18.638 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:18.638 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:18.638 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:23:18.638 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:23:18.638 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:23:18.639 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:23:18.639 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:23:18.639 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:23:18.639 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:23:18.639 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:18.639 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:18.639 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:23:18.639 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:23:18.639 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:18.639 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:18.639 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 
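gather_supported_nvmf_pci_devs, traced above, classifies NICs purely by PCI vendor:device ID: the e810/x722/mlx arrays are filled from a pci_bus_cache keyed on the Intel (0x8086) and Mellanox (0x15b3) IDs, and on this host the matches are the two Intel E810 ports at 0000:0a:00.0 and 0000:0a:00.1 (0x8086:0x159b, bound to the ice driver). A stand-alone sysfs sketch of the same lookup (not the common.sh code, which works from its cached arrays):

    # find net devices backed by an Intel E810 function (vendor 0x8086, device 0x159b)
    for pci in /sys/bus/pci/devices/*; do
        [ "$(cat "$pci/vendor" 2>/dev/null)" = 0x8086 ] || continue
        [ "$(cat "$pci/device" 2>/dev/null)" = 0x159b ] || continue
        echo "${pci##*/}: $(ls "$pci/net" 2>/dev/null)"       # e.g. 0000:0a:00.0: cvl_0_0
    done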
00:23:18.639 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:23:18.639 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:18.639 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:18.639 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:23:18.639 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:23:18.639 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:18.639 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:18.639 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:23:18.639 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:23:18.639 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:23:18.639 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:23:18.639 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:23:18.639 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:18.639 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:23:18.639 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:18.639 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ up == up ]] 00:23:18.639 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:23:18.639 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:18.639 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:18.639 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:18.639 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:23:18.639 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:23:18.639 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:18.639 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:23:18.639 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:18.639 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ up == up ]] 00:23:18.639 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:23:18.639 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:18.639 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:18.639 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:18.639 
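With two usable ports found (cvl_0_0 under 0000:0a:00.0 and cvl_0_1 under 0000:0a:00.1), nvmf_tcp_init, whose steps follow in the log, turns them into a point-to-point test topology: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side at 10.0.0.2, cvl_0_1 stays in the default namespace as the initiator at 10.0.0.1, TCP port 4420 is opened with an iptables rule tagged SPDK_NVMF (so the teardown shown earlier can strip it again), and a ping in each direction confirms the link. Condensed from those steps:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                               # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                     # initiator side, default netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0       # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT            # NVMe/TCP listener port
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # both directions reachable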
14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:23:18.639 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:23:18.639 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # is_hw=yes 00:23:18.639 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:23:18.639 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:23:18.639 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:23:18.639 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:18.639 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:18.639 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:18.639 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:18.639 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:18.639 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:18.639 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:18.639 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:18.639 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:18.639 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:18.639 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:18.639 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:18.639 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:18.639 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:18.639 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:18.639 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:18.639 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:18.639 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:18.639 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:18.639 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:18.639 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:18.639 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@786 -- # iptables -I INPUT 1 
-i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:18.639 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:18.639 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:18.639 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms 00:23:18.639 00:23:18.639 --- 10.0.0.2 ping statistics --- 00:23:18.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:18.639 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:23:18.639 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:18.898 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:18.898 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:23:18.898 00:23:18.898 --- 10.0.0.1 ping statistics --- 00:23:18.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:18.898 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:23:18.898 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:18.898 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # return 0 00:23:18.898 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:23:18.898 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:18.898 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:23:18.898 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:23:18.898 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:18.898 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:23:18.898 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:23:18.898 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:23:18.898 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:18.898 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:18.898 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:18.898 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@505 -- # nvmfpid=1404313 00:23:18.898 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:23:18.898 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@506 -- # waitforlisten 1404313 00:23:18.898 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 1404313 ']' 00:23:18.898 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:18.898 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:18.898 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:18.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:18.898 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:18.898 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:18.898 [2024-11-02 14:39:10.776598] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:23:18.898 [2024-11-02 14:39:10.776711] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:23:18.898 [2024-11-02 14:39:10.848468] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:18.898 [2024-11-02 14:39:10.939572] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:18.898 [2024-11-02 14:39:10.939632] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:18.898 [2024-11-02 14:39:10.939648] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:18.898 [2024-11-02 14:39:10.939662] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:18.898 [2024-11-02 14:39:10.939679] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:18.898 [2024-11-02 14:39:10.939802] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:23:18.898 [2024-11-02 14:39:10.939867] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:23:18.898 [2024-11-02 14:39:10.939941] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:23:18.898 [2024-11-02 14:39:10.939943] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:23:19.157 14:39:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:19.157 14:39:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:23:19.157 14:39:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:19.157 14:39:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:19.157 14:39:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:19.157 14:39:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:19.157 14:39:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:19.157 14:39:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.157 14:39:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:19.157 [2024-11-02 14:39:11.088109] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:19.157 14:39:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.157 14:39:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # 
rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:19.157 14:39:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.157 14:39:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:19.157 Malloc0 00:23:19.157 14:39:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.157 14:39:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:19.157 14:39:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.157 14:39:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:19.157 14:39:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.157 14:39:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:19.157 14:39:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.157 14:39:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:19.157 14:39:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.157 14:39:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:19.157 14:39:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.157 14:39:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:19.157 [2024-11-02 14:39:11.126122] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:19.157 14:39:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.157 14:39:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:23:19.157 14:39:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:23:19.157 14:39:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # config=() 00:23:19.157 14:39:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # local subsystem config 00:23:19.157 14:39:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:23:19.157 14:39:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:23:19.157 { 00:23:19.157 "params": { 00:23:19.157 "name": "Nvme$subsystem", 00:23:19.157 "trtype": "$TEST_TRANSPORT", 00:23:19.157 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.157 "adrfam": "ipv4", 00:23:19.157 "trsvcid": "$NVMF_PORT", 00:23:19.157 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.157 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.157 "hdgst": ${hdgst:-false}, 00:23:19.157 "ddgst": ${ddgst:-false} 00:23:19.157 }, 00:23:19.157 "method": "bdev_nvme_attach_controller" 00:23:19.157 } 00:23:19.157 EOF 00:23:19.157 )") 00:23:19.157 14:39:11 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@578 -- # cat 00:23:19.157 14:39:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # jq . 00:23:19.157 14:39:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@581 -- # IFS=, 00:23:19.157 14:39:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:23:19.157 "params": { 00:23:19.157 "name": "Nvme1", 00:23:19.157 "trtype": "tcp", 00:23:19.157 "traddr": "10.0.0.2", 00:23:19.157 "adrfam": "ipv4", 00:23:19.157 "trsvcid": "4420", 00:23:19.157 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:19.157 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:19.157 "hdgst": false, 00:23:19.157 "ddgst": false 00:23:19.157 }, 00:23:19.157 "method": "bdev_nvme_attach_controller" 00:23:19.157 }' 00:23:19.157 [2024-11-02 14:39:11.172419] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:23:19.157 [2024-11-02 14:39:11.172499] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1404411 ] 00:23:19.417 [2024-11-02 14:39:11.233537] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:19.417 [2024-11-02 14:39:11.323655] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:19.417 [2024-11-02 14:39:11.323707] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:19.417 [2024-11-02 14:39:11.323711] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:19.677 I/O targets: 00:23:19.677 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:23:19.677 00:23:19.677 00:23:19.677 CUnit - A unit testing framework for C - Version 2.1-3 00:23:19.677 http://cunit.sourceforge.net/ 00:23:19.677 00:23:19.677 00:23:19.677 Suite: bdevio tests on: Nvme1n1 00:23:19.678 Test: blockdev write read block ...passed 00:23:19.678 Test: blockdev write zeroes read block ...passed 00:23:19.678 Test: blockdev write zeroes read no split ...passed 00:23:19.678 Test: blockdev write zeroes read split ...passed 00:23:19.936 Test: blockdev write zeroes read split partial ...passed 00:23:19.936 Test: blockdev reset ...[2024-11-02 14:39:11.760736] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:19.936 [2024-11-02 14:39:11.760855] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19f7700 (9): Bad file descriptor 00:23:19.936 [2024-11-02 14:39:11.820900] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
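At this point the target is up and the bdevio run has begun: nvmf_tgt was launched inside the namespace without hugepages (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78), the rpc_cmd calls above built a one-namespace TCP subsystem backed by a 64 MiB malloc bdev, and bdevio was handed the generated JSON config pointing at 10.0.0.2:4420, also with --no-huge -s 1024. rpc_cmd here is a thin wrapper around scripts/rpc.py talking to the default /var/tmp/spdk.sock, so the target construction is roughly equivalent to:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0                               # 64 MiB, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The "Bad file descriptor" notice in the reset test above appears to be the transient disconnect while bdevio resets the controller; the following "Resetting controller successful" shows the reconnect worked.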
00:23:19.936 passed 00:23:19.936 Test: blockdev write read 8 blocks ...passed 00:23:19.936 Test: blockdev write read size > 128k ...passed 00:23:19.936 Test: blockdev write read invalid size ...passed 00:23:19.936 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:19.936 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:19.936 Test: blockdev write read max offset ...passed 00:23:19.936 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:19.936 Test: blockdev writev readv 8 blocks ...passed 00:23:19.936 Test: blockdev writev readv 30 x 1block ...passed 00:23:20.195 Test: blockdev writev readv block ...passed 00:23:20.195 Test: blockdev writev readv size > 128k ...passed 00:23:20.195 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:20.195 Test: blockdev comparev and writev ...[2024-11-02 14:39:12.034678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:20.195 [2024-11-02 14:39:12.034716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:20.195 [2024-11-02 14:39:12.034750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:20.195 [2024-11-02 14:39:12.034769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:20.195 [2024-11-02 14:39:12.035167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:20.195 [2024-11-02 14:39:12.035193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:20.195 [2024-11-02 14:39:12.035216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:20.195 [2024-11-02 14:39:12.035232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:20.195 [2024-11-02 14:39:12.035623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:20.195 [2024-11-02 14:39:12.035647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:20.195 [2024-11-02 14:39:12.035670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:20.195 [2024-11-02 14:39:12.035686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:20.195 [2024-11-02 14:39:12.036076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:20.195 [2024-11-02 14:39:12.036101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:20.195 [2024-11-02 14:39:12.036122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:20.195 [2024-11-02 14:39:12.036137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:20.195 passed 00:23:20.195 Test: blockdev nvme passthru rw ...passed 00:23:20.195 Test: blockdev nvme passthru vendor specific ...[2024-11-02 14:39:12.118633] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:20.195 [2024-11-02 14:39:12.118662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:20.195 [2024-11-02 14:39:12.118861] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:20.195 [2024-11-02 14:39:12.118885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:20.195 [2024-11-02 14:39:12.119083] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:20.195 [2024-11-02 14:39:12.119107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:20.195 [2024-11-02 14:39:12.119321] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:20.195 [2024-11-02 14:39:12.119345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:20.195 passed 00:23:20.195 Test: blockdev nvme admin passthru ...passed 00:23:20.195 Test: blockdev copy ...passed 00:23:20.195 00:23:20.195 Run Summary: Type Total Ran Passed Failed Inactive 00:23:20.195 suites 1 1 n/a 0 0 00:23:20.195 tests 23 23 23 0 0 00:23:20.195 asserts 152 152 152 0 n/a 00:23:20.195 00:23:20.195 Elapsed time = 1.238 seconds 00:23:20.759 14:39:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:20.759 14:39:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.759 14:39:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:20.759 14:39:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.759 14:39:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:23:20.759 14:39:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:23:20.760 14:39:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # nvmfcleanup 00:23:20.760 14:39:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:23:20.760 14:39:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:20.760 14:39:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:23:20.760 14:39:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:20.760 14:39:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:20.760 rmmod nvme_tcp 00:23:20.760 rmmod nvme_fabrics 00:23:20.760 rmmod nvme_keyring 00:23:20.760 14:39:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:20.760 14:39:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:23:20.760 14:39:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:23:20.760 14:39:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@513 -- # '[' -n 1404313 ']' 00:23:20.760 14:39:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@514 -- # killprocess 1404313 00:23:20.760 14:39:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 1404313 ']' 00:23:20.760 14:39:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 1404313 00:23:20.760 14:39:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:23:20.760 14:39:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:20.760 14:39:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1404313 00:23:20.760 14:39:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:23:20.760 14:39:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:23:20.760 14:39:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1404313' 00:23:20.760 killing process with pid 1404313 00:23:20.760 14:39:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 1404313 00:23:20.760 14:39:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 1404313 00:23:21.019 14:39:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:23:21.019 14:39:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:23:21.019 14:39:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:23:21.019 14:39:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:23:21.019 14:39:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # iptables-save 00:23:21.019 14:39:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:23:21.019 14:39:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # iptables-restore 00:23:21.019 14:39:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:21.019 14:39:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:21.019 14:39:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:21.019 14:39:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:21.019 14:39:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:23.561 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:23.561 00:23:23.561 real 0m6.617s 00:23:23.561 user 0m10.896s 00:23:23.562 sys 0m2.631s 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:23:23.562 ************************************ 00:23:23.562 END TEST nvmf_bdevio_no_huge 00:23:23.562 ************************************ 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:23.562 ************************************ 00:23:23.562 START TEST nvmf_tls 00:23:23.562 ************************************ 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:23.562 * Looking for test storage... 00:23:23.562 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lcov --version 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:23.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.562 --rc genhtml_branch_coverage=1 00:23:23.562 --rc genhtml_function_coverage=1 00:23:23.562 --rc genhtml_legend=1 00:23:23.562 --rc geninfo_all_blocks=1 00:23:23.562 --rc geninfo_unexecuted_blocks=1 00:23:23.562 00:23:23.562 ' 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:23.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.562 --rc genhtml_branch_coverage=1 00:23:23.562 --rc genhtml_function_coverage=1 00:23:23.562 --rc genhtml_legend=1 00:23:23.562 --rc geninfo_all_blocks=1 00:23:23.562 --rc geninfo_unexecuted_blocks=1 00:23:23.562 00:23:23.562 ' 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:23.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.562 --rc genhtml_branch_coverage=1 00:23:23.562 --rc genhtml_function_coverage=1 00:23:23.562 --rc genhtml_legend=1 00:23:23.562 --rc geninfo_all_blocks=1 00:23:23.562 --rc geninfo_unexecuted_blocks=1 00:23:23.562 00:23:23.562 ' 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:23.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.562 --rc genhtml_branch_coverage=1 00:23:23.562 --rc genhtml_function_coverage=1 00:23:23.562 --rc genhtml_legend=1 00:23:23.562 --rc geninfo_all_blocks=1 00:23:23.562 --rc geninfo_unexecuted_blocks=1 00:23:23.562 00:23:23.562 ' 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
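nvmf_tls starts with the same prologue as the previous test: source test/nvmf/common.sh, probe the installed lcov to pick coverage options, and then go on to re-derive the host identity and PCI/NIC inventory. The version probe traced above is scripts/common.sh's cmp_versions: split both version strings on '.'/'-' and compare them component by component (here evaluating whether 1.15 is older than 2). A stand-alone sketch of that comparison, using a hypothetical ver_lt helper rather than the scripts/common.sh function itself:

    ver_lt() {                         # return 0 when version $1 sorts before version $2
        local IFS=".-" a b i
        read -ra a <<< "$1"; read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                       # equal versions are not "less than"
    }
    ver_lt 1.15 2 && echo "older than 2.x"                                  # mirrors the 'lt 1.15 2' probe above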
00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:23.562 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:23.563 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:23.563 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:23.563 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:23.563 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:23.563 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:23.563 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:23.563 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:23.563 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:23:23.563 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@465 -- # '[' -z tcp ']' 00:23:23.563 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:23.563 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@472 -- # prepare_net_devs 00:23:23.563 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@434 -- # local -g is_hw=no 00:23:23.563 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@436 -- # remove_spdk_ns 00:23:23.563 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:23.563 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:23.563 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:23.563 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:23:23.563 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:23:23.563 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:23:23.563 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:25.466 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:25.466 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:23:25.466 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:25.466 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:25.466 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:25.466 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:25.466 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:25.466 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:23:25.466 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:25.466 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:23:25.466 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:23:25.466 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:23:25.466 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:23:25.466 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:23:25.466 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:23:25.466 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:25.466 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:25.466 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:25.466 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:25.466 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:25.466 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
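[Editor's note] The gather_supported_nvmf_pci_devs trace above builds the e810/x722/mlx candidate lists by matching PCI vendor:device IDs out of a cached bus scan, then keeps only functions that expose a kernel net device. A rough standalone equivalent using sysfs directly is sketched below; pci_bus_cache and the exact helpers belong to nvmf/common.sh, so this loop is only illustrative of the matching logic, not the harness code itself.

    intel=0x8086
    declare -a e810 net_devs
    for dev in /sys/bus/pci/devices/*; do
        vendor=$(<"$dev/vendor")
        device=$(<"$dev/device")
        # 0x1592 / 0x159b are the E810 device IDs the trace matches against
        if [[ $vendor == "$intel" && $device =~ ^0x(1592|159b)$ ]]; then
            e810+=("${dev##*/}")
            # keep the net interface(s) bound to this PCI function, if any
            for net in "$dev"/net/*; do
                [[ -e $net ]] && net_devs+=("${net##*/}")
            done
        fi
    done
    echo "E810 PCI functions: ${e810[*]}"
    echo "Net devices: ${net_devs[*]}"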
00:23:25.466 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:25.466 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:25.466 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:25.466 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:25.466 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:25.466 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:23:25.466 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:23:25.466 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:23:25.466 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:23:25.466 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:23:25.466 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:23:25.466 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:23:25.466 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:25.466 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:25.466 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:23:25.466 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:23:25.466 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:25.466 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:25.466 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:25.467 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ up == up ]] 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:25.467 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ up == up ]] 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:25.467 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # is_hw=yes 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
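[Editor's note] What follows in the trace is nvmf_tcp_init wiring the two E810 ports back-to-back: the target-side port (cvl_0_0) is moved into a private network namespace and addressed as 10.0.0.2, the initiator-side port (cvl_0_1) stays in the default namespace as 10.0.0.1, port 4420 is allowed through iptables, and both directions are ping-checked. A condensed recap of those commands (taken from this run; the address flushes and the iptables comment added by the ipts wrapper are omitted):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator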
00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:25.467 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:25.467 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:23:25.467 00:23:25.467 --- 10.0.0.2 ping statistics --- 00:23:25.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.467 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:25.467 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:25.467 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.063 ms 00:23:25.467 00:23:25.467 --- 10.0.0.1 ping statistics --- 00:23:25.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.467 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # return 0 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=1406494 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 1406494 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1406494 ']' 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:25.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:25.467 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:25.725 [2024-11-02 14:39:17.568077] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:23:25.725 [2024-11-02 14:39:17.568161] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:25.725 [2024-11-02 14:39:17.644926] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:25.725 [2024-11-02 14:39:17.738782] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:25.725 [2024-11-02 14:39:17.738845] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:25.725 [2024-11-02 14:39:17.738861] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:25.725 [2024-11-02 14:39:17.738874] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:25.725 [2024-11-02 14:39:17.738886] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:25.725 [2024-11-02 14:39:17.738928] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:25.984 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:25.984 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:25.984 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:25.984 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:25.984 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:25.984 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:25.984 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:23:25.984 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:23:26.242 true 00:23:26.242 14:39:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:26.242 14:39:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:23:26.501 14:39:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:23:26.501 14:39:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:23:26.501 14:39:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:26.761 14:39:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:26.761 14:39:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:23:27.020 14:39:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:23:27.020 14:39:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:23:27.020 14:39:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:23:27.278 14:39:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:27.278 14:39:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:23:27.536 14:39:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:23:27.536 14:39:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:23:27.536 14:39:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:27.536 14:39:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:23:27.800 14:39:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:23:27.800 14:39:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:23:27.800 14:39:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:23:28.372 14:39:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:28.372 14:39:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:23:28.372 14:39:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:23:28.372 14:39:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:23:28.372 14:39:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:23:28.630 14:39:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:28.630 14:39:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:23:28.889 14:39:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:23:28.889 14:39:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:23:28.889 14:39:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:23:28.889 14:39:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:23:28.889 14:39:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:23:28.889 14:39:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:23:28.889 14:39:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:23:28.889 14:39:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=1 00:23:28.889 14:39:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:23:29.147 14:39:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:29.147 14:39:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:23:29.147 14:39:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:23:29.147 14:39:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@726 -- # local prefix key digest 00:23:29.147 14:39:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:23:29.147 14:39:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # key=ffeeddccbbaa99887766554433221100 00:23:29.147 14:39:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=1 00:23:29.147 14:39:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:23:29.147 14:39:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:29.147 14:39:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:23:29.147 14:39:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.Qtb9B8GO03 00:23:29.147 14:39:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:23:29.147 14:39:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.qJAxBsBD4K 00:23:29.147 14:39:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:29.147 14:39:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:29.147 14:39:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.Qtb9B8GO03 00:23:29.147 14:39:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.qJAxBsBD4K 00:23:29.147 14:39:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:29.405 14:39:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:23:29.663 14:39:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.Qtb9B8GO03 00:23:29.663 14:39:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Qtb9B8GO03 00:23:29.663 14:39:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:29.923 [2024-11-02 14:39:21.901290] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:29.923 14:39:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:30.182 14:39:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:30.442 [2024-11-02 14:39:22.430717] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:30.442 [2024-11-02 14:39:22.430978] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:30.442 14:39:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:30.702 malloc0 00:23:30.702 14:39:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:30.960 14:39:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Qtb9B8GO03 00:23:31.526 14:39:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:31.785 14:39:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.Qtb9B8GO03 00:23:41.769 Initializing NVMe Controllers 00:23:41.769 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:41.769 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:41.769 Initialization complete. Launching workers. 00:23:41.769 ======================================================== 00:23:41.769 Latency(us) 00:23:41.769 Device Information : IOPS MiB/s Average min max 00:23:41.769 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7512.18 29.34 8522.01 1248.69 9331.71 00:23:41.769 ======================================================== 00:23:41.769 Total : 7512.18 29.34 8522.01 1248.69 9331.71 00:23:41.769 00:23:41.769 14:39:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Qtb9B8GO03 00:23:41.769 14:39:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:41.769 14:39:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:41.769 14:39:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:41.769 14:39:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Qtb9B8GO03 00:23:41.769 14:39:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:41.769 14:39:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1408463 00:23:41.769 14:39:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:41.769 14:39:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1408463 /var/tmp/bdevperf.sock 00:23:41.769 14:39:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:41.769 14:39:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1408463 ']' 00:23:41.769 14:39:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:41.769 14:39:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:41.769 14:39:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:23:41.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:41.769 14:39:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:41.769 14:39:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:41.769 [2024-11-02 14:39:33.779095] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:23:41.769 [2024-11-02 14:39:33.779174] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1408463 ] 00:23:42.028 [2024-11-02 14:39:33.839945] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:42.028 [2024-11-02 14:39:33.926233] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:42.028 14:39:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:42.028 14:39:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:42.028 14:39:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Qtb9B8GO03 00:23:42.312 14:39:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:42.595 [2024-11-02 14:39:34.550630] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:42.595 TLSTESTn1 00:23:42.595 14:39:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:42.853 Running I/O for 10 seconds... 
00:23:44.726 2343.00 IOPS, 9.15 MiB/s [2024-11-02T13:39:38.162Z] 2420.50 IOPS, 9.46 MiB/s [2024-11-02T13:39:39.099Z] 2444.00 IOPS, 9.55 MiB/s [2024-11-02T13:39:40.035Z] 2458.50 IOPS, 9.60 MiB/s [2024-11-02T13:39:40.970Z] 2448.00 IOPS, 9.56 MiB/s [2024-11-02T13:39:41.907Z] 2438.83 IOPS, 9.53 MiB/s [2024-11-02T13:39:42.863Z] 2436.00 IOPS, 9.52 MiB/s [2024-11-02T13:39:43.799Z] 2442.00 IOPS, 9.54 MiB/s [2024-11-02T13:39:45.175Z] 2443.00 IOPS, 9.54 MiB/s [2024-11-02T13:39:45.175Z] 2442.80 IOPS, 9.54 MiB/s 00:23:53.120 Latency(us) 00:23:53.120 [2024-11-02T13:39:45.175Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:53.120 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:53.120 Verification LBA range: start 0x0 length 0x2000 00:23:53.120 TLSTESTn1 : 10.06 2441.42 9.54 0.00 0.00 52272.83 8107.05 83109.36 00:23:53.120 [2024-11-02T13:39:45.175Z] =================================================================================================================== 00:23:53.120 [2024-11-02T13:39:45.175Z] Total : 2441.42 9.54 0.00 0.00 52272.83 8107.05 83109.36 00:23:53.120 { 00:23:53.120 "results": [ 00:23:53.120 { 00:23:53.120 "job": "TLSTESTn1", 00:23:53.120 "core_mask": "0x4", 00:23:53.120 "workload": "verify", 00:23:53.120 "status": "finished", 00:23:53.120 "verify_range": { 00:23:53.120 "start": 0, 00:23:53.120 "length": 8192 00:23:53.120 }, 00:23:53.120 "queue_depth": 128, 00:23:53.120 "io_size": 4096, 00:23:53.120 "runtime": 10.057682, 00:23:53.120 "iops": 2441.417416060679, 00:23:53.120 "mibps": 9.536786781487027, 00:23:53.120 "io_failed": 0, 00:23:53.120 "io_timeout": 0, 00:23:53.120 "avg_latency_us": 52272.83265304645, 00:23:53.120 "min_latency_us": 8107.045925925926, 00:23:53.120 "max_latency_us": 83109.35703703704 00:23:53.120 } 00:23:53.120 ], 00:23:53.120 "core_count": 1 00:23:53.120 } 00:23:53.120 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:53.120 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1408463 00:23:53.120 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1408463 ']' 00:23:53.120 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1408463 00:23:53.120 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:53.120 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:53.120 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1408463 00:23:53.120 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:53.120 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:53.120 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1408463' 00:23:53.120 killing process with pid 1408463 00:23:53.120 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1408463 00:23:53.120 Received shutdown signal, test time was about 10.000000 seconds 00:23:53.120 00:23:53.120 Latency(us) 00:23:53.120 [2024-11-02T13:39:45.175Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:53.120 [2024-11-02T13:39:45.175Z] 
=================================================================================================================== 00:23:53.120 [2024-11-02T13:39:45.175Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:53.120 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1408463 00:23:53.120 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.qJAxBsBD4K 00:23:53.120 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:53.120 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.qJAxBsBD4K 00:23:53.120 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:53.120 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:53.120 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:53.120 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:53.120 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.qJAxBsBD4K 00:23:53.120 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:53.120 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:53.120 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:53.120 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.qJAxBsBD4K 00:23:53.120 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:53.120 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1409726 00:23:53.120 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:53.120 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:53.120 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1409726 /var/tmp/bdevperf.sock 00:23:53.120 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1409726 ']' 00:23:53.120 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:53.120 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:53.120 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:53.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:53.120 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:53.120 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:53.378 [2024-11-02 14:39:45.191110] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:23:53.379 [2024-11-02 14:39:45.191190] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1409726 ] 00:23:53.379 [2024-11-02 14:39:45.252429] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:53.379 [2024-11-02 14:39:45.336019] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:53.638 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:53.638 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:53.638 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.qJAxBsBD4K 00:23:53.898 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:53.898 [2024-11-02 14:39:45.943589] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:53.898 [2024-11-02 14:39:45.949402] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:53.898 [2024-11-02 14:39:45.949894] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11fae60 (107): Transport endpoint is not connected 00:23:53.898 [2024-11-02 14:39:45.950883] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11fae60 (9): Bad file descriptor 00:23:53.898 [2024-11-02 14:39:45.951882] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:53.898 [2024-11-02 14:39:45.951903] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:53.898 [2024-11-02 14:39:45.951931] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:53.898 [2024-11-02 14:39:45.951950] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:54.158 request: 00:23:54.158 { 00:23:54.158 "name": "TLSTEST", 00:23:54.158 "trtype": "tcp", 00:23:54.158 "traddr": "10.0.0.2", 00:23:54.158 "adrfam": "ipv4", 00:23:54.158 "trsvcid": "4420", 00:23:54.158 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:54.158 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:54.158 "prchk_reftag": false, 00:23:54.158 "prchk_guard": false, 00:23:54.158 "hdgst": false, 00:23:54.158 "ddgst": false, 00:23:54.158 "psk": "key0", 00:23:54.158 "allow_unrecognized_csi": false, 00:23:54.158 "method": "bdev_nvme_attach_controller", 00:23:54.158 "req_id": 1 00:23:54.158 } 00:23:54.158 Got JSON-RPC error response 00:23:54.158 response: 00:23:54.158 { 00:23:54.158 "code": -5, 00:23:54.158 "message": "Input/output error" 00:23:54.158 } 00:23:54.158 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1409726 00:23:54.158 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1409726 ']' 00:23:54.158 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1409726 00:23:54.158 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:54.158 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:54.158 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1409726 00:23:54.158 14:39:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:54.158 14:39:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:54.158 14:39:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1409726' 00:23:54.158 killing process with pid 1409726 00:23:54.158 14:39:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1409726 00:23:54.158 Received shutdown signal, test time was about 10.000000 seconds 00:23:54.158 00:23:54.158 Latency(us) 00:23:54.158 [2024-11-02T13:39:46.213Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:54.158 [2024-11-02T13:39:46.213Z] =================================================================================================================== 00:23:54.158 [2024-11-02T13:39:46.213Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:54.158 14:39:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1409726 00:23:54.417 14:39:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:54.417 14:39:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:54.417 14:39:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:54.417 14:39:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:54.417 14:39:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:54.417 14:39:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Qtb9B8GO03 00:23:54.417 14:39:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:54.417 14:39:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.Qtb9B8GO03 00:23:54.417 14:39:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:54.417 14:39:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:54.417 14:39:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:54.417 14:39:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:54.417 14:39:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Qtb9B8GO03 00:23:54.417 14:39:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:54.417 14:39:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:54.417 14:39:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:23:54.417 14:39:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Qtb9B8GO03 00:23:54.417 14:39:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:54.417 14:39:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1409855 00:23:54.417 14:39:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:54.417 14:39:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:54.417 14:39:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1409855 /var/tmp/bdevperf.sock 00:23:54.417 14:39:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1409855 ']' 00:23:54.417 14:39:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:54.417 14:39:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:54.417 14:39:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:54.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:54.417 14:39:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:54.417 14:39:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:54.417 [2024-11-02 14:39:46.291166] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:23:54.417 [2024-11-02 14:39:46.291284] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1409855 ] 00:23:54.417 [2024-11-02 14:39:46.357042] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:54.417 [2024-11-02 14:39:46.444428] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:54.675 14:39:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:54.675 14:39:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:54.675 14:39:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Qtb9B8GO03 00:23:54.932 14:39:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:23:55.192 [2024-11-02 14:39:47.063577] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:55.192 [2024-11-02 14:39:47.070468] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:55.192 [2024-11-02 14:39:47.070498] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:55.192 [2024-11-02 14:39:47.070551] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:55.192 [2024-11-02 14:39:47.070703] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12b4e60 (107): Transport endpoint is not connected 00:23:55.192 [2024-11-02 14:39:47.071691] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12b4e60 (9): Bad file descriptor 00:23:55.192 [2024-11-02 14:39:47.072690] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:55.192 [2024-11-02 14:39:47.072709] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:55.192 [2024-11-02 14:39:47.072738] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:55.192 [2024-11-02 14:39:47.072756] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:55.192 request: 00:23:55.192 { 00:23:55.192 "name": "TLSTEST", 00:23:55.192 "trtype": "tcp", 00:23:55.192 "traddr": "10.0.0.2", 00:23:55.192 "adrfam": "ipv4", 00:23:55.192 "trsvcid": "4420", 00:23:55.192 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:55.192 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:55.192 "prchk_reftag": false, 00:23:55.192 "prchk_guard": false, 00:23:55.192 "hdgst": false, 00:23:55.192 "ddgst": false, 00:23:55.192 "psk": "key0", 00:23:55.192 "allow_unrecognized_csi": false, 00:23:55.192 "method": "bdev_nvme_attach_controller", 00:23:55.192 "req_id": 1 00:23:55.192 } 00:23:55.192 Got JSON-RPC error response 00:23:55.192 response: 00:23:55.192 { 00:23:55.192 "code": -5, 00:23:55.192 "message": "Input/output error" 00:23:55.192 } 00:23:55.192 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1409855 00:23:55.192 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1409855 ']' 00:23:55.192 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1409855 00:23:55.192 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:55.192 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:55.192 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1409855 00:23:55.192 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:55.192 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:55.192 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1409855' 00:23:55.192 killing process with pid 1409855 00:23:55.192 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1409855 00:23:55.192 Received shutdown signal, test time was about 10.000000 seconds 00:23:55.192 00:23:55.192 Latency(us) 00:23:55.192 [2024-11-02T13:39:47.247Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:55.192 [2024-11-02T13:39:47.247Z] =================================================================================================================== 00:23:55.192 [2024-11-02T13:39:47.247Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:55.192 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1409855 00:23:55.451 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:55.451 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:55.451 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:55.451 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:55.451 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:55.451 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Qtb9B8GO03 00:23:55.451 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:55.451 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.Qtb9B8GO03 00:23:55.451 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:55.451 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:55.451 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:55.451 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:55.451 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Qtb9B8GO03 00:23:55.451 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:55.451 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:55.451 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:55.451 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Qtb9B8GO03 00:23:55.451 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:55.451 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1409994 00:23:55.451 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:55.451 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:55.451 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1409994 /var/tmp/bdevperf.sock 00:23:55.451 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1409994 ']' 00:23:55.451 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:55.451 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:55.451 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:55.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:55.451 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:55.451 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:55.451 [2024-11-02 14:39:47.408172] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:23:55.451 [2024-11-02 14:39:47.408284] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1409994 ] 00:23:55.451 [2024-11-02 14:39:47.468034] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:55.709 [2024-11-02 14:39:47.554233] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:55.709 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:55.709 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:55.709 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Qtb9B8GO03 00:23:55.967 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:56.225 [2024-11-02 14:39:48.181371] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:56.225 [2024-11-02 14:39:48.193346] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:56.225 [2024-11-02 14:39:48.193375] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:56.225 [2024-11-02 14:39:48.193426] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:56.225 [2024-11-02 14:39:48.194493] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb8e60 (107): Transport endpoint is not connected 00:23:56.225 [2024-11-02 14:39:48.195486] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb8e60 (9): Bad file descriptor 00:23:56.225 [2024-11-02 14:39:48.196484] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:56.225 [2024-11-02 14:39:48.196506] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:56.225 [2024-11-02 14:39:48.196520] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:23:56.225 [2024-11-02 14:39:48.196539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
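This second negative case (host1 against cnode2) fails the same way, and its request/response dump follows. The error bodies in these dumps appear to carry negative errno values alongside JSON-RPC reserved codes; that reading is inferred from the strings in this log rather than a documented contract. A small Python sketch that decodes them on Linux:

import errno
import os

def describe_rpc_error(code: int) -> str:
    # -32768..-32000 is the reserved range in the JSON-RPC 2.0 spec (e.g. -32603 "Internal error").
    if -32768 <= code <= -32000:
        return "JSON-RPC reserved error"
    name = errno.errorcode.get(-code, "unknown errno")
    return f"{name}: {os.strerror(-code)}"

# Codes seen in this log: -5 (Input/output error), -126 (Required key not
# available), -1 (Operation not permitted), -32603 (Internal error).
for code in (-5, -126, -1, -32603):
    print(code, "->", describe_rpc_error(code))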
00:23:56.225 request: 00:23:56.225 { 00:23:56.225 "name": "TLSTEST", 00:23:56.225 "trtype": "tcp", 00:23:56.225 "traddr": "10.0.0.2", 00:23:56.225 "adrfam": "ipv4", 00:23:56.225 "trsvcid": "4420", 00:23:56.225 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:56.225 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:56.225 "prchk_reftag": false, 00:23:56.225 "prchk_guard": false, 00:23:56.225 "hdgst": false, 00:23:56.225 "ddgst": false, 00:23:56.225 "psk": "key0", 00:23:56.225 "allow_unrecognized_csi": false, 00:23:56.225 "method": "bdev_nvme_attach_controller", 00:23:56.225 "req_id": 1 00:23:56.225 } 00:23:56.225 Got JSON-RPC error response 00:23:56.225 response: 00:23:56.225 { 00:23:56.225 "code": -5, 00:23:56.225 "message": "Input/output error" 00:23:56.225 } 00:23:56.225 14:39:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1409994 00:23:56.225 14:39:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1409994 ']' 00:23:56.225 14:39:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1409994 00:23:56.225 14:39:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:56.225 14:39:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:56.225 14:39:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1409994 00:23:56.225 14:39:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:56.225 14:39:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:56.225 14:39:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1409994' 00:23:56.225 killing process with pid 1409994 00:23:56.225 14:39:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1409994 00:23:56.225 Received shutdown signal, test time was about 10.000000 seconds 00:23:56.225 00:23:56.225 Latency(us) 00:23:56.225 [2024-11-02T13:39:48.280Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:56.225 [2024-11-02T13:39:48.280Z] =================================================================================================================== 00:23:56.225 [2024-11-02T13:39:48.280Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:56.225 14:39:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1409994 00:23:56.483 14:39:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:56.483 14:39:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:56.483 14:39:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:56.483 14:39:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:56.483 14:39:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:56.483 14:39:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:56.483 14:39:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:56.483 14:39:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:56.483 
14:39:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:56.483 14:39:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:56.483 14:39:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:56.483 14:39:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:56.483 14:39:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:56.483 14:39:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:56.483 14:39:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:56.483 14:39:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:56.483 14:39:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:56.483 14:39:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:56.483 14:39:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1410135 00:23:56.483 14:39:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:56.484 14:39:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:56.484 14:39:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1410135 /var/tmp/bdevperf.sock 00:23:56.484 14:39:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1410135 ']' 00:23:56.484 14:39:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:56.484 14:39:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:56.484 14:39:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:56.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:56.484 14:39:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:56.484 14:39:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:56.484 [2024-11-02 14:39:48.485091] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:23:56.484 [2024-11-02 14:39:48.485166] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1410135 ] 00:23:56.741 [2024-11-02 14:39:48.547041] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:56.741 [2024-11-02 14:39:48.635826] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:56.741 14:39:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:56.741 14:39:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:56.741 14:39:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:23:56.999 [2024-11-02 14:39:49.002110] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:23:56.999 [2024-11-02 14:39:49.002160] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:56.999 request: 00:23:56.999 { 00:23:56.999 "name": "key0", 00:23:56.999 "path": "", 00:23:56.999 "method": "keyring_file_add_key", 00:23:56.999 "req_id": 1 00:23:56.999 } 00:23:56.999 Got JSON-RPC error response 00:23:56.999 response: 00:23:56.999 { 00:23:56.999 "code": -1, 00:23:56.999 "message": "Operation not permitted" 00:23:56.999 } 00:23:56.999 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:57.257 [2024-11-02 14:39:49.266923] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:57.257 [2024-11-02 14:39:49.266975] bdev_nvme.c:6410:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:57.257 request: 00:23:57.257 { 00:23:57.257 "name": "TLSTEST", 00:23:57.257 "trtype": "tcp", 00:23:57.257 "traddr": "10.0.0.2", 00:23:57.257 "adrfam": "ipv4", 00:23:57.257 "trsvcid": "4420", 00:23:57.257 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:57.257 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:57.257 "prchk_reftag": false, 00:23:57.257 "prchk_guard": false, 00:23:57.257 "hdgst": false, 00:23:57.257 "ddgst": false, 00:23:57.257 "psk": "key0", 00:23:57.257 "allow_unrecognized_csi": false, 00:23:57.257 "method": "bdev_nvme_attach_controller", 00:23:57.257 "req_id": 1 00:23:57.257 } 00:23:57.257 Got JSON-RPC error response 00:23:57.257 response: 00:23:57.257 { 00:23:57.257 "code": -126, 00:23:57.257 "message": "Required key not available" 00:23:57.257 } 00:23:57.257 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1410135 00:23:57.257 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1410135 ']' 00:23:57.257 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1410135 00:23:57.257 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:57.257 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:57.257 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 
1410135 00:23:57.515 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:57.515 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:57.515 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1410135' 00:23:57.515 killing process with pid 1410135 00:23:57.515 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1410135 00:23:57.515 Received shutdown signal, test time was about 10.000000 seconds 00:23:57.515 00:23:57.515 Latency(us) 00:23:57.515 [2024-11-02T13:39:49.570Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:57.515 [2024-11-02T13:39:49.570Z] =================================================================================================================== 00:23:57.515 [2024-11-02T13:39:49.570Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:57.515 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1410135 00:23:57.515 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:57.515 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:57.515 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:57.515 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:57.515 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:57.515 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 1406494 00:23:57.515 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1406494 ']' 00:23:57.515 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1406494 00:23:57.515 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:57.515 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:57.515 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1406494 00:23:57.772 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:57.772 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:57.772 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1406494' 00:23:57.772 killing process with pid 1406494 00:23:57.772 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1406494 00:23:57.772 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1406494 00:23:57.772 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:57.772 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:57.772 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:23:57.772 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:23:57.772 14:39:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:57.772 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=2 00:23:57.772 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:23:58.030 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:58.030 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:23:58.030 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.wYcaPlBIdj 00:23:58.030 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:58.030 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.wYcaPlBIdj 00:23:58.030 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:23:58.030 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:58.030 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:58.030 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:58.030 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=1410403 00:23:58.030 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:58.030 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 1410403 00:23:58.030 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1410403 ']' 00:23:58.030 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:58.030 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:58.030 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:58.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:58.030 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:58.030 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:58.030 [2024-11-02 14:39:49.923735] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:23:58.030 [2024-11-02 14:39:49.923819] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:58.030 [2024-11-02 14:39:49.989413] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:58.030 [2024-11-02 14:39:50.079759] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:58.030 [2024-11-02 14:39:50.079821] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
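The key_long value produced above is the configured PSK wrapped in the NVMe TLS interchange format: a fixed prefix, a two-digit HMAC identifier (2 selects the SHA-384 variant here), and a base64 payload with a trailing colon. The heredoc fed to `python -` is not shown in the log, so the sketch below reconstructs it under the assumption that the payload is the key bytes followed by their little-endian CRC-32; the hex string is used verbatim as key material, which matches what the base64 in key_long decodes back to.

import base64
import struct
import zlib

def format_interchange_psk(key_material: str, hmac_id: int) -> str:
    # Assumed layout: NVMeTLSkey-1:<hmac>:<base64(key bytes + little-endian CRC-32)>:
    raw = key_material.encode()
    crc = struct.pack("<I", zlib.crc32(raw))
    return "NVMeTLSkey-1:{:02x}:{}:".format(hmac_id, base64.b64encode(raw + crc).decode())

print(format_interchange_psk("00112233445566778899aabbccddeeff0011223344556677", 2))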
00:23:58.030 [2024-11-02 14:39:50.079851] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:58.030 [2024-11-02 14:39:50.079862] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:58.030 [2024-11-02 14:39:50.079871] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:58.030 [2024-11-02 14:39:50.079913] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:58.288 14:39:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:58.288 14:39:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:58.288 14:39:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:58.288 14:39:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:58.288 14:39:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:58.288 14:39:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:58.288 14:39:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.wYcaPlBIdj 00:23:58.288 14:39:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.wYcaPlBIdj 00:23:58.288 14:39:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:58.545 [2024-11-02 14:39:50.477394] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:58.545 14:39:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:58.803 14:39:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:59.061 [2024-11-02 14:39:51.062970] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:59.061 [2024-11-02 14:39:51.063253] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:59.061 14:39:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:59.628 malloc0 00:23:59.628 14:39:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:59.887 14:39:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.wYcaPlBIdj 00:24:00.145 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:00.403 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.wYcaPlBIdj 00:24:00.403 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:24:00.403 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:00.403 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:00.403 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.wYcaPlBIdj 00:24:00.403 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:00.403 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1410698 00:24:00.403 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:00.403 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:00.403 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1410698 /var/tmp/bdevperf.sock 00:24:00.403 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1410698 ']' 00:24:00.403 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:00.403 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:00.403 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:00.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:00.403 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:00.403 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:00.403 [2024-11-02 14:39:52.386976] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:24:00.403 [2024-11-02 14:39:52.387053] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1410698 ] 00:24:00.403 [2024-11-02 14:39:52.444101] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:00.661 [2024-11-02 14:39:52.527539] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:00.661 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:00.661 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:00.661 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.wYcaPlBIdj 00:24:00.918 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:01.176 [2024-11-02 14:39:53.207652] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:01.434 TLSTESTn1 00:24:01.434 14:39:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:01.434 Running I/O for 10 seconds... 00:24:03.747 3054.00 IOPS, 11.93 MiB/s [2024-11-02T13:39:56.741Z] 3108.00 IOPS, 12.14 MiB/s [2024-11-02T13:39:57.678Z] 3164.67 IOPS, 12.36 MiB/s [2024-11-02T13:39:58.616Z] 3210.25 IOPS, 12.54 MiB/s [2024-11-02T13:39:59.551Z] 3232.20 IOPS, 12.63 MiB/s [2024-11-02T13:40:00.492Z] 3229.00 IOPS, 12.61 MiB/s [2024-11-02T13:40:01.504Z] 3233.43 IOPS, 12.63 MiB/s [2024-11-02T13:40:02.443Z] 3240.25 IOPS, 12.66 MiB/s [2024-11-02T13:40:03.823Z] 3240.33 IOPS, 12.66 MiB/s [2024-11-02T13:40:03.823Z] 3254.60 IOPS, 12.71 MiB/s 00:24:11.768 Latency(us) 00:24:11.768 [2024-11-02T13:40:03.823Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:11.768 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:11.768 Verification LBA range: start 0x0 length 0x2000 00:24:11.768 TLSTESTn1 : 10.04 3254.65 12.71 0.00 0.00 39233.60 9369.22 55535.69 00:24:11.768 [2024-11-02T13:40:03.823Z] =================================================================================================================== 00:24:11.768 [2024-11-02T13:40:03.823Z] Total : 3254.65 12.71 0.00 0.00 39233.60 9369.22 55535.69 00:24:11.768 { 00:24:11.768 "results": [ 00:24:11.768 { 00:24:11.768 "job": "TLSTESTn1", 00:24:11.768 "core_mask": "0x4", 00:24:11.768 "workload": "verify", 00:24:11.768 "status": "finished", 00:24:11.768 "verify_range": { 00:24:11.768 "start": 0, 00:24:11.768 "length": 8192 00:24:11.768 }, 00:24:11.768 "queue_depth": 128, 00:24:11.768 "io_size": 4096, 00:24:11.768 "runtime": 10.038878, 00:24:11.768 "iops": 3254.6465850068107, 00:24:11.768 "mibps": 12.713463222682854, 00:24:11.768 "io_failed": 0, 00:24:11.768 "io_timeout": 0, 00:24:11.768 "avg_latency_us": 39233.60488721574, 00:24:11.768 "min_latency_us": 9369.22074074074, 00:24:11.768 "max_latency_us": 55535.69185185185 00:24:11.768 } 00:24:11.768 ], 00:24:11.768 
"core_count": 1 00:24:11.768 } 00:24:11.768 14:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:11.768 14:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1410698 00:24:11.768 14:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1410698 ']' 00:24:11.768 14:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1410698 00:24:11.768 14:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:11.768 14:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:11.768 14:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1410698 00:24:11.768 14:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:11.768 14:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:11.768 14:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1410698' 00:24:11.768 killing process with pid 1410698 00:24:11.768 14:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1410698 00:24:11.768 Received shutdown signal, test time was about 10.000000 seconds 00:24:11.768 00:24:11.768 Latency(us) 00:24:11.768 [2024-11-02T13:40:03.823Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:11.768 [2024-11-02T13:40:03.823Z] =================================================================================================================== 00:24:11.768 [2024-11-02T13:40:03.823Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:11.768 14:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1410698 00:24:11.768 14:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.wYcaPlBIdj 00:24:11.768 14:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.wYcaPlBIdj 00:24:11.768 14:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:24:11.768 14:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.wYcaPlBIdj 00:24:11.768 14:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:24:11.768 14:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:11.768 14:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:24:11.768 14:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:11.768 14:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.wYcaPlBIdj 00:24:11.768 14:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:11.768 14:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:11.768 14:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 
00:24:11.768 14:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.wYcaPlBIdj 00:24:11.768 14:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:11.768 14:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1412013 00:24:11.768 14:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:11.768 14:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:11.769 14:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1412013 /var/tmp/bdevperf.sock 00:24:11.769 14:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1412013 ']' 00:24:11.769 14:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:11.769 14:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:11.769 14:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:11.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:11.769 14:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:11.769 14:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:11.769 [2024-11-02 14:40:03.802400] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:24:11.769 [2024-11-02 14:40:03.802484] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1412013 ] 00:24:12.027 [2024-11-02 14:40:03.863079] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.027 [2024-11-02 14:40:03.947780] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:12.027 14:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:12.027 14:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:12.027 14:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.wYcaPlBIdj 00:24:12.285 [2024-11-02 14:40:04.303839] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.wYcaPlBIdj': 0100666 00:24:12.285 [2024-11-02 14:40:04.303884] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:12.285 request: 00:24:12.285 { 00:24:12.285 "name": "key0", 00:24:12.285 "path": "/tmp/tmp.wYcaPlBIdj", 00:24:12.285 "method": "keyring_file_add_key", 00:24:12.285 "req_id": 1 00:24:12.285 } 00:24:12.285 Got JSON-RPC error response 00:24:12.285 response: 00:24:12.285 { 00:24:12.285 "code": -1, 00:24:12.285 "message": "Operation not permitted" 00:24:12.285 } 00:24:12.285 14:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:12.543 [2024-11-02 14:40:04.572694] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:12.543 [2024-11-02 14:40:04.572746] bdev_nvme.c:6410:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:24:12.543 request: 00:24:12.543 { 00:24:12.543 "name": "TLSTEST", 00:24:12.543 "trtype": "tcp", 00:24:12.543 "traddr": "10.0.0.2", 00:24:12.543 "adrfam": "ipv4", 00:24:12.543 "trsvcid": "4420", 00:24:12.543 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:12.543 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:12.543 "prchk_reftag": false, 00:24:12.543 "prchk_guard": false, 00:24:12.543 "hdgst": false, 00:24:12.543 "ddgst": false, 00:24:12.543 "psk": "key0", 00:24:12.543 "allow_unrecognized_csi": false, 00:24:12.543 "method": "bdev_nvme_attach_controller", 00:24:12.543 "req_id": 1 00:24:12.543 } 00:24:12.543 Got JSON-RPC error response 00:24:12.543 response: 00:24:12.543 { 00:24:12.543 "code": -126, 00:24:12.543 "message": "Required key not available" 00:24:12.543 } 00:24:12.543 14:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1412013 00:24:12.543 14:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1412013 ']' 00:24:12.543 14:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1412013 00:24:12.543 14:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:12.543 14:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:12.543 14:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1412013 00:24:12.801 14:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:12.801 14:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:12.801 14:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1412013' 00:24:12.801 killing process with pid 1412013 00:24:12.801 14:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1412013 00:24:12.801 Received shutdown signal, test time was about 10.000000 seconds 00:24:12.801 00:24:12.801 Latency(us) 00:24:12.801 [2024-11-02T13:40:04.856Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:12.801 [2024-11-02T13:40:04.856Z] =================================================================================================================== 00:24:12.801 [2024-11-02T13:40:04.856Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:12.801 14:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1412013 00:24:12.801 14:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:12.801 14:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:24:12.801 14:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:12.801 14:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:12.801 14:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:12.801 14:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 1410403 00:24:12.801 14:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1410403 ']' 00:24:12.801 14:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1410403 00:24:12.801 14:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:12.801 14:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:12.801 14:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1410403 00:24:13.060 14:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:13.060 14:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:13.060 14:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1410403' 00:24:13.060 killing process with pid 1410403 00:24:13.060 14:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1410403 00:24:13.060 14:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1410403 00:24:13.060 14:40:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:24:13.060 14:40:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:13.060 14:40:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:13.060 14:40:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:13.319 14:40:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # 
nvmfpid=1412167 00:24:13.319 14:40:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:13.319 14:40:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 1412167 00:24:13.319 14:40:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1412167 ']' 00:24:13.319 14:40:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:13.319 14:40:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:13.319 14:40:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:13.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:13.319 14:40:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:13.319 14:40:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:13.319 [2024-11-02 14:40:05.170724] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:24:13.319 [2024-11-02 14:40:05.170816] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:13.319 [2024-11-02 14:40:05.239767] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:13.319 [2024-11-02 14:40:05.326879] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:13.319 [2024-11-02 14:40:05.326946] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:13.319 [2024-11-02 14:40:05.326962] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:13.319 [2024-11-02 14:40:05.326976] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:13.319 [2024-11-02 14:40:05.326988] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
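Before this nvmf_tgt instance comes up, the keyring has already refused the key file above because of its 0666 mode ("Invalid permissions for key file ... 0100666"), and it refuses it again below until the test restores 0600. A small Python sketch of the pre-flight check that behaviour implies; the exact mask SPDK enforces is an assumption, since the log only shows 0666 being rejected and 0600 being accepted.

import os
import stat
import sys

def check_psk_file(path: str) -> None:
    mode = stat.S_IMODE(os.stat(path).st_mode)
    # Treat any group/other bits as too permissive (0666 is refused, 0600 works).
    if mode & (stat.S_IRWXG | stat.S_IRWXO):
        sys.exit(f"{path}: mode {oct(mode)} is too permissive, expected 0600")
    print(f"{path}: mode {oct(mode)} is acceptable")

check_psk_file("/tmp/tmp.wYcaPlBIdj")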
00:24:13.319 [2024-11-02 14:40:05.327020] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:13.577 14:40:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:13.577 14:40:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:13.577 14:40:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:13.577 14:40:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:13.577 14:40:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:13.577 14:40:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:13.577 14:40:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.wYcaPlBIdj 00:24:13.577 14:40:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:24:13.577 14:40:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.wYcaPlBIdj 00:24:13.577 14:40:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:24:13.577 14:40:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:13.577 14:40:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:24:13.578 14:40:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:13.578 14:40:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.wYcaPlBIdj 00:24:13.578 14:40:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.wYcaPlBIdj 00:24:13.578 14:40:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:13.836 [2024-11-02 14:40:05.720415] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:13.836 14:40:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:14.093 14:40:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:14.351 [2024-11-02 14:40:06.257860] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:14.351 [2024-11-02 14:40:06.258122] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:14.351 14:40:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:14.609 malloc0 00:24:14.609 14:40:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:14.866 14:40:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.wYcaPlBIdj 00:24:15.124 [2024-11-02 
14:40:07.080413] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.wYcaPlBIdj': 0100666 00:24:15.124 [2024-11-02 14:40:07.080464] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:15.124 request: 00:24:15.124 { 00:24:15.124 "name": "key0", 00:24:15.124 "path": "/tmp/tmp.wYcaPlBIdj", 00:24:15.124 "method": "keyring_file_add_key", 00:24:15.124 "req_id": 1 00:24:15.124 } 00:24:15.124 Got JSON-RPC error response 00:24:15.124 response: 00:24:15.124 { 00:24:15.124 "code": -1, 00:24:15.124 "message": "Operation not permitted" 00:24:15.124 } 00:24:15.124 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:15.383 [2024-11-02 14:40:07.349172] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:24:15.383 [2024-11-02 14:40:07.349227] subsystem.c:1055:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:24:15.383 request: 00:24:15.383 { 00:24:15.383 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:15.383 "host": "nqn.2016-06.io.spdk:host1", 00:24:15.383 "psk": "key0", 00:24:15.383 "method": "nvmf_subsystem_add_host", 00:24:15.383 "req_id": 1 00:24:15.383 } 00:24:15.383 Got JSON-RPC error response 00:24:15.383 response: 00:24:15.383 { 00:24:15.383 "code": -32603, 00:24:15.383 "message": "Internal error" 00:24:15.383 } 00:24:15.383 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:24:15.383 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:15.383 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:15.383 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:15.383 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 1412167 00:24:15.383 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1412167 ']' 00:24:15.383 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1412167 00:24:15.383 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:15.383 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:15.383 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1412167 00:24:15.383 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:15.383 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:15.384 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1412167' 00:24:15.384 killing process with pid 1412167 00:24:15.384 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1412167 00:24:15.384 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1412167 00:24:15.642 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.wYcaPlBIdj 00:24:15.642 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:24:15.642 14:40:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:15.642 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:15.642 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:15.642 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=1412466 00:24:15.642 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:15.642 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 1412466 00:24:15.642 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1412466 ']' 00:24:15.642 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:15.642 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:15.642 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:15.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:15.642 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:15.642 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:15.900 [2024-11-02 14:40:07.729839] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:24:15.900 [2024-11-02 14:40:07.729915] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:15.900 [2024-11-02 14:40:07.794603] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:15.900 [2024-11-02 14:40:07.878361] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:15.900 [2024-11-02 14:40:07.878418] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:15.900 [2024-11-02 14:40:07.878446] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:15.900 [2024-11-02 14:40:07.878458] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:15.900 [2024-11-02 14:40:07.878468] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
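Just below, setup_nvmf_tgt rebuilds the target side for the next positive run: TCP transport, subsystem cnode1, a TLS-enabled listener, a malloc namespace, the key file, and host1 as the only host allowed to use key0. Collected into one runnable Python sketch for reference; the rpc.py path and every argument are taken from the log, and the subprocess wrapper is illustrative only.

import subprocess

RPC = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"

def rpc(*args: str) -> None:
    # Run one rpc.py command against the default target socket and fail loudly on error.
    subprocess.run([RPC, *args], check=True)

# Same order as setup_nvmf_tgt in the log: transport, subsystem, TLS listener (-k),
# malloc bdev, namespace, key file, and the host that is allowed to use the PSK.
rpc("nvmf_create_transport", "-t", "tcp", "-o")
rpc("nvmf_create_subsystem", "nqn.2016-06.io.spdk:cnode1", "-s", "SPDK00000000000001", "-m", "10")
rpc("nvmf_subsystem_add_listener", "nqn.2016-06.io.spdk:cnode1", "-t", "tcp", "-a", "10.0.0.2", "-s", "4420", "-k")
rpc("bdev_malloc_create", "32", "4096", "-b", "malloc0")
rpc("nvmf_subsystem_add_ns", "nqn.2016-06.io.spdk:cnode1", "malloc0", "-n", "1")
rpc("keyring_file_add_key", "key0", "/tmp/tmp.wYcaPlBIdj")
rpc("nvmf_subsystem_add_host", "nqn.2016-06.io.spdk:cnode1", "nqn.2016-06.io.spdk:host1", "--psk", "key0")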
00:24:15.900 [2024-11-02 14:40:07.878503] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:16.158 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:16.158 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:16.158 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:16.158 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:16.158 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:16.158 14:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:16.158 14:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.wYcaPlBIdj 00:24:16.158 14:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.wYcaPlBIdj 00:24:16.158 14:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:16.415 [2024-11-02 14:40:08.259898] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:16.415 14:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:16.676 14:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:16.935 [2024-11-02 14:40:08.805433] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:16.935 [2024-11-02 14:40:08.805732] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:16.935 14:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:17.194 malloc0 00:24:17.194 14:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:17.452 14:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.wYcaPlBIdj 00:24:17.712 14:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:17.970 14:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=1412755 00:24:17.970 14:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:17.970 14:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 1412755 /var/tmp/bdevperf.sock 00:24:17.970 14:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:17.970 14:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@831 -- # '[' -z 1412755 ']' 00:24:17.970 14:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:17.970 14:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:17.970 14:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:17.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:17.970 14:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:17.970 14:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:18.228 [2024-11-02 14:40:10.036820] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:24:18.228 [2024-11-02 14:40:10.036940] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1412755 ] 00:24:18.228 [2024-11-02 14:40:10.098606] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:18.228 [2024-11-02 14:40:10.186188] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:18.485 14:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:18.486 14:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:18.486 14:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.wYcaPlBIdj 00:24:18.743 14:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:19.001 [2024-11-02 14:40:10.842225] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:19.001 TLSTESTn1 00:24:19.001 14:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:24:19.566 14:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:24:19.566 "subsystems": [ 00:24:19.566 { 00:24:19.566 "subsystem": "keyring", 00:24:19.566 "config": [ 00:24:19.566 { 00:24:19.567 "method": "keyring_file_add_key", 00:24:19.567 "params": { 00:24:19.567 "name": "key0", 00:24:19.567 "path": "/tmp/tmp.wYcaPlBIdj" 00:24:19.567 } 00:24:19.567 } 00:24:19.567 ] 00:24:19.567 }, 00:24:19.567 { 00:24:19.567 "subsystem": "iobuf", 00:24:19.567 "config": [ 00:24:19.567 { 00:24:19.567 "method": "iobuf_set_options", 00:24:19.567 "params": { 00:24:19.567 "small_pool_count": 8192, 00:24:19.567 "large_pool_count": 1024, 00:24:19.567 "small_bufsize": 8192, 00:24:19.567 "large_bufsize": 135168 00:24:19.567 } 00:24:19.567 } 00:24:19.567 ] 00:24:19.567 }, 00:24:19.567 { 00:24:19.567 "subsystem": "sock", 00:24:19.567 "config": [ 00:24:19.567 { 00:24:19.567 "method": "sock_set_default_impl", 00:24:19.567 "params": { 00:24:19.567 "impl_name": "posix" 00:24:19.567 } 00:24:19.567 }, 
00:24:19.567 { 00:24:19.567 "method": "sock_impl_set_options", 00:24:19.567 "params": { 00:24:19.567 "impl_name": "ssl", 00:24:19.567 "recv_buf_size": 4096, 00:24:19.567 "send_buf_size": 4096, 00:24:19.567 "enable_recv_pipe": true, 00:24:19.567 "enable_quickack": false, 00:24:19.567 "enable_placement_id": 0, 00:24:19.567 "enable_zerocopy_send_server": true, 00:24:19.567 "enable_zerocopy_send_client": false, 00:24:19.567 "zerocopy_threshold": 0, 00:24:19.567 "tls_version": 0, 00:24:19.567 "enable_ktls": false 00:24:19.567 } 00:24:19.567 }, 00:24:19.567 { 00:24:19.567 "method": "sock_impl_set_options", 00:24:19.567 "params": { 00:24:19.567 "impl_name": "posix", 00:24:19.567 "recv_buf_size": 2097152, 00:24:19.567 "send_buf_size": 2097152, 00:24:19.567 "enable_recv_pipe": true, 00:24:19.567 "enable_quickack": false, 00:24:19.567 "enable_placement_id": 0, 00:24:19.567 "enable_zerocopy_send_server": true, 00:24:19.567 "enable_zerocopy_send_client": false, 00:24:19.567 "zerocopy_threshold": 0, 00:24:19.567 "tls_version": 0, 00:24:19.567 "enable_ktls": false 00:24:19.567 } 00:24:19.567 } 00:24:19.567 ] 00:24:19.567 }, 00:24:19.567 { 00:24:19.567 "subsystem": "vmd", 00:24:19.567 "config": [] 00:24:19.567 }, 00:24:19.567 { 00:24:19.567 "subsystem": "accel", 00:24:19.567 "config": [ 00:24:19.567 { 00:24:19.567 "method": "accel_set_options", 00:24:19.567 "params": { 00:24:19.567 "small_cache_size": 128, 00:24:19.567 "large_cache_size": 16, 00:24:19.567 "task_count": 2048, 00:24:19.567 "sequence_count": 2048, 00:24:19.567 "buf_count": 2048 00:24:19.567 } 00:24:19.567 } 00:24:19.567 ] 00:24:19.567 }, 00:24:19.567 { 00:24:19.567 "subsystem": "bdev", 00:24:19.567 "config": [ 00:24:19.567 { 00:24:19.567 "method": "bdev_set_options", 00:24:19.567 "params": { 00:24:19.567 "bdev_io_pool_size": 65535, 00:24:19.567 "bdev_io_cache_size": 256, 00:24:19.567 "bdev_auto_examine": true, 00:24:19.567 "iobuf_small_cache_size": 128, 00:24:19.567 "iobuf_large_cache_size": 16 00:24:19.567 } 00:24:19.567 }, 00:24:19.567 { 00:24:19.567 "method": "bdev_raid_set_options", 00:24:19.567 "params": { 00:24:19.567 "process_window_size_kb": 1024, 00:24:19.567 "process_max_bandwidth_mb_sec": 0 00:24:19.567 } 00:24:19.567 }, 00:24:19.567 { 00:24:19.567 "method": "bdev_iscsi_set_options", 00:24:19.567 "params": { 00:24:19.567 "timeout_sec": 30 00:24:19.567 } 00:24:19.567 }, 00:24:19.567 { 00:24:19.567 "method": "bdev_nvme_set_options", 00:24:19.567 "params": { 00:24:19.567 "action_on_timeout": "none", 00:24:19.567 "timeout_us": 0, 00:24:19.567 "timeout_admin_us": 0, 00:24:19.567 "keep_alive_timeout_ms": 10000, 00:24:19.567 "arbitration_burst": 0, 00:24:19.567 "low_priority_weight": 0, 00:24:19.567 "medium_priority_weight": 0, 00:24:19.567 "high_priority_weight": 0, 00:24:19.567 "nvme_adminq_poll_period_us": 10000, 00:24:19.567 "nvme_ioq_poll_period_us": 0, 00:24:19.567 "io_queue_requests": 0, 00:24:19.567 "delay_cmd_submit": true, 00:24:19.567 "transport_retry_count": 4, 00:24:19.567 "bdev_retry_count": 3, 00:24:19.567 "transport_ack_timeout": 0, 00:24:19.567 "ctrlr_loss_timeout_sec": 0, 00:24:19.567 "reconnect_delay_sec": 0, 00:24:19.567 "fast_io_fail_timeout_sec": 0, 00:24:19.567 "disable_auto_failback": false, 00:24:19.567 "generate_uuids": false, 00:24:19.567 "transport_tos": 0, 00:24:19.567 "nvme_error_stat": false, 00:24:19.567 "rdma_srq_size": 0, 00:24:19.567 "io_path_stat": false, 00:24:19.567 "allow_accel_sequence": false, 00:24:19.567 "rdma_max_cq_size": 0, 00:24:19.567 "rdma_cm_event_timeout_ms": 0, 00:24:19.567 
"dhchap_digests": [ 00:24:19.567 "sha256", 00:24:19.567 "sha384", 00:24:19.567 "sha512" 00:24:19.567 ], 00:24:19.567 "dhchap_dhgroups": [ 00:24:19.567 "null", 00:24:19.567 "ffdhe2048", 00:24:19.567 "ffdhe3072", 00:24:19.567 "ffdhe4096", 00:24:19.567 "ffdhe6144", 00:24:19.567 "ffdhe8192" 00:24:19.567 ] 00:24:19.567 } 00:24:19.567 }, 00:24:19.567 { 00:24:19.567 "method": "bdev_nvme_set_hotplug", 00:24:19.567 "params": { 00:24:19.567 "period_us": 100000, 00:24:19.567 "enable": false 00:24:19.567 } 00:24:19.567 }, 00:24:19.567 { 00:24:19.567 "method": "bdev_malloc_create", 00:24:19.567 "params": { 00:24:19.567 "name": "malloc0", 00:24:19.567 "num_blocks": 8192, 00:24:19.567 "block_size": 4096, 00:24:19.567 "physical_block_size": 4096, 00:24:19.567 "uuid": "3b3c0cdb-38d9-442d-a534-cc94b86ea0d7", 00:24:19.567 "optimal_io_boundary": 0, 00:24:19.567 "md_size": 0, 00:24:19.567 "dif_type": 0, 00:24:19.567 "dif_is_head_of_md": false, 00:24:19.567 "dif_pi_format": 0 00:24:19.567 } 00:24:19.567 }, 00:24:19.567 { 00:24:19.567 "method": "bdev_wait_for_examine" 00:24:19.567 } 00:24:19.567 ] 00:24:19.567 }, 00:24:19.567 { 00:24:19.567 "subsystem": "nbd", 00:24:19.567 "config": [] 00:24:19.567 }, 00:24:19.567 { 00:24:19.567 "subsystem": "scheduler", 00:24:19.567 "config": [ 00:24:19.567 { 00:24:19.567 "method": "framework_set_scheduler", 00:24:19.567 "params": { 00:24:19.567 "name": "static" 00:24:19.567 } 00:24:19.567 } 00:24:19.567 ] 00:24:19.567 }, 00:24:19.567 { 00:24:19.567 "subsystem": "nvmf", 00:24:19.567 "config": [ 00:24:19.567 { 00:24:19.567 "method": "nvmf_set_config", 00:24:19.567 "params": { 00:24:19.567 "discovery_filter": "match_any", 00:24:19.567 "admin_cmd_passthru": { 00:24:19.567 "identify_ctrlr": false 00:24:19.567 }, 00:24:19.567 "dhchap_digests": [ 00:24:19.567 "sha256", 00:24:19.567 "sha384", 00:24:19.567 "sha512" 00:24:19.567 ], 00:24:19.567 "dhchap_dhgroups": [ 00:24:19.567 "null", 00:24:19.567 "ffdhe2048", 00:24:19.567 "ffdhe3072", 00:24:19.567 "ffdhe4096", 00:24:19.567 "ffdhe6144", 00:24:19.568 "ffdhe8192" 00:24:19.568 ] 00:24:19.568 } 00:24:19.568 }, 00:24:19.568 { 00:24:19.568 "method": "nvmf_set_max_subsystems", 00:24:19.568 "params": { 00:24:19.568 "max_subsystems": 1024 00:24:19.568 } 00:24:19.568 }, 00:24:19.568 { 00:24:19.568 "method": "nvmf_set_crdt", 00:24:19.568 "params": { 00:24:19.568 "crdt1": 0, 00:24:19.568 "crdt2": 0, 00:24:19.568 "crdt3": 0 00:24:19.568 } 00:24:19.568 }, 00:24:19.568 { 00:24:19.568 "method": "nvmf_create_transport", 00:24:19.568 "params": { 00:24:19.568 "trtype": "TCP", 00:24:19.568 "max_queue_depth": 128, 00:24:19.568 "max_io_qpairs_per_ctrlr": 127, 00:24:19.568 "in_capsule_data_size": 4096, 00:24:19.568 "max_io_size": 131072, 00:24:19.568 "io_unit_size": 131072, 00:24:19.568 "max_aq_depth": 128, 00:24:19.568 "num_shared_buffers": 511, 00:24:19.568 "buf_cache_size": 4294967295, 00:24:19.568 "dif_insert_or_strip": false, 00:24:19.568 "zcopy": false, 00:24:19.568 "c2h_success": false, 00:24:19.568 "sock_priority": 0, 00:24:19.568 "abort_timeout_sec": 1, 00:24:19.568 "ack_timeout": 0, 00:24:19.568 "data_wr_pool_size": 0 00:24:19.568 } 00:24:19.568 }, 00:24:19.568 { 00:24:19.568 "method": "nvmf_create_subsystem", 00:24:19.568 "params": { 00:24:19.568 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:19.568 "allow_any_host": false, 00:24:19.568 "serial_number": "SPDK00000000000001", 00:24:19.568 "model_number": "SPDK bdev Controller", 00:24:19.568 "max_namespaces": 10, 00:24:19.568 "min_cntlid": 1, 00:24:19.568 "max_cntlid": 65519, 00:24:19.568 
"ana_reporting": false 00:24:19.568 } 00:24:19.568 }, 00:24:19.568 { 00:24:19.568 "method": "nvmf_subsystem_add_host", 00:24:19.568 "params": { 00:24:19.568 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:19.568 "host": "nqn.2016-06.io.spdk:host1", 00:24:19.568 "psk": "key0" 00:24:19.568 } 00:24:19.568 }, 00:24:19.568 { 00:24:19.568 "method": "nvmf_subsystem_add_ns", 00:24:19.568 "params": { 00:24:19.568 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:19.568 "namespace": { 00:24:19.568 "nsid": 1, 00:24:19.568 "bdev_name": "malloc0", 00:24:19.568 "nguid": "3B3C0CDB38D9442DA534CC94B86EA0D7", 00:24:19.568 "uuid": "3b3c0cdb-38d9-442d-a534-cc94b86ea0d7", 00:24:19.568 "no_auto_visible": false 00:24:19.568 } 00:24:19.568 } 00:24:19.568 }, 00:24:19.568 { 00:24:19.568 "method": "nvmf_subsystem_add_listener", 00:24:19.568 "params": { 00:24:19.568 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:19.568 "listen_address": { 00:24:19.568 "trtype": "TCP", 00:24:19.568 "adrfam": "IPv4", 00:24:19.568 "traddr": "10.0.0.2", 00:24:19.568 "trsvcid": "4420" 00:24:19.568 }, 00:24:19.568 "secure_channel": true 00:24:19.568 } 00:24:19.568 } 00:24:19.568 ] 00:24:19.568 } 00:24:19.568 ] 00:24:19.568 }' 00:24:19.568 14:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:19.826 14:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:24:19.826 "subsystems": [ 00:24:19.826 { 00:24:19.826 "subsystem": "keyring", 00:24:19.826 "config": [ 00:24:19.826 { 00:24:19.826 "method": "keyring_file_add_key", 00:24:19.826 "params": { 00:24:19.826 "name": "key0", 00:24:19.826 "path": "/tmp/tmp.wYcaPlBIdj" 00:24:19.826 } 00:24:19.826 } 00:24:19.826 ] 00:24:19.826 }, 00:24:19.826 { 00:24:19.826 "subsystem": "iobuf", 00:24:19.826 "config": [ 00:24:19.826 { 00:24:19.826 "method": "iobuf_set_options", 00:24:19.826 "params": { 00:24:19.826 "small_pool_count": 8192, 00:24:19.826 "large_pool_count": 1024, 00:24:19.826 "small_bufsize": 8192, 00:24:19.826 "large_bufsize": 135168 00:24:19.826 } 00:24:19.826 } 00:24:19.826 ] 00:24:19.826 }, 00:24:19.826 { 00:24:19.826 "subsystem": "sock", 00:24:19.826 "config": [ 00:24:19.826 { 00:24:19.826 "method": "sock_set_default_impl", 00:24:19.826 "params": { 00:24:19.826 "impl_name": "posix" 00:24:19.826 } 00:24:19.826 }, 00:24:19.826 { 00:24:19.826 "method": "sock_impl_set_options", 00:24:19.827 "params": { 00:24:19.827 "impl_name": "ssl", 00:24:19.827 "recv_buf_size": 4096, 00:24:19.827 "send_buf_size": 4096, 00:24:19.827 "enable_recv_pipe": true, 00:24:19.827 "enable_quickack": false, 00:24:19.827 "enable_placement_id": 0, 00:24:19.827 "enable_zerocopy_send_server": true, 00:24:19.827 "enable_zerocopy_send_client": false, 00:24:19.827 "zerocopy_threshold": 0, 00:24:19.827 "tls_version": 0, 00:24:19.827 "enable_ktls": false 00:24:19.827 } 00:24:19.827 }, 00:24:19.827 { 00:24:19.827 "method": "sock_impl_set_options", 00:24:19.827 "params": { 00:24:19.827 "impl_name": "posix", 00:24:19.827 "recv_buf_size": 2097152, 00:24:19.827 "send_buf_size": 2097152, 00:24:19.827 "enable_recv_pipe": true, 00:24:19.827 "enable_quickack": false, 00:24:19.827 "enable_placement_id": 0, 00:24:19.827 "enable_zerocopy_send_server": true, 00:24:19.827 "enable_zerocopy_send_client": false, 00:24:19.827 "zerocopy_threshold": 0, 00:24:19.827 "tls_version": 0, 00:24:19.827 "enable_ktls": false 00:24:19.827 } 00:24:19.827 } 00:24:19.827 ] 00:24:19.827 }, 00:24:19.827 { 00:24:19.827 
"subsystem": "vmd", 00:24:19.827 "config": [] 00:24:19.827 }, 00:24:19.827 { 00:24:19.827 "subsystem": "accel", 00:24:19.827 "config": [ 00:24:19.827 { 00:24:19.827 "method": "accel_set_options", 00:24:19.827 "params": { 00:24:19.827 "small_cache_size": 128, 00:24:19.827 "large_cache_size": 16, 00:24:19.827 "task_count": 2048, 00:24:19.827 "sequence_count": 2048, 00:24:19.827 "buf_count": 2048 00:24:19.827 } 00:24:19.827 } 00:24:19.827 ] 00:24:19.827 }, 00:24:19.827 { 00:24:19.827 "subsystem": "bdev", 00:24:19.827 "config": [ 00:24:19.827 { 00:24:19.827 "method": "bdev_set_options", 00:24:19.827 "params": { 00:24:19.827 "bdev_io_pool_size": 65535, 00:24:19.827 "bdev_io_cache_size": 256, 00:24:19.827 "bdev_auto_examine": true, 00:24:19.827 "iobuf_small_cache_size": 128, 00:24:19.827 "iobuf_large_cache_size": 16 00:24:19.827 } 00:24:19.827 }, 00:24:19.827 { 00:24:19.827 "method": "bdev_raid_set_options", 00:24:19.827 "params": { 00:24:19.827 "process_window_size_kb": 1024, 00:24:19.827 "process_max_bandwidth_mb_sec": 0 00:24:19.827 } 00:24:19.827 }, 00:24:19.827 { 00:24:19.827 "method": "bdev_iscsi_set_options", 00:24:19.827 "params": { 00:24:19.827 "timeout_sec": 30 00:24:19.827 } 00:24:19.827 }, 00:24:19.827 { 00:24:19.827 "method": "bdev_nvme_set_options", 00:24:19.827 "params": { 00:24:19.827 "action_on_timeout": "none", 00:24:19.827 "timeout_us": 0, 00:24:19.827 "timeout_admin_us": 0, 00:24:19.827 "keep_alive_timeout_ms": 10000, 00:24:19.827 "arbitration_burst": 0, 00:24:19.827 "low_priority_weight": 0, 00:24:19.827 "medium_priority_weight": 0, 00:24:19.827 "high_priority_weight": 0, 00:24:19.827 "nvme_adminq_poll_period_us": 10000, 00:24:19.827 "nvme_ioq_poll_period_us": 0, 00:24:19.827 "io_queue_requests": 512, 00:24:19.827 "delay_cmd_submit": true, 00:24:19.827 "transport_retry_count": 4, 00:24:19.827 "bdev_retry_count": 3, 00:24:19.827 "transport_ack_timeout": 0, 00:24:19.827 "ctrlr_loss_timeout_sec": 0, 00:24:19.827 "reconnect_delay_sec": 0, 00:24:19.827 "fast_io_fail_timeout_sec": 0, 00:24:19.827 "disable_auto_failback": false, 00:24:19.827 "generate_uuids": false, 00:24:19.827 "transport_tos": 0, 00:24:19.827 "nvme_error_stat": false, 00:24:19.827 "rdma_srq_size": 0, 00:24:19.827 "io_path_stat": false, 00:24:19.827 "allow_accel_sequence": false, 00:24:19.827 "rdma_max_cq_size": 0, 00:24:19.827 "rdma_cm_event_timeout_ms": 0, 00:24:19.827 "dhchap_digests": [ 00:24:19.827 "sha256", 00:24:19.827 "sha384", 00:24:19.827 "sha512" 00:24:19.827 ], 00:24:19.827 "dhchap_dhgroups": [ 00:24:19.827 "null", 00:24:19.827 "ffdhe2048", 00:24:19.827 "ffdhe3072", 00:24:19.827 "ffdhe4096", 00:24:19.827 "ffdhe6144", 00:24:19.827 "ffdhe8192" 00:24:19.827 ] 00:24:19.827 } 00:24:19.827 }, 00:24:19.827 { 00:24:19.827 "method": "bdev_nvme_attach_controller", 00:24:19.827 "params": { 00:24:19.827 "name": "TLSTEST", 00:24:19.827 "trtype": "TCP", 00:24:19.827 "adrfam": "IPv4", 00:24:19.827 "traddr": "10.0.0.2", 00:24:19.827 "trsvcid": "4420", 00:24:19.827 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:19.827 "prchk_reftag": false, 00:24:19.827 "prchk_guard": false, 00:24:19.827 "ctrlr_loss_timeout_sec": 0, 00:24:19.827 "reconnect_delay_sec": 0, 00:24:19.827 "fast_io_fail_timeout_sec": 0, 00:24:19.827 "psk": "key0", 00:24:19.827 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:19.827 "hdgst": false, 00:24:19.827 "ddgst": false 00:24:19.827 } 00:24:19.827 }, 00:24:19.827 { 00:24:19.827 "method": "bdev_nvme_set_hotplug", 00:24:19.827 "params": { 00:24:19.827 "period_us": 100000, 00:24:19.827 "enable": false 
00:24:19.827 } 00:24:19.827 }, 00:24:19.827 { 00:24:19.827 "method": "bdev_wait_for_examine" 00:24:19.827 } 00:24:19.827 ] 00:24:19.827 }, 00:24:19.827 { 00:24:19.827 "subsystem": "nbd", 00:24:19.827 "config": [] 00:24:19.827 } 00:24:19.827 ] 00:24:19.827 }' 00:24:19.827 14:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 1412755 00:24:19.827 14:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1412755 ']' 00:24:19.827 14:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1412755 00:24:19.827 14:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:19.827 14:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:19.827 14:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1412755 00:24:19.827 14:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:19.827 14:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:19.827 14:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1412755' 00:24:19.827 killing process with pid 1412755 00:24:19.827 14:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1412755 00:24:19.827 Received shutdown signal, test time was about 10.000000 seconds 00:24:19.827 00:24:19.827 Latency(us) 00:24:19.827 [2024-11-02T13:40:11.882Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:19.827 [2024-11-02T13:40:11.882Z] =================================================================================================================== 00:24:19.827 [2024-11-02T13:40:11.882Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:19.827 14:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1412755 00:24:20.088 14:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 1412466 00:24:20.088 14:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1412466 ']' 00:24:20.088 14:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1412466 00:24:20.088 14:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:20.088 14:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:20.088 14:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1412466 00:24:20.088 14:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:20.088 14:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:20.088 14:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1412466' 00:24:20.088 killing process with pid 1412466 00:24:20.088 14:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1412466 00:24:20.088 14:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1412466 00:24:20.347 14:40:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:24:20.347 14:40:12 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:20.347 14:40:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:20.347 14:40:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:24:20.347 "subsystems": [ 00:24:20.347 { 00:24:20.347 "subsystem": "keyring", 00:24:20.347 "config": [ 00:24:20.347 { 00:24:20.347 "method": "keyring_file_add_key", 00:24:20.347 "params": { 00:24:20.347 "name": "key0", 00:24:20.347 "path": "/tmp/tmp.wYcaPlBIdj" 00:24:20.347 } 00:24:20.347 } 00:24:20.347 ] 00:24:20.347 }, 00:24:20.347 { 00:24:20.347 "subsystem": "iobuf", 00:24:20.347 "config": [ 00:24:20.347 { 00:24:20.347 "method": "iobuf_set_options", 00:24:20.347 "params": { 00:24:20.347 "small_pool_count": 8192, 00:24:20.347 "large_pool_count": 1024, 00:24:20.347 "small_bufsize": 8192, 00:24:20.347 "large_bufsize": 135168 00:24:20.347 } 00:24:20.347 } 00:24:20.347 ] 00:24:20.347 }, 00:24:20.347 { 00:24:20.347 "subsystem": "sock", 00:24:20.347 "config": [ 00:24:20.347 { 00:24:20.347 "method": "sock_set_default_impl", 00:24:20.347 "params": { 00:24:20.347 "impl_name": "posix" 00:24:20.347 } 00:24:20.347 }, 00:24:20.347 { 00:24:20.347 "method": "sock_impl_set_options", 00:24:20.347 "params": { 00:24:20.347 "impl_name": "ssl", 00:24:20.347 "recv_buf_size": 4096, 00:24:20.347 "send_buf_size": 4096, 00:24:20.347 "enable_recv_pipe": true, 00:24:20.347 "enable_quickack": false, 00:24:20.347 "enable_placement_id": 0, 00:24:20.347 "enable_zerocopy_send_server": true, 00:24:20.347 "enable_zerocopy_send_client": false, 00:24:20.347 "zerocopy_threshold": 0, 00:24:20.347 "tls_version": 0, 00:24:20.347 "enable_ktls": false 00:24:20.347 } 00:24:20.347 }, 00:24:20.347 { 00:24:20.347 "method": "sock_impl_set_options", 00:24:20.347 "params": { 00:24:20.347 "impl_name": "posix", 00:24:20.347 "recv_buf_size": 2097152, 00:24:20.347 "send_buf_size": 2097152, 00:24:20.347 "enable_recv_pipe": true, 00:24:20.347 "enable_quickack": false, 00:24:20.347 "enable_placement_id": 0, 00:24:20.347 "enable_zerocopy_send_server": true, 00:24:20.347 "enable_zerocopy_send_client": false, 00:24:20.347 "zerocopy_threshold": 0, 00:24:20.347 "tls_version": 0, 00:24:20.347 "enable_ktls": false 00:24:20.347 } 00:24:20.347 } 00:24:20.347 ] 00:24:20.347 }, 00:24:20.347 { 00:24:20.347 "subsystem": "vmd", 00:24:20.347 "config": [] 00:24:20.347 }, 00:24:20.347 { 00:24:20.347 "subsystem": "accel", 00:24:20.347 "config": [ 00:24:20.347 { 00:24:20.347 "method": "accel_set_options", 00:24:20.347 "params": { 00:24:20.347 "small_cache_size": 128, 00:24:20.347 "large_cache_size": 16, 00:24:20.347 "task_count": 2048, 00:24:20.347 "sequence_count": 2048, 00:24:20.347 "buf_count": 2048 00:24:20.347 } 00:24:20.347 } 00:24:20.347 ] 00:24:20.347 }, 00:24:20.347 { 00:24:20.347 "subsystem": "bdev", 00:24:20.347 "config": [ 00:24:20.347 { 00:24:20.347 "method": "bdev_set_options", 00:24:20.347 "params": { 00:24:20.347 "bdev_io_pool_size": 65535, 00:24:20.347 "bdev_io_cache_size": 256, 00:24:20.347 "bdev_auto_examine": true, 00:24:20.347 "iobuf_small_cache_size": 128, 00:24:20.347 "iobuf_large_cache_size": 16 00:24:20.347 } 00:24:20.347 }, 00:24:20.347 { 00:24:20.347 "method": "bdev_raid_set_options", 00:24:20.347 "params": { 00:24:20.347 "process_window_size_kb": 1024, 00:24:20.347 "process_max_bandwidth_mb_sec": 0 00:24:20.347 } 00:24:20.347 }, 00:24:20.347 { 00:24:20.347 "method": "bdev_iscsi_set_options", 00:24:20.347 "params": { 00:24:20.347 "timeout_sec": 
30 00:24:20.347 } 00:24:20.347 }, 00:24:20.347 { 00:24:20.347 "method": "bdev_nvme_set_options", 00:24:20.347 "params": { 00:24:20.347 "action_on_timeout": "none", 00:24:20.347 "timeout_us": 0, 00:24:20.347 "timeout_admin_us": 0, 00:24:20.347 "keep_alive_timeout_ms": 10000, 00:24:20.347 "arbitration_burst": 0, 00:24:20.347 "low_priority_weight": 0, 00:24:20.347 "medium_priority_weight": 0, 00:24:20.347 "high_priority_weight": 0, 00:24:20.347 "nvme_adminq_poll_period_us": 10000, 00:24:20.347 "nvme_ioq_poll_period_us": 0, 00:24:20.347 "io_queue_requests": 0, 00:24:20.347 "delay_cmd_submit": true, 00:24:20.347 "transport_retry_count": 4, 00:24:20.347 "bdev_retry_count": 3, 00:24:20.347 "transport_ack_timeout": 0, 00:24:20.347 "ctrlr_loss_timeout_sec": 0, 00:24:20.347 "reconnect_delay_sec": 0, 00:24:20.347 "fast_io_fail_timeout_sec": 0, 00:24:20.348 "disable_auto_failback": false, 00:24:20.348 "generate_uuids": false, 00:24:20.348 "transport_tos": 0, 00:24:20.348 "nvme_error_stat": false, 00:24:20.348 "rdma_srq_size": 0, 00:24:20.348 "io_path_stat": false, 00:24:20.348 "allow_accel_sequence": false, 00:24:20.348 "rdma_max_cq_size": 0, 00:24:20.348 "rdma_cm_event_timeout_ms": 0, 00:24:20.348 "dhchap_digests": [ 00:24:20.348 "sha256", 00:24:20.348 "sha384", 00:24:20.348 "sha512" 00:24:20.348 ], 00:24:20.348 "dhchap_dhgroups": [ 00:24:20.348 "null", 00:24:20.348 "ffdhe2048", 00:24:20.348 "ffdhe3072", 00:24:20.348 "ffdhe4096", 00:24:20.348 "ffdhe6144", 00:24:20.348 "ffdhe8192" 00:24:20.348 ] 00:24:20.348 } 00:24:20.348 }, 00:24:20.348 { 00:24:20.348 "method": "bdev_nvme_set_hotplug", 00:24:20.348 "params": { 00:24:20.348 "period_us": 100000, 00:24:20.348 "enable": false 00:24:20.348 } 00:24:20.348 }, 00:24:20.348 { 00:24:20.348 "method": "bdev_malloc_create", 00:24:20.348 "params": { 00:24:20.348 "name": "malloc0", 00:24:20.348 "num_blocks": 8192, 00:24:20.348 "block_size": 4096, 00:24:20.348 "physical_block_size": 4096, 00:24:20.348 "uuid": "3b3c0cdb-38d9-442d-a534-cc94b86ea0d7", 00:24:20.348 "optimal_io_boundary": 0, 00:24:20.348 "md_size": 0, 00:24:20.348 "dif_type": 0, 00:24:20.348 "dif_is_head_of_md": false, 00:24:20.348 "dif_pi_format": 0 00:24:20.348 } 00:24:20.348 }, 00:24:20.348 { 00:24:20.348 "method": "bdev_wait_for_examine" 00:24:20.348 } 00:24:20.348 ] 00:24:20.348 }, 00:24:20.348 { 00:24:20.348 "subsystem": "nbd", 00:24:20.348 "config": [] 00:24:20.348 }, 00:24:20.348 { 00:24:20.348 "subsystem": "scheduler", 00:24:20.348 "config": [ 00:24:20.348 { 00:24:20.348 "method": "framework_set_scheduler", 00:24:20.348 "params": { 00:24:20.348 "name": "static" 00:24:20.348 } 00:24:20.348 } 00:24:20.348 ] 00:24:20.348 }, 00:24:20.348 { 00:24:20.348 "subsystem": "nvmf", 00:24:20.348 "config": [ 00:24:20.348 { 00:24:20.348 "method": "nvmf_set_config", 00:24:20.348 "params": { 00:24:20.348 "discovery_filter": "match_any", 00:24:20.348 "admin_cmd_passthru": { 00:24:20.348 "identify_ctrlr": false 00:24:20.348 }, 00:24:20.348 "dhchap_digests": [ 00:24:20.348 "sha256", 00:24:20.348 "sha384", 00:24:20.348 "sha512" 00:24:20.348 ], 00:24:20.348 "dhchap_dhgroups": [ 00:24:20.348 "null", 00:24:20.348 "ffdhe2048", 00:24:20.348 "ffdhe3072", 00:24:20.348 "ffdhe4096", 00:24:20.348 "ffdhe6144", 00:24:20.348 "ffdhe8192" 00:24:20.348 ] 00:24:20.348 } 00:24:20.348 }, 00:24:20.348 { 00:24:20.348 "method": "nvmf_set_max_subsystems", 00:24:20.348 "params": { 00:24:20.348 "max_subsystems": 1024 00:24:20.348 } 00:24:20.348 }, 00:24:20.348 { 00:24:20.348 "method": "nvmf_set_crdt", 00:24:20.348 "params": { 00:24:20.348 
"crdt1": 0, 00:24:20.348 "crdt2": 0, 00:24:20.348 "crdt3": 0 00:24:20.348 } 00:24:20.348 }, 00:24:20.348 { 00:24:20.348 "method": "nvmf_create_transport", 00:24:20.348 "params": { 00:24:20.348 "trtype": "TCP", 00:24:20.348 "max_queue_depth": 128, 00:24:20.348 "max_io_qpairs_per_ctrlr": 127, 00:24:20.348 "in_capsule_data_size": 4096, 00:24:20.348 "max_io_size": 131072, 00:24:20.348 "io_unit_size": 131072, 00:24:20.348 "max_aq_depth": 128, 00:24:20.348 "num_shared_buffers": 511, 00:24:20.348 "buf_cache_size": 4294967295, 00:24:20.348 "dif_insert_or_strip": false, 00:24:20.348 "zcopy": false, 00:24:20.348 "c2h_success": false, 00:24:20.348 "sock_priority": 0, 00:24:20.348 "abort_timeout_sec": 1, 00:24:20.348 "ack_timeout": 0, 00:24:20.348 "data_wr_pool_size": 0 00:24:20.348 } 00:24:20.348 }, 00:24:20.348 { 00:24:20.348 "method": "nvmf_create_subsystem", 00:24:20.348 "params": { 00:24:20.348 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:20.348 "allow_any_host": false, 00:24:20.348 "serial_number": "SPDK00000000000001", 00:24:20.348 "model_number": "SPDK bdev Controller", 00:24:20.348 "max_namespaces": 10, 00:24:20.348 "min_cntlid": 1, 00:24:20.348 "max_cntlid": 65519, 00:24:20.348 "ana_reporting": false 00:24:20.348 } 00:24:20.348 }, 00:24:20.348 { 00:24:20.348 "method": "nvmf_subsystem_add_host", 00:24:20.348 "params": { 00:24:20.348 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:20.348 "host": "nqn.2016-06.io.spdk:host1", 00:24:20.348 "psk": "key0" 00:24:20.348 } 00:24:20.348 }, 00:24:20.348 { 00:24:20.348 "method": "nvmf_subsystem_add_ns", 00:24:20.348 "params": { 00:24:20.348 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:20.348 "namespace": { 00:24:20.348 "nsid": 1, 00:24:20.348 "bdev_name": "malloc0", 00:24:20.348 "nguid": "3B3C0CDB38D9442DA534CC94B86EA0D7", 00:24:20.348 "uuid": "3b3c0cdb-38d9-442d-a534-cc94b86ea0d7", 00:24:20.348 "no_auto_visible": false 00:24:20.348 } 00:24:20.348 } 00:24:20.348 }, 00:24:20.348 { 00:24:20.348 "method": "nvmf_subsystem_add_listener", 00:24:20.348 "params": { 00:24:20.348 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:20.348 "listen_address": { 00:24:20.348 "trtype": "TCP", 00:24:20.348 "adrfam": "IPv4", 00:24:20.348 "traddr": "10.0.0.2", 00:24:20.348 "trsvcid": "4420" 00:24:20.348 }, 00:24:20.348 "secure_channel": true 00:24:20.348 } 00:24:20.348 } 00:24:20.348 ] 00:24:20.348 } 00:24:20.348 ] 00:24:20.348 }' 00:24:20.348 14:40:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:20.348 14:40:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=1413035 00:24:20.348 14:40:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:24:20.348 14:40:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 1413035 00:24:20.348 14:40:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1413035 ']' 00:24:20.348 14:40:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:20.348 14:40:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:20.348 14:40:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:20.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:20.348 14:40:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:20.348 14:40:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:20.348 [2024-11-02 14:40:12.263890] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:24:20.348 [2024-11-02 14:40:12.263986] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:20.348 [2024-11-02 14:40:12.333810] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:20.608 [2024-11-02 14:40:12.421099] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:20.608 [2024-11-02 14:40:12.421164] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:20.608 [2024-11-02 14:40:12.421180] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:20.608 [2024-11-02 14:40:12.421193] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:20.608 [2024-11-02 14:40:12.421204] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:20.608 [2024-11-02 14:40:12.421311] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:20.865 [2024-11-02 14:40:12.685064] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:20.865 [2024-11-02 14:40:12.717074] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:20.865 [2024-11-02 14:40:12.717360] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:21.432 14:40:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:21.432 14:40:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:21.432 14:40:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:21.432 14:40:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:21.432 14:40:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:21.432 14:40:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:21.432 14:40:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=1413186 00:24:21.432 14:40:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 1413186 /var/tmp/bdevperf.sock 00:24:21.432 14:40:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1413186 ']' 00:24:21.432 14:40:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:21.432 14:40:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:24:21.432 14:40:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:21.432 14:40:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:24:21.432 14:40:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:24:21.432 "subsystems": [ 00:24:21.432 { 00:24:21.432 "subsystem": "keyring", 00:24:21.432 "config": [ 00:24:21.432 { 00:24:21.432 "method": "keyring_file_add_key", 00:24:21.432 "params": { 00:24:21.432 "name": "key0", 00:24:21.432 "path": "/tmp/tmp.wYcaPlBIdj" 00:24:21.432 } 00:24:21.432 } 00:24:21.432 ] 00:24:21.432 }, 00:24:21.432 { 00:24:21.432 "subsystem": "iobuf", 00:24:21.432 "config": [ 00:24:21.432 { 00:24:21.432 "method": "iobuf_set_options", 00:24:21.432 "params": { 00:24:21.432 "small_pool_count": 8192, 00:24:21.432 "large_pool_count": 1024, 00:24:21.432 "small_bufsize": 8192, 00:24:21.432 "large_bufsize": 135168 00:24:21.432 } 00:24:21.432 } 00:24:21.432 ] 00:24:21.432 }, 00:24:21.432 { 00:24:21.432 "subsystem": "sock", 00:24:21.432 "config": [ 00:24:21.432 { 00:24:21.432 "method": "sock_set_default_impl", 00:24:21.432 "params": { 00:24:21.432 "impl_name": "posix" 00:24:21.432 } 00:24:21.432 }, 00:24:21.432 { 00:24:21.432 "method": "sock_impl_set_options", 00:24:21.432 "params": { 00:24:21.432 "impl_name": "ssl", 00:24:21.432 "recv_buf_size": 4096, 00:24:21.432 "send_buf_size": 4096, 00:24:21.432 "enable_recv_pipe": true, 00:24:21.432 "enable_quickack": false, 00:24:21.432 "enable_placement_id": 0, 00:24:21.432 "enable_zerocopy_send_server": true, 00:24:21.432 "enable_zerocopy_send_client": false, 00:24:21.432 "zerocopy_threshold": 0, 00:24:21.432 "tls_version": 0, 00:24:21.432 "enable_ktls": false 00:24:21.432 } 00:24:21.432 }, 00:24:21.432 { 00:24:21.432 "method": "sock_impl_set_options", 00:24:21.432 "params": { 00:24:21.432 "impl_name": "posix", 00:24:21.432 "recv_buf_size": 2097152, 00:24:21.432 "send_buf_size": 2097152, 00:24:21.432 "enable_recv_pipe": true, 00:24:21.432 "enable_quickack": false, 00:24:21.432 "enable_placement_id": 0, 00:24:21.432 "enable_zerocopy_send_server": true, 00:24:21.432 "enable_zerocopy_send_client": false, 00:24:21.432 "zerocopy_threshold": 0, 00:24:21.432 "tls_version": 0, 00:24:21.432 "enable_ktls": false 00:24:21.432 } 00:24:21.432 } 00:24:21.432 ] 00:24:21.432 }, 00:24:21.432 { 00:24:21.432 "subsystem": "vmd", 00:24:21.432 "config": [] 00:24:21.432 }, 00:24:21.432 { 00:24:21.432 "subsystem": "accel", 00:24:21.432 "config": [ 00:24:21.432 { 00:24:21.432 "method": "accel_set_options", 00:24:21.432 "params": { 00:24:21.432 "small_cache_size": 128, 00:24:21.432 "large_cache_size": 16, 00:24:21.432 "task_count": 2048, 00:24:21.432 "sequence_count": 2048, 00:24:21.432 "buf_count": 2048 00:24:21.432 } 00:24:21.432 } 00:24:21.432 ] 00:24:21.432 }, 00:24:21.432 { 00:24:21.432 "subsystem": "bdev", 00:24:21.432 "config": [ 00:24:21.432 { 00:24:21.432 "method": "bdev_set_options", 00:24:21.432 "params": { 00:24:21.432 "bdev_io_pool_size": 65535, 00:24:21.432 "bdev_io_cache_size": 256, 00:24:21.432 "bdev_auto_examine": true, 00:24:21.432 "iobuf_small_cache_size": 128, 00:24:21.432 "iobuf_large_cache_size": 16 00:24:21.432 } 00:24:21.432 }, 00:24:21.432 { 00:24:21.432 "method": "bdev_raid_set_options", 00:24:21.432 "params": { 00:24:21.432 "process_window_size_kb": 1024, 00:24:21.432 "process_max_bandwidth_mb_sec": 0 00:24:21.432 } 00:24:21.432 }, 00:24:21.432 { 00:24:21.432 "method": "bdev_iscsi_set_options", 00:24:21.432 "params": { 00:24:21.432 "timeout_sec": 30 00:24:21.432 } 00:24:21.432 }, 00:24:21.432 { 00:24:21.432 "method": "bdev_nvme_set_options", 00:24:21.432 "params": { 00:24:21.432 "action_on_timeout": 
"none", 00:24:21.432 "timeout_us": 0, 00:24:21.432 "timeout_admin_us": 0, 00:24:21.432 "keep_alive_timeout_ms": 10000, 00:24:21.432 "arbitration_burst": 0, 00:24:21.432 "low_priority_weight": 0, 00:24:21.432 "medium_priority_weight": 0, 00:24:21.432 "high_priority_weight": 0, 00:24:21.432 "nvme_adminq_poll_period_us": 10000, 00:24:21.432 "nvme_ioq_poll_period_us": 0, 00:24:21.432 "io_queue_requests": 512, 00:24:21.432 "delay_cmd_submit": true, 00:24:21.432 "transport_retry_count": 4, 00:24:21.432 "bdev_retry_count": 3, 00:24:21.432 "transport_ack_timeout": 0, 00:24:21.433 "ctrlr_loss_timeout_sec": 0, 00:24:21.433 "reconnect_delay_sec": 0, 00:24:21.433 "fast_io_fail_timeout_sec": 0, 00:24:21.433 "disable_auto_failback": false, 00:24:21.433 "generate_uuids": false, 00:24:21.433 "transport_tos": 0, 00:24:21.433 "nvme_error_stat": false, 00:24:21.433 "rdma_srq_size": 0, 00:24:21.433 "io_path_stat": false, 00:24:21.433 "allow_accel_sequence": false, 00:24:21.433 "rdma_max_cq_size": 0, 00:24:21.433 "rdma_cm_event_timeout_ms": 0, 00:24:21.433 "dhchap_digests": [ 00:24:21.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:21.433 "sha256", 00:24:21.433 "sha384", 00:24:21.433 "sha512" 00:24:21.433 ], 00:24:21.433 "dhchap_dhgroups": [ 00:24:21.433 "null", 00:24:21.433 "ffdhe2048", 00:24:21.433 "ffdhe3072", 00:24:21.433 "ffdhe4096", 00:24:21.433 "ffdhe6144", 00:24:21.433 "ffdhe8192" 00:24:21.433 ] 00:24:21.433 } 00:24:21.433 }, 00:24:21.433 { 00:24:21.433 "method": "bdev_nvme_attach_controller", 00:24:21.433 "params": { 00:24:21.433 "name": "TLSTEST", 00:24:21.433 "trtype": "TCP", 00:24:21.433 "adrfam": "IPv4", 00:24:21.433 "traddr": "10.0.0.2", 00:24:21.433 "trsvcid": "4420", 00:24:21.433 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:21.433 "prchk_reftag": false, 00:24:21.433 "prchk_guard": false, 00:24:21.433 "ctrlr_loss_timeout_sec": 0, 00:24:21.433 "reconnect_delay_sec": 0, 00:24:21.433 "fast_io_fail_timeout_sec": 0, 00:24:21.433 "psk": "key0", 00:24:21.433 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:21.433 "hdgst": false, 00:24:21.433 "ddgst": false 00:24:21.433 } 00:24:21.433 }, 00:24:21.433 { 00:24:21.433 "method": "bdev_nvme_set_hotplug", 00:24:21.433 "params": { 00:24:21.433 "period_us": 100000, 00:24:21.433 "enable": false 00:24:21.433 } 00:24:21.433 }, 00:24:21.433 { 00:24:21.433 "method": "bdev_wait_for_examine" 00:24:21.433 } 00:24:21.433 ] 00:24:21.433 }, 00:24:21.433 { 00:24:21.433 "subsystem": "nbd", 00:24:21.433 "config": [] 00:24:21.433 } 00:24:21.433 ] 00:24:21.433 }' 00:24:21.433 14:40:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:21.433 14:40:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:21.433 [2024-11-02 14:40:13.313941] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:24:21.433 [2024-11-02 14:40:13.314025] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1413186 ] 00:24:21.433 [2024-11-02 14:40:13.372830] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:21.433 [2024-11-02 14:40:13.458416] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:21.694 [2024-11-02 14:40:13.639576] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:22.262 14:40:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:22.262 14:40:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:22.262 14:40:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:22.521 Running I/O for 10 seconds... 00:24:24.401 3110.00 IOPS, 12.15 MiB/s [2024-11-02T13:40:17.833Z] 3176.50 IOPS, 12.41 MiB/s [2024-11-02T13:40:18.769Z] 3212.33 IOPS, 12.55 MiB/s [2024-11-02T13:40:19.706Z] 3229.25 IOPS, 12.61 MiB/s [2024-11-02T13:40:20.644Z] 3251.00 IOPS, 12.70 MiB/s [2024-11-02T13:40:21.583Z] 3253.83 IOPS, 12.71 MiB/s [2024-11-02T13:40:22.517Z] 3266.86 IOPS, 12.76 MiB/s [2024-11-02T13:40:23.455Z] 3265.00 IOPS, 12.75 MiB/s [2024-11-02T13:40:24.830Z] 3260.78 IOPS, 12.74 MiB/s [2024-11-02T13:40:24.830Z] 3266.80 IOPS, 12.76 MiB/s 00:24:32.775 Latency(us) 00:24:32.775 [2024-11-02T13:40:24.830Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:32.775 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:32.775 Verification LBA range: start 0x0 length 0x2000 00:24:32.775 TLSTESTn1 : 10.04 3267.26 12.76 0.00 0.00 39084.27 5873.97 56312.41 00:24:32.775 [2024-11-02T13:40:24.830Z] =================================================================================================================== 00:24:32.775 [2024-11-02T13:40:24.830Z] Total : 3267.26 12.76 0.00 0.00 39084.27 5873.97 56312.41 00:24:32.775 { 00:24:32.775 "results": [ 00:24:32.775 { 00:24:32.775 "job": "TLSTESTn1", 00:24:32.775 "core_mask": "0x4", 00:24:32.775 "workload": "verify", 00:24:32.775 "status": "finished", 00:24:32.775 "verify_range": { 00:24:32.775 "start": 0, 00:24:32.775 "length": 8192 00:24:32.775 }, 00:24:32.775 "queue_depth": 128, 00:24:32.775 "io_size": 4096, 00:24:32.775 "runtime": 10.037449, 00:24:32.775 "iops": 3267.2644214680445, 00:24:32.775 "mibps": 12.762751646359549, 00:24:32.775 "io_failed": 0, 00:24:32.775 "io_timeout": 0, 00:24:32.775 "avg_latency_us": 39084.266667750846, 00:24:32.775 "min_latency_us": 5873.967407407407, 00:24:32.775 "max_latency_us": 56312.414814814816 00:24:32.775 } 00:24:32.775 ], 00:24:32.775 "core_count": 1 00:24:32.775 } 00:24:32.775 14:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:32.775 14:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 1413186 00:24:32.775 14:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1413186 ']' 00:24:32.775 14:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1413186 00:24:32.775 14:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@955 -- # uname 00:24:32.775 14:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:32.775 14:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1413186 00:24:32.775 14:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:32.775 14:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:32.775 14:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1413186' 00:24:32.775 killing process with pid 1413186 00:24:32.775 14:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1413186 00:24:32.775 Received shutdown signal, test time was about 10.000000 seconds 00:24:32.775 00:24:32.775 Latency(us) 00:24:32.775 [2024-11-02T13:40:24.830Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:32.775 [2024-11-02T13:40:24.830Z] =================================================================================================================== 00:24:32.775 [2024-11-02T13:40:24.830Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:32.775 14:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1413186 00:24:32.775 14:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 1413035 00:24:32.775 14:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1413035 ']' 00:24:32.775 14:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1413035 00:24:32.775 14:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:32.775 14:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:32.775 14:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1413035 00:24:32.775 14:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:32.775 14:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:32.775 14:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1413035' 00:24:32.775 killing process with pid 1413035 00:24:32.775 14:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1413035 00:24:32.775 14:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1413035 00:24:33.034 14:40:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:24:33.034 14:40:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:33.034 14:40:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:33.034 14:40:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:33.034 14:40:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=1414528 00:24:33.034 14:40:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:33.034 14:40:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 1414528 
00:24:33.034 14:40:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1414528 ']' 00:24:33.034 14:40:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:33.034 14:40:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:33.034 14:40:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:33.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:33.034 14:40:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:33.034 14:40:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:33.294 [2024-11-02 14:40:25.119350] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:24:33.294 [2024-11-02 14:40:25.119438] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:33.294 [2024-11-02 14:40:25.196806] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:33.294 [2024-11-02 14:40:25.292641] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:33.294 [2024-11-02 14:40:25.292714] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:33.294 [2024-11-02 14:40:25.292732] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:33.294 [2024-11-02 14:40:25.292746] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:33.294 [2024-11-02 14:40:25.292757] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
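Because this target is launched with '-e 0xFFFF', every tracepoint group is enabled, and the notices above describe how to inspect them while it runs. Following the log's own hint, a snapshot could be taken with (binary path assumed to match the build tree used here):

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_trace -s nvmf -i 0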
00:24:33.294 [2024-11-02 14:40:25.292790] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:33.553 14:40:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:33.553 14:40:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:33.553 14:40:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:33.553 14:40:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:33.553 14:40:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:33.553 14:40:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:33.553 14:40:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.wYcaPlBIdj 00:24:33.553 14:40:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.wYcaPlBIdj 00:24:33.553 14:40:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:33.810 [2024-11-02 14:40:25.741175] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:33.810 14:40:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:34.068 14:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:34.394 [2024-11-02 14:40:26.274591] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:34.394 [2024-11-02 14:40:26.274854] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:34.394 14:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:34.670 malloc0 00:24:34.670 14:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:34.928 14:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.wYcaPlBIdj 00:24:35.185 14:40:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:35.443 14:40:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=1414881 00:24:35.443 14:40:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:35.443 14:40:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:35.443 14:40:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 1414881 /var/tmp/bdevperf.sock 00:24:35.443 14:40:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@831 -- # '[' -z 1414881 ']' 00:24:35.443 14:40:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:35.443 14:40:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:35.443 14:40:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:35.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:35.443 14:40:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:35.443 14:40:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:35.443 [2024-11-02 14:40:27.456942] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:24:35.443 [2024-11-02 14:40:27.457045] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1414881 ] 00:24:35.701 [2024-11-02 14:40:27.516314] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:35.701 [2024-11-02 14:40:27.602193] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:35.701 14:40:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:35.701 14:40:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:35.701 14:40:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.wYcaPlBIdj 00:24:35.959 14:40:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:36.218 [2024-11-02 14:40:28.218221] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:36.476 nvme0n1 00:24:36.476 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:36.476 Running I/O for 1 seconds... 
00:24:37.412 3074.00 IOPS, 12.01 MiB/s 00:24:37.412 Latency(us) 00:24:37.412 [2024-11-02T13:40:29.467Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:37.412 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:37.412 Verification LBA range: start 0x0 length 0x2000 00:24:37.412 nvme0n1 : 1.04 3084.52 12.05 0.00 0.00 40801.17 9029.40 72623.60 00:24:37.412 [2024-11-02T13:40:29.467Z] =================================================================================================================== 00:24:37.412 [2024-11-02T13:40:29.467Z] Total : 3084.52 12.05 0.00 0.00 40801.17 9029.40 72623.60 00:24:37.412 { 00:24:37.412 "results": [ 00:24:37.412 { 00:24:37.412 "job": "nvme0n1", 00:24:37.412 "core_mask": "0x2", 00:24:37.412 "workload": "verify", 00:24:37.412 "status": "finished", 00:24:37.412 "verify_range": { 00:24:37.412 "start": 0, 00:24:37.412 "length": 8192 00:24:37.412 }, 00:24:37.413 "queue_depth": 128, 00:24:37.413 "io_size": 4096, 00:24:37.413 "runtime": 1.038088, 00:24:37.413 "iops": 3084.5169195675126, 00:24:37.413 "mibps": 12.048894217060596, 00:24:37.413 "io_failed": 0, 00:24:37.413 "io_timeout": 0, 00:24:37.413 "avg_latency_us": 40801.17446595877, 00:24:37.413 "min_latency_us": 9029.404444444444, 00:24:37.413 "max_latency_us": 72623.59703703703 00:24:37.413 } 00:24:37.413 ], 00:24:37.413 "core_count": 1 00:24:37.413 } 00:24:37.675 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 1414881 00:24:37.675 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1414881 ']' 00:24:37.675 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1414881 00:24:37.675 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:37.675 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:37.675 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1414881 00:24:37.675 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:37.675 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:37.675 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1414881' 00:24:37.675 killing process with pid 1414881 00:24:37.675 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1414881 00:24:37.675 Received shutdown signal, test time was about 1.000000 seconds 00:24:37.675 00:24:37.675 Latency(us) 00:24:37.675 [2024-11-02T13:40:29.730Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:37.675 [2024-11-02T13:40:29.730Z] =================================================================================================================== 00:24:37.675 [2024-11-02T13:40:29.730Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:37.675 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1414881 00:24:37.936 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 1414528 00:24:37.936 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1414528 ']' 00:24:37.936 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1414528 00:24:37.936 14:40:29 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:37.936 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:37.936 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1414528 00:24:37.936 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:37.936 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:37.936 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1414528' 00:24:37.936 killing process with pid 1414528 00:24:37.936 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1414528 00:24:37.936 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1414528 00:24:38.195 14:40:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:24:38.195 14:40:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:38.195 14:40:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:38.195 14:40:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:38.195 14:40:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:38.195 14:40:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=1415219 00:24:38.195 14:40:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 1415219 00:24:38.195 14:40:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1415219 ']' 00:24:38.195 14:40:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:38.195 14:40:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:38.195 14:40:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:38.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:38.195 14:40:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:38.195 14:40:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:38.195 [2024-11-02 14:40:30.107204] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:24:38.195 [2024-11-02 14:40:30.107313] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:38.195 [2024-11-02 14:40:30.185291] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:38.454 [2024-11-02 14:40:30.276808] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:38.454 [2024-11-02 14:40:30.276872] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:38.454 [2024-11-02 14:40:30.276888] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:38.454 [2024-11-02 14:40:30.276901] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:38.454 [2024-11-02 14:40:30.276914] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:38.454 [2024-11-02 14:40:30.276946] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:38.454 14:40:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:38.454 14:40:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:38.454 14:40:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:38.454 14:40:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:38.454 14:40:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:38.454 14:40:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:38.454 14:40:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:24:38.454 14:40:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.454 14:40:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:38.454 [2024-11-02 14:40:30.431488] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:38.454 malloc0 00:24:38.454 [2024-11-02 14:40:30.476670] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:38.454 [2024-11-02 14:40:30.476958] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:38.454 14:40:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.454 14:40:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=1415244 00:24:38.454 14:40:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:38.454 14:40:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 1415244 /var/tmp/bdevperf.sock 00:24:38.454 14:40:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1415244 ']' 00:24:38.454 14:40:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:38.454 14:40:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:38.454 14:40:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:38.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:38.454 14:40:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:38.454 14:40:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:38.712 [2024-11-02 14:40:30.552267] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:24:38.712 [2024-11-02 14:40:30.552351] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1415244 ] 00:24:38.712 [2024-11-02 14:40:30.615602] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:38.712 [2024-11-02 14:40:30.707761] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:38.971 14:40:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:38.971 14:40:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:38.971 14:40:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.wYcaPlBIdj 00:24:39.228 14:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:39.486 [2024-11-02 14:40:31.361648] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:39.486 nvme0n1 00:24:39.486 14:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:39.744 Running I/O for 1 seconds... 00:24:40.697 3041.00 IOPS, 11.88 MiB/s 00:24:40.697 Latency(us) 00:24:40.697 [2024-11-02T13:40:32.752Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:40.697 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:40.697 Verification LBA range: start 0x0 length 0x2000 00:24:40.697 nvme0n1 : 1.04 3046.65 11.90 0.00 0.00 41265.64 8835.22 60584.39 00:24:40.697 [2024-11-02T13:40:32.752Z] =================================================================================================================== 00:24:40.697 [2024-11-02T13:40:32.752Z] Total : 3046.65 11.90 0.00 0.00 41265.64 8835.22 60584.39 00:24:40.697 { 00:24:40.697 "results": [ 00:24:40.697 { 00:24:40.697 "job": "nvme0n1", 00:24:40.697 "core_mask": "0x2", 00:24:40.697 "workload": "verify", 00:24:40.697 "status": "finished", 00:24:40.697 "verify_range": { 00:24:40.697 "start": 0, 00:24:40.697 "length": 8192 00:24:40.697 }, 00:24:40.697 "queue_depth": 128, 00:24:40.697 "io_size": 4096, 00:24:40.697 "runtime": 1.040158, 00:24:40.697 "iops": 3046.6525277890473, 00:24:40.697 "mibps": 11.900986436675966, 00:24:40.697 "io_failed": 0, 00:24:40.697 "io_timeout": 0, 00:24:40.697 "avg_latency_us": 41265.63628858268, 00:24:40.697 "min_latency_us": 8835.223703703703, 00:24:40.697 "max_latency_us": 60584.39111111111 00:24:40.697 } 00:24:40.697 ], 00:24:40.697 "core_count": 1 00:24:40.697 } 00:24:40.697 14:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:24:40.697 14:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.697 14:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:40.697 14:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.697 14:40:32 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:24:40.697 "subsystems": [ 00:24:40.697 { 00:24:40.697 "subsystem": "keyring", 00:24:40.697 "config": [ 00:24:40.697 { 00:24:40.697 "method": "keyring_file_add_key", 00:24:40.697 "params": { 00:24:40.697 "name": "key0", 00:24:40.697 "path": "/tmp/tmp.wYcaPlBIdj" 00:24:40.697 } 00:24:40.697 } 00:24:40.697 ] 00:24:40.697 }, 00:24:40.697 { 00:24:40.697 "subsystem": "iobuf", 00:24:40.697 "config": [ 00:24:40.697 { 00:24:40.697 "method": "iobuf_set_options", 00:24:40.697 "params": { 00:24:40.697 "small_pool_count": 8192, 00:24:40.697 "large_pool_count": 1024, 00:24:40.697 "small_bufsize": 8192, 00:24:40.697 "large_bufsize": 135168 00:24:40.697 } 00:24:40.697 } 00:24:40.697 ] 00:24:40.697 }, 00:24:40.697 { 00:24:40.697 "subsystem": "sock", 00:24:40.697 "config": [ 00:24:40.697 { 00:24:40.697 "method": "sock_set_default_impl", 00:24:40.697 "params": { 00:24:40.697 "impl_name": "posix" 00:24:40.697 } 00:24:40.697 }, 00:24:40.697 { 00:24:40.697 "method": "sock_impl_set_options", 00:24:40.697 "params": { 00:24:40.697 "impl_name": "ssl", 00:24:40.697 "recv_buf_size": 4096, 00:24:40.697 "send_buf_size": 4096, 00:24:40.697 "enable_recv_pipe": true, 00:24:40.697 "enable_quickack": false, 00:24:40.697 "enable_placement_id": 0, 00:24:40.697 "enable_zerocopy_send_server": true, 00:24:40.697 "enable_zerocopy_send_client": false, 00:24:40.697 "zerocopy_threshold": 0, 00:24:40.697 "tls_version": 0, 00:24:40.697 "enable_ktls": false 00:24:40.697 } 00:24:40.697 }, 00:24:40.697 { 00:24:40.697 "method": "sock_impl_set_options", 00:24:40.697 "params": { 00:24:40.697 "impl_name": "posix", 00:24:40.697 "recv_buf_size": 2097152, 00:24:40.697 "send_buf_size": 2097152, 00:24:40.697 "enable_recv_pipe": true, 00:24:40.697 "enable_quickack": false, 00:24:40.697 "enable_placement_id": 0, 00:24:40.697 "enable_zerocopy_send_server": true, 00:24:40.697 "enable_zerocopy_send_client": false, 00:24:40.697 "zerocopy_threshold": 0, 00:24:40.697 "tls_version": 0, 00:24:40.697 "enable_ktls": false 00:24:40.697 } 00:24:40.697 } 00:24:40.697 ] 00:24:40.697 }, 00:24:40.697 { 00:24:40.697 "subsystem": "vmd", 00:24:40.697 "config": [] 00:24:40.697 }, 00:24:40.697 { 00:24:40.697 "subsystem": "accel", 00:24:40.697 "config": [ 00:24:40.697 { 00:24:40.697 "method": "accel_set_options", 00:24:40.697 "params": { 00:24:40.697 "small_cache_size": 128, 00:24:40.697 "large_cache_size": 16, 00:24:40.697 "task_count": 2048, 00:24:40.697 "sequence_count": 2048, 00:24:40.697 "buf_count": 2048 00:24:40.697 } 00:24:40.697 } 00:24:40.697 ] 00:24:40.697 }, 00:24:40.697 { 00:24:40.697 "subsystem": "bdev", 00:24:40.697 "config": [ 00:24:40.697 { 00:24:40.697 "method": "bdev_set_options", 00:24:40.697 "params": { 00:24:40.697 "bdev_io_pool_size": 65535, 00:24:40.697 "bdev_io_cache_size": 256, 00:24:40.697 "bdev_auto_examine": true, 00:24:40.697 "iobuf_small_cache_size": 128, 00:24:40.697 "iobuf_large_cache_size": 16 00:24:40.697 } 00:24:40.697 }, 00:24:40.697 { 00:24:40.697 "method": "bdev_raid_set_options", 00:24:40.697 "params": { 00:24:40.697 "process_window_size_kb": 1024, 00:24:40.697 "process_max_bandwidth_mb_sec": 0 00:24:40.697 } 00:24:40.697 }, 00:24:40.697 { 00:24:40.697 "method": "bdev_iscsi_set_options", 00:24:40.697 "params": { 00:24:40.697 "timeout_sec": 30 00:24:40.697 } 00:24:40.697 }, 00:24:40.697 { 00:24:40.697 "method": "bdev_nvme_set_options", 00:24:40.697 "params": { 00:24:40.697 "action_on_timeout": "none", 00:24:40.697 "timeout_us": 0, 00:24:40.697 
"timeout_admin_us": 0, 00:24:40.697 "keep_alive_timeout_ms": 10000, 00:24:40.697 "arbitration_burst": 0, 00:24:40.697 "low_priority_weight": 0, 00:24:40.697 "medium_priority_weight": 0, 00:24:40.697 "high_priority_weight": 0, 00:24:40.697 "nvme_adminq_poll_period_us": 10000, 00:24:40.697 "nvme_ioq_poll_period_us": 0, 00:24:40.697 "io_queue_requests": 0, 00:24:40.697 "delay_cmd_submit": true, 00:24:40.697 "transport_retry_count": 4, 00:24:40.697 "bdev_retry_count": 3, 00:24:40.697 "transport_ack_timeout": 0, 00:24:40.697 "ctrlr_loss_timeout_sec": 0, 00:24:40.697 "reconnect_delay_sec": 0, 00:24:40.697 "fast_io_fail_timeout_sec": 0, 00:24:40.697 "disable_auto_failback": false, 00:24:40.697 "generate_uuids": false, 00:24:40.697 "transport_tos": 0, 00:24:40.697 "nvme_error_stat": false, 00:24:40.697 "rdma_srq_size": 0, 00:24:40.697 "io_path_stat": false, 00:24:40.697 "allow_accel_sequence": false, 00:24:40.697 "rdma_max_cq_size": 0, 00:24:40.697 "rdma_cm_event_timeout_ms": 0, 00:24:40.697 "dhchap_digests": [ 00:24:40.697 "sha256", 00:24:40.697 "sha384", 00:24:40.697 "sha512" 00:24:40.697 ], 00:24:40.697 "dhchap_dhgroups": [ 00:24:40.697 "null", 00:24:40.697 "ffdhe2048", 00:24:40.697 "ffdhe3072", 00:24:40.697 "ffdhe4096", 00:24:40.697 "ffdhe6144", 00:24:40.697 "ffdhe8192" 00:24:40.697 ] 00:24:40.697 } 00:24:40.697 }, 00:24:40.697 { 00:24:40.697 "method": "bdev_nvme_set_hotplug", 00:24:40.697 "params": { 00:24:40.697 "period_us": 100000, 00:24:40.697 "enable": false 00:24:40.697 } 00:24:40.697 }, 00:24:40.697 { 00:24:40.697 "method": "bdev_malloc_create", 00:24:40.697 "params": { 00:24:40.697 "name": "malloc0", 00:24:40.697 "num_blocks": 8192, 00:24:40.698 "block_size": 4096, 00:24:40.698 "physical_block_size": 4096, 00:24:40.698 "uuid": "c3a552d1-b90f-4bec-ba67-b0ffc57b6556", 00:24:40.698 "optimal_io_boundary": 0, 00:24:40.698 "md_size": 0, 00:24:40.698 "dif_type": 0, 00:24:40.698 "dif_is_head_of_md": false, 00:24:40.698 "dif_pi_format": 0 00:24:40.698 } 00:24:40.698 }, 00:24:40.698 { 00:24:40.698 "method": "bdev_wait_for_examine" 00:24:40.698 } 00:24:40.698 ] 00:24:40.698 }, 00:24:40.698 { 00:24:40.698 "subsystem": "nbd", 00:24:40.698 "config": [] 00:24:40.698 }, 00:24:40.698 { 00:24:40.698 "subsystem": "scheduler", 00:24:40.698 "config": [ 00:24:40.698 { 00:24:40.698 "method": "framework_set_scheduler", 00:24:40.698 "params": { 00:24:40.698 "name": "static" 00:24:40.698 } 00:24:40.698 } 00:24:40.698 ] 00:24:40.698 }, 00:24:40.698 { 00:24:40.698 "subsystem": "nvmf", 00:24:40.698 "config": [ 00:24:40.698 { 00:24:40.698 "method": "nvmf_set_config", 00:24:40.698 "params": { 00:24:40.698 "discovery_filter": "match_any", 00:24:40.698 "admin_cmd_passthru": { 00:24:40.698 "identify_ctrlr": false 00:24:40.698 }, 00:24:40.698 "dhchap_digests": [ 00:24:40.698 "sha256", 00:24:40.698 "sha384", 00:24:40.698 "sha512" 00:24:40.698 ], 00:24:40.698 "dhchap_dhgroups": [ 00:24:40.698 "null", 00:24:40.698 "ffdhe2048", 00:24:40.698 "ffdhe3072", 00:24:40.698 "ffdhe4096", 00:24:40.698 "ffdhe6144", 00:24:40.698 "ffdhe8192" 00:24:40.698 ] 00:24:40.698 } 00:24:40.698 }, 00:24:40.698 { 00:24:40.698 "method": "nvmf_set_max_subsystems", 00:24:40.698 "params": { 00:24:40.698 "max_subsystems": 1024 00:24:40.698 } 00:24:40.698 }, 00:24:40.698 { 00:24:40.698 "method": "nvmf_set_crdt", 00:24:40.698 "params": { 00:24:40.698 "crdt1": 0, 00:24:40.698 "crdt2": 0, 00:24:40.698 "crdt3": 0 00:24:40.698 } 00:24:40.698 }, 00:24:40.698 { 00:24:40.698 "method": "nvmf_create_transport", 00:24:40.698 "params": { 00:24:40.698 "trtype": 
"TCP", 00:24:40.698 "max_queue_depth": 128, 00:24:40.698 "max_io_qpairs_per_ctrlr": 127, 00:24:40.698 "in_capsule_data_size": 4096, 00:24:40.698 "max_io_size": 131072, 00:24:40.698 "io_unit_size": 131072, 00:24:40.698 "max_aq_depth": 128, 00:24:40.698 "num_shared_buffers": 511, 00:24:40.698 "buf_cache_size": 4294967295, 00:24:40.698 "dif_insert_or_strip": false, 00:24:40.698 "zcopy": false, 00:24:40.698 "c2h_success": false, 00:24:40.698 "sock_priority": 0, 00:24:40.698 "abort_timeout_sec": 1, 00:24:40.698 "ack_timeout": 0, 00:24:40.698 "data_wr_pool_size": 0 00:24:40.698 } 00:24:40.698 }, 00:24:40.698 { 00:24:40.698 "method": "nvmf_create_subsystem", 00:24:40.698 "params": { 00:24:40.698 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:40.698 "allow_any_host": false, 00:24:40.698 "serial_number": "00000000000000000000", 00:24:40.698 "model_number": "SPDK bdev Controller", 00:24:40.698 "max_namespaces": 32, 00:24:40.698 "min_cntlid": 1, 00:24:40.698 "max_cntlid": 65519, 00:24:40.698 "ana_reporting": false 00:24:40.698 } 00:24:40.698 }, 00:24:40.698 { 00:24:40.698 "method": "nvmf_subsystem_add_host", 00:24:40.698 "params": { 00:24:40.698 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:40.698 "host": "nqn.2016-06.io.spdk:host1", 00:24:40.698 "psk": "key0" 00:24:40.698 } 00:24:40.698 }, 00:24:40.698 { 00:24:40.698 "method": "nvmf_subsystem_add_ns", 00:24:40.698 "params": { 00:24:40.698 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:40.698 "namespace": { 00:24:40.698 "nsid": 1, 00:24:40.698 "bdev_name": "malloc0", 00:24:40.698 "nguid": "C3A552D1B90F4BECBA67B0FFC57B6556", 00:24:40.698 "uuid": "c3a552d1-b90f-4bec-ba67-b0ffc57b6556", 00:24:40.698 "no_auto_visible": false 00:24:40.698 } 00:24:40.698 } 00:24:40.698 }, 00:24:40.698 { 00:24:40.698 "method": "nvmf_subsystem_add_listener", 00:24:40.698 "params": { 00:24:40.698 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:40.698 "listen_address": { 00:24:40.698 "trtype": "TCP", 00:24:40.698 "adrfam": "IPv4", 00:24:40.698 "traddr": "10.0.0.2", 00:24:40.698 "trsvcid": "4420" 00:24:40.698 }, 00:24:40.698 "secure_channel": false, 00:24:40.698 "sock_impl": "ssl" 00:24:40.698 } 00:24:40.698 } 00:24:40.698 ] 00:24:40.698 } 00:24:40.698 ] 00:24:40.698 }' 00:24:40.698 14:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:41.264 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:24:41.264 "subsystems": [ 00:24:41.264 { 00:24:41.264 "subsystem": "keyring", 00:24:41.264 "config": [ 00:24:41.264 { 00:24:41.264 "method": "keyring_file_add_key", 00:24:41.264 "params": { 00:24:41.264 "name": "key0", 00:24:41.264 "path": "/tmp/tmp.wYcaPlBIdj" 00:24:41.264 } 00:24:41.264 } 00:24:41.264 ] 00:24:41.264 }, 00:24:41.264 { 00:24:41.264 "subsystem": "iobuf", 00:24:41.264 "config": [ 00:24:41.264 { 00:24:41.264 "method": "iobuf_set_options", 00:24:41.264 "params": { 00:24:41.264 "small_pool_count": 8192, 00:24:41.264 "large_pool_count": 1024, 00:24:41.264 "small_bufsize": 8192, 00:24:41.264 "large_bufsize": 135168 00:24:41.264 } 00:24:41.264 } 00:24:41.264 ] 00:24:41.264 }, 00:24:41.264 { 00:24:41.264 "subsystem": "sock", 00:24:41.264 "config": [ 00:24:41.264 { 00:24:41.264 "method": "sock_set_default_impl", 00:24:41.264 "params": { 00:24:41.264 "impl_name": "posix" 00:24:41.264 } 00:24:41.264 }, 00:24:41.264 { 00:24:41.264 "method": "sock_impl_set_options", 00:24:41.264 "params": { 00:24:41.264 "impl_name": "ssl", 00:24:41.264 
"recv_buf_size": 4096, 00:24:41.264 "send_buf_size": 4096, 00:24:41.264 "enable_recv_pipe": true, 00:24:41.264 "enable_quickack": false, 00:24:41.264 "enable_placement_id": 0, 00:24:41.264 "enable_zerocopy_send_server": true, 00:24:41.264 "enable_zerocopy_send_client": false, 00:24:41.264 "zerocopy_threshold": 0, 00:24:41.264 "tls_version": 0, 00:24:41.264 "enable_ktls": false 00:24:41.264 } 00:24:41.264 }, 00:24:41.264 { 00:24:41.264 "method": "sock_impl_set_options", 00:24:41.264 "params": { 00:24:41.264 "impl_name": "posix", 00:24:41.264 "recv_buf_size": 2097152, 00:24:41.264 "send_buf_size": 2097152, 00:24:41.264 "enable_recv_pipe": true, 00:24:41.264 "enable_quickack": false, 00:24:41.264 "enable_placement_id": 0, 00:24:41.264 "enable_zerocopy_send_server": true, 00:24:41.264 "enable_zerocopy_send_client": false, 00:24:41.264 "zerocopy_threshold": 0, 00:24:41.264 "tls_version": 0, 00:24:41.264 "enable_ktls": false 00:24:41.264 } 00:24:41.264 } 00:24:41.264 ] 00:24:41.264 }, 00:24:41.264 { 00:24:41.264 "subsystem": "vmd", 00:24:41.264 "config": [] 00:24:41.264 }, 00:24:41.264 { 00:24:41.264 "subsystem": "accel", 00:24:41.264 "config": [ 00:24:41.264 { 00:24:41.264 "method": "accel_set_options", 00:24:41.264 "params": { 00:24:41.264 "small_cache_size": 128, 00:24:41.264 "large_cache_size": 16, 00:24:41.264 "task_count": 2048, 00:24:41.264 "sequence_count": 2048, 00:24:41.264 "buf_count": 2048 00:24:41.264 } 00:24:41.264 } 00:24:41.264 ] 00:24:41.264 }, 00:24:41.264 { 00:24:41.264 "subsystem": "bdev", 00:24:41.264 "config": [ 00:24:41.264 { 00:24:41.264 "method": "bdev_set_options", 00:24:41.264 "params": { 00:24:41.264 "bdev_io_pool_size": 65535, 00:24:41.264 "bdev_io_cache_size": 256, 00:24:41.264 "bdev_auto_examine": true, 00:24:41.264 "iobuf_small_cache_size": 128, 00:24:41.264 "iobuf_large_cache_size": 16 00:24:41.264 } 00:24:41.264 }, 00:24:41.264 { 00:24:41.264 "method": "bdev_raid_set_options", 00:24:41.264 "params": { 00:24:41.264 "process_window_size_kb": 1024, 00:24:41.264 "process_max_bandwidth_mb_sec": 0 00:24:41.264 } 00:24:41.264 }, 00:24:41.264 { 00:24:41.264 "method": "bdev_iscsi_set_options", 00:24:41.264 "params": { 00:24:41.264 "timeout_sec": 30 00:24:41.264 } 00:24:41.264 }, 00:24:41.264 { 00:24:41.264 "method": "bdev_nvme_set_options", 00:24:41.264 "params": { 00:24:41.264 "action_on_timeout": "none", 00:24:41.264 "timeout_us": 0, 00:24:41.264 "timeout_admin_us": 0, 00:24:41.264 "keep_alive_timeout_ms": 10000, 00:24:41.264 "arbitration_burst": 0, 00:24:41.264 "low_priority_weight": 0, 00:24:41.264 "medium_priority_weight": 0, 00:24:41.264 "high_priority_weight": 0, 00:24:41.264 "nvme_adminq_poll_period_us": 10000, 00:24:41.264 "nvme_ioq_poll_period_us": 0, 00:24:41.264 "io_queue_requests": 512, 00:24:41.264 "delay_cmd_submit": true, 00:24:41.264 "transport_retry_count": 4, 00:24:41.264 "bdev_retry_count": 3, 00:24:41.264 "transport_ack_timeout": 0, 00:24:41.264 "ctrlr_loss_timeout_sec": 0, 00:24:41.264 "reconnect_delay_sec": 0, 00:24:41.264 "fast_io_fail_timeout_sec": 0, 00:24:41.264 "disable_auto_failback": false, 00:24:41.264 "generate_uuids": false, 00:24:41.264 "transport_tos": 0, 00:24:41.264 "nvme_error_stat": false, 00:24:41.264 "rdma_srq_size": 0, 00:24:41.264 "io_path_stat": false, 00:24:41.264 "allow_accel_sequence": false, 00:24:41.264 "rdma_max_cq_size": 0, 00:24:41.264 "rdma_cm_event_timeout_ms": 0, 00:24:41.264 "dhchap_digests": [ 00:24:41.264 "sha256", 00:24:41.264 "sha384", 00:24:41.264 "sha512" 00:24:41.264 ], 00:24:41.264 "dhchap_dhgroups": [ 
00:24:41.264 "null", 00:24:41.264 "ffdhe2048", 00:24:41.264 "ffdhe3072", 00:24:41.264 "ffdhe4096", 00:24:41.264 "ffdhe6144", 00:24:41.264 "ffdhe8192" 00:24:41.264 ] 00:24:41.264 } 00:24:41.264 }, 00:24:41.264 { 00:24:41.264 "method": "bdev_nvme_attach_controller", 00:24:41.264 "params": { 00:24:41.264 "name": "nvme0", 00:24:41.264 "trtype": "TCP", 00:24:41.264 "adrfam": "IPv4", 00:24:41.264 "traddr": "10.0.0.2", 00:24:41.264 "trsvcid": "4420", 00:24:41.264 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:41.264 "prchk_reftag": false, 00:24:41.264 "prchk_guard": false, 00:24:41.264 "ctrlr_loss_timeout_sec": 0, 00:24:41.264 "reconnect_delay_sec": 0, 00:24:41.264 "fast_io_fail_timeout_sec": 0, 00:24:41.264 "psk": "key0", 00:24:41.264 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:41.264 "hdgst": false, 00:24:41.264 "ddgst": false 00:24:41.264 } 00:24:41.264 }, 00:24:41.264 { 00:24:41.264 "method": "bdev_nvme_set_hotplug", 00:24:41.264 "params": { 00:24:41.264 "period_us": 100000, 00:24:41.264 "enable": false 00:24:41.264 } 00:24:41.264 }, 00:24:41.264 { 00:24:41.264 "method": "bdev_enable_histogram", 00:24:41.264 "params": { 00:24:41.264 "name": "nvme0n1", 00:24:41.264 "enable": true 00:24:41.264 } 00:24:41.264 }, 00:24:41.264 { 00:24:41.264 "method": "bdev_wait_for_examine" 00:24:41.264 } 00:24:41.264 ] 00:24:41.264 }, 00:24:41.265 { 00:24:41.265 "subsystem": "nbd", 00:24:41.265 "config": [] 00:24:41.265 } 00:24:41.265 ] 00:24:41.265 }' 00:24:41.265 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 1415244 00:24:41.265 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1415244 ']' 00:24:41.265 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1415244 00:24:41.265 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:41.265 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:41.265 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1415244 00:24:41.265 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:41.265 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:41.265 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1415244' 00:24:41.265 killing process with pid 1415244 00:24:41.265 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1415244 00:24:41.265 Received shutdown signal, test time was about 1.000000 seconds 00:24:41.265 00:24:41.265 Latency(us) 00:24:41.265 [2024-11-02T13:40:33.320Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:41.265 [2024-11-02T13:40:33.320Z] =================================================================================================================== 00:24:41.265 [2024-11-02T13:40:33.320Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:41.265 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1415244 00:24:41.265 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 1415219 00:24:41.265 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1415219 ']' 00:24:41.265 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # 
kill -0 1415219 00:24:41.265 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:41.265 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:41.265 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1415219 00:24:41.524 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:41.524 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:41.524 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1415219' 00:24:41.524 killing process with pid 1415219 00:24:41.524 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1415219 00:24:41.524 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1415219 00:24:41.783 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:24:41.783 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:41.783 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:24:41.783 "subsystems": [ 00:24:41.783 { 00:24:41.783 "subsystem": "keyring", 00:24:41.783 "config": [ 00:24:41.783 { 00:24:41.783 "method": "keyring_file_add_key", 00:24:41.783 "params": { 00:24:41.783 "name": "key0", 00:24:41.783 "path": "/tmp/tmp.wYcaPlBIdj" 00:24:41.783 } 00:24:41.783 } 00:24:41.783 ] 00:24:41.783 }, 00:24:41.783 { 00:24:41.783 "subsystem": "iobuf", 00:24:41.783 "config": [ 00:24:41.783 { 00:24:41.783 "method": "iobuf_set_options", 00:24:41.783 "params": { 00:24:41.783 "small_pool_count": 8192, 00:24:41.783 "large_pool_count": 1024, 00:24:41.783 "small_bufsize": 8192, 00:24:41.783 "large_bufsize": 135168 00:24:41.783 } 00:24:41.783 } 00:24:41.783 ] 00:24:41.783 }, 00:24:41.783 { 00:24:41.783 "subsystem": "sock", 00:24:41.783 "config": [ 00:24:41.783 { 00:24:41.783 "method": "sock_set_default_impl", 00:24:41.783 "params": { 00:24:41.783 "impl_name": "posix" 00:24:41.783 } 00:24:41.783 }, 00:24:41.783 { 00:24:41.783 "method": "sock_impl_set_options", 00:24:41.783 "params": { 00:24:41.783 "impl_name": "ssl", 00:24:41.783 "recv_buf_size": 4096, 00:24:41.783 "send_buf_size": 4096, 00:24:41.783 "enable_recv_pipe": true, 00:24:41.783 "enable_quickack": false, 00:24:41.783 "enable_placement_id": 0, 00:24:41.783 "enable_zerocopy_send_server": true, 00:24:41.783 "enable_zerocopy_send_client": false, 00:24:41.783 "zerocopy_threshold": 0, 00:24:41.783 "tls_version": 0, 00:24:41.783 "enable_ktls": false 00:24:41.783 } 00:24:41.783 }, 00:24:41.783 { 00:24:41.783 "method": "sock_impl_set_options", 00:24:41.783 "params": { 00:24:41.783 "impl_name": "posix", 00:24:41.783 "recv_buf_size": 2097152, 00:24:41.783 "send_buf_size": 2097152, 00:24:41.783 "enable_recv_pipe": true, 00:24:41.783 "enable_quickack": false, 00:24:41.783 "enable_placement_id": 0, 00:24:41.783 "enable_zerocopy_send_server": true, 00:24:41.783 "enable_zerocopy_send_client": false, 00:24:41.783 "zerocopy_threshold": 0, 00:24:41.783 "tls_version": 0, 00:24:41.783 "enable_ktls": false 00:24:41.783 } 00:24:41.784 } 00:24:41.784 ] 00:24:41.784 }, 00:24:41.784 { 00:24:41.784 "subsystem": "vmd", 00:24:41.784 "config": [] 00:24:41.784 }, 00:24:41.784 { 00:24:41.784 "subsystem": "accel", 00:24:41.784 "config": [ 00:24:41.784 { 
00:24:41.784 "method": "accel_set_options", 00:24:41.784 "params": { 00:24:41.784 "small_cache_size": 128, 00:24:41.784 "large_cache_size": 16, 00:24:41.784 "task_count": 2048, 00:24:41.784 "sequence_count": 2048, 00:24:41.784 "buf_count": 2048 00:24:41.784 } 00:24:41.784 } 00:24:41.784 ] 00:24:41.784 }, 00:24:41.784 { 00:24:41.784 "subsystem": "bdev", 00:24:41.784 "config": [ 00:24:41.784 { 00:24:41.784 "method": "bdev_set_options", 00:24:41.784 "params": { 00:24:41.784 "bdev_io_pool_size": 65535, 00:24:41.784 "bdev_io_cache_size": 256, 00:24:41.784 "bdev_auto_examine": true, 00:24:41.784 "iobuf_small_cache_size": 128, 00:24:41.784 "iobuf_large_cache_size": 16 00:24:41.784 } 00:24:41.784 }, 00:24:41.784 { 00:24:41.784 "method": "bdev_raid_set_options", 00:24:41.784 "params": { 00:24:41.784 "process_window_size_kb": 1024, 00:24:41.784 "process_max_bandwidth_mb_sec": 0 00:24:41.784 } 00:24:41.784 }, 00:24:41.784 { 00:24:41.784 "method": "bdev_iscsi_set_options", 00:24:41.784 "params": { 00:24:41.784 "timeout_sec": 30 00:24:41.784 } 00:24:41.784 }, 00:24:41.784 { 00:24:41.784 "method": "bdev_nvme_set_options", 00:24:41.784 "params": { 00:24:41.784 "action_on_timeout": "none", 00:24:41.784 "timeout_us": 0, 00:24:41.784 "timeout_admin_us": 0, 00:24:41.784 "keep_alive_timeout_ms": 10000, 00:24:41.784 "arbitration_burst": 0, 00:24:41.784 "low_priority_weight": 0, 00:24:41.784 "medium_priority_weight": 0, 00:24:41.784 "high_priority_weight": 0, 00:24:41.784 "nvme_adminq_poll_period_us": 10000, 00:24:41.784 "nvme_ioq_poll_period_us": 0, 00:24:41.784 "io_queue_requests": 0, 00:24:41.784 "delay_cmd_submit": true, 00:24:41.784 "transport_retry_count": 4, 00:24:41.784 "bdev_retry_count": 3, 00:24:41.784 "transport_ack_timeout": 0, 00:24:41.784 "ctrlr_loss_timeout_sec": 0, 00:24:41.784 "reconnect_delay_sec": 0, 00:24:41.784 "fast_io_fail_timeout_sec": 0, 00:24:41.784 "disable_auto_failback": false, 00:24:41.784 "generate_uuids": false, 00:24:41.784 "transport_tos": 0, 00:24:41.784 "nvme_error_stat": false, 00:24:41.784 "rdma_srq_size": 0, 00:24:41.784 "io_path_stat": false, 00:24:41.784 "allow_accel_sequence": false, 00:24:41.784 "rdma_max_cq_size": 0, 00:24:41.784 "rdma_cm_event_timeout_ms": 0, 00:24:41.784 "dhchap_digests": [ 00:24:41.784 "sha256", 00:24:41.784 "sha384", 00:24:41.784 "sha512" 00:24:41.784 ], 00:24:41.784 "dhchap_dhgroups": [ 00:24:41.784 "null", 00:24:41.784 "ffdhe2048", 00:24:41.784 "ffdhe3072", 00:24:41.784 "ffdhe4096", 00:24:41.784 "ffdhe6144", 00:24:41.784 "ffdhe8192" 00:24:41.784 ] 00:24:41.784 } 00:24:41.784 }, 00:24:41.784 { 00:24:41.784 "method": "bdev_nvme_set_hotplug", 00:24:41.784 "params": { 00:24:41.784 "period_us": 100000, 00:24:41.784 "enable": false 00:24:41.784 } 00:24:41.784 }, 00:24:41.784 { 00:24:41.784 "method": "bdev_malloc_create", 00:24:41.784 "params": { 00:24:41.784 "name": "malloc0", 00:24:41.784 "num_blocks": 8192, 00:24:41.784 "block_size": 4096, 00:24:41.784 "physical_block_size": 4096, 00:24:41.784 "uuid": "c3a552d1-b90f-4bec-ba67-b0ffc57b6556", 00:24:41.784 "optimal_io_boundary": 0, 00:24:41.784 "md_size": 0, 00:24:41.784 "dif_type": 0, 00:24:41.784 "dif_is_head_of_md": false, 00:24:41.784 "dif_pi_format": 0 00:24:41.784 } 00:24:41.784 }, 00:24:41.784 { 00:24:41.784 "method": "bdev_wait_for_examine" 00:24:41.784 } 00:24:41.784 ] 00:24:41.784 }, 00:24:41.784 { 00:24:41.784 "subsystem": "nbd", 00:24:41.784 "config": [] 00:24:41.784 }, 00:24:41.784 { 00:24:41.784 "subsystem": "scheduler", 00:24:41.784 "config": [ 00:24:41.784 { 00:24:41.784 "method": 
"framework_set_scheduler", 00:24:41.784 "params": { 00:24:41.784 "name": "static" 00:24:41.784 } 00:24:41.784 } 00:24:41.784 ] 00:24:41.784 }, 00:24:41.784 { 00:24:41.784 "subsystem": "nvmf", 00:24:41.784 "config": [ 00:24:41.784 { 00:24:41.784 "method": "nvmf_set_config", 00:24:41.784 "params": { 00:24:41.784 "discovery_filter": "match_any", 00:24:41.784 "admin_cmd_passthru": { 00:24:41.784 "identify_ctrlr": false 00:24:41.784 }, 00:24:41.784 "dhchap_digests": [ 00:24:41.784 "sha256", 00:24:41.784 "sha384", 00:24:41.784 "sha512" 00:24:41.784 ], 00:24:41.784 "dhchap_dhgroups": [ 00:24:41.784 "null", 00:24:41.784 "ffdhe2048", 00:24:41.784 "ffdhe3072", 00:24:41.784 "ffdhe4096", 00:24:41.784 "ffdhe6144", 00:24:41.784 "ffdhe8192" 00:24:41.784 ] 00:24:41.784 } 00:24:41.784 }, 00:24:41.784 { 00:24:41.784 "method": "nvmf_set_max_subsystems", 00:24:41.784 "params": { 00:24:41.784 "max_subsystems": 1024 00:24:41.784 } 00:24:41.784 }, 00:24:41.784 { 00:24:41.784 "method": "nvmf_set_crdt", 00:24:41.784 "params": { 00:24:41.784 "crdt1": 0, 00:24:41.784 "crdt2": 0, 00:24:41.784 "crdt3": 0 00:24:41.784 } 00:24:41.784 }, 00:24:41.784 { 00:24:41.784 "method": "nvmf_create_transport", 00:24:41.784 "params": { 00:24:41.784 "trtype": "TCP", 00:24:41.784 "max_queue_depth": 128, 00:24:41.784 "max_io_qpairs_per_ctrlr": 127, 00:24:41.784 "in_capsule_data_size": 4096, 00:24:41.784 "max_io_size": 131072, 00:24:41.784 "io_unit_size": 131072, 00:24:41.784 "max_aq_depth": 128, 00:24:41.784 "num_shared_buffers": 511, 00:24:41.784 "buf_cache_size": 4294967295, 00:24:41.784 "dif_insert_or_strip": false, 00:24:41.784 "zcopy": false, 00:24:41.784 "c2h_success": false, 00:24:41.784 "sock_priority": 0, 00:24:41.784 "abort_timeout_sec": 1, 00:24:41.784 "ack_timeout": 0, 00:24:41.784 "data_wr_pool_size": 0 00:24:41.784 } 00:24:41.784 }, 00:24:41.784 { 00:24:41.784 "method": "nvmf_create_subsystem", 00:24:41.784 "params": { 00:24:41.784 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:41.784 "allow_any_host": false, 00:24:41.784 "serial_number": "00000000000000000000", 00:24:41.784 "model_number": "SPDK bdev Controller", 00:24:41.784 "max_namespaces": 32, 00:24:41.784 "min_cntlid": 1, 00:24:41.784 "max_cntlid": 65519, 00:24:41.784 "ana_reporting": false 00:24:41.784 } 00:24:41.784 }, 00:24:41.784 { 00:24:41.784 "method": "nvmf_subsystem_add_host", 00:24:41.784 "params": { 00:24:41.784 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:41.784 "host": "nqn.2016-06.io.spdk:host1", 00:24:41.784 "psk": "key0" 00:24:41.784 } 00:24:41.784 }, 00:24:41.784 { 00:24:41.784 "method": "nvmf_subsystem_add_ns", 00:24:41.784 "params": { 00:24:41.784 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:41.784 "namespace": { 00:24:41.784 "nsid": 1, 00:24:41.784 "bdev_name": "malloc0", 00:24:41.784 "nguid": "C3A552D1B90F4BECBA67B0FFC57B6556", 00:24:41.784 "uuid": "c3a552d1-b90f-4bec-ba67-b0ffc57b6556", 00:24:41.784 "no_auto_visible": false 00:24:41.784 } 00:24:41.784 } 00:24:41.784 }, 00:24:41.784 { 00:24:41.784 "method": "nvmf_subsystem_add_listener", 00:24:41.784 "params": { 00:24:41.784 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:41.784 "listen_address": { 00:24:41.784 "trtype": "TCP", 00:24:41.784 "adrfam": "IPv4", 00:24:41.784 "traddr": "10.0.0.2", 00:24:41.784 "trsvcid": "4420" 00:24:41.784 }, 00:24:41.784 "secure_channel": false, 00:24:41.784 "sock_impl": "ssl" 00:24:41.784 } 00:24:41.784 } 00:24:41.784 ] 00:24:41.784 } 00:24:41.784 ] 00:24:41.784 }' 00:24:41.784 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 
00:24:41.784 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:41.784 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=1415650 00:24:41.784 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:24:41.784 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 1415650 00:24:41.784 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1415650 ']' 00:24:41.784 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:41.784 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:41.784 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:41.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:41.784 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:41.784 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:41.784 [2024-11-02 14:40:33.662116] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:24:41.784 [2024-11-02 14:40:33.662218] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:41.784 [2024-11-02 14:40:33.729750] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:41.784 [2024-11-02 14:40:33.819914] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:41.784 [2024-11-02 14:40:33.819979] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:41.785 [2024-11-02 14:40:33.819995] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:41.785 [2024-11-02 14:40:33.820009] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:41.785 [2024-11-02 14:40:33.820020] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:41.785 [2024-11-02 14:40:33.820116] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:42.044 [2024-11-02 14:40:34.077079] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:42.304 [2024-11-02 14:40:34.109100] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:42.304 [2024-11-02 14:40:34.109381] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:42.871 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:42.871 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:42.871 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:42.871 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:42.871 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:42.871 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:42.871 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=1415801 00:24:42.871 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 1415801 /var/tmp/bdevperf.sock 00:24:42.871 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1415801 ']' 00:24:42.871 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:42.871 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:24:42.871 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:42.871 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:24:42.871 "subsystems": [ 00:24:42.871 { 00:24:42.871 "subsystem": "keyring", 00:24:42.871 "config": [ 00:24:42.871 { 00:24:42.871 "method": "keyring_file_add_key", 00:24:42.871 "params": { 00:24:42.871 "name": "key0", 00:24:42.871 "path": "/tmp/tmp.wYcaPlBIdj" 00:24:42.871 } 00:24:42.871 } 00:24:42.871 ] 00:24:42.871 }, 00:24:42.871 { 00:24:42.871 "subsystem": "iobuf", 00:24:42.871 "config": [ 00:24:42.871 { 00:24:42.871 "method": "iobuf_set_options", 00:24:42.871 "params": { 00:24:42.871 "small_pool_count": 8192, 00:24:42.871 "large_pool_count": 1024, 00:24:42.871 "small_bufsize": 8192, 00:24:42.871 "large_bufsize": 135168 00:24:42.871 } 00:24:42.871 } 00:24:42.871 ] 00:24:42.871 }, 00:24:42.871 { 00:24:42.871 "subsystem": "sock", 00:24:42.871 "config": [ 00:24:42.871 { 00:24:42.871 "method": "sock_set_default_impl", 00:24:42.871 "params": { 00:24:42.871 "impl_name": "posix" 00:24:42.871 } 00:24:42.871 }, 00:24:42.871 { 00:24:42.871 "method": "sock_impl_set_options", 00:24:42.871 "params": { 00:24:42.871 "impl_name": "ssl", 00:24:42.871 "recv_buf_size": 4096, 00:24:42.871 "send_buf_size": 4096, 00:24:42.871 "enable_recv_pipe": true, 00:24:42.871 "enable_quickack": false, 00:24:42.871 "enable_placement_id": 0, 00:24:42.871 "enable_zerocopy_send_server": true, 00:24:42.871 "enable_zerocopy_send_client": false, 00:24:42.871 "zerocopy_threshold": 0, 00:24:42.871 "tls_version": 0, 00:24:42.871 "enable_ktls": false 00:24:42.871 } 
00:24:42.871 }, 00:24:42.871 { 00:24:42.871 "method": "sock_impl_set_options", 00:24:42.871 "params": { 00:24:42.871 "impl_name": "posix", 00:24:42.871 "recv_buf_size": 2097152, 00:24:42.871 "send_buf_size": 2097152, 00:24:42.871 "enable_recv_pipe": true, 00:24:42.871 "enable_quickack": false, 00:24:42.871 "enable_placement_id": 0, 00:24:42.871 "enable_zerocopy_send_server": true, 00:24:42.871 "enable_zerocopy_send_client": false, 00:24:42.871 "zerocopy_threshold": 0, 00:24:42.871 "tls_version": 0, 00:24:42.871 "enable_ktls": false 00:24:42.871 } 00:24:42.871 } 00:24:42.871 ] 00:24:42.871 }, 00:24:42.871 { 00:24:42.871 "subsystem": "vmd", 00:24:42.871 "config": [] 00:24:42.871 }, 00:24:42.871 { 00:24:42.871 "subsystem": "accel", 00:24:42.871 "config": [ 00:24:42.871 { 00:24:42.871 "method": "accel_set_options", 00:24:42.871 "params": { 00:24:42.871 "small_cache_size": 128, 00:24:42.871 "large_cache_size": 16, 00:24:42.871 "task_count": 2048, 00:24:42.871 "sequence_count": 2048, 00:24:42.871 "buf_count": 2048 00:24:42.871 } 00:24:42.871 } 00:24:42.871 ] 00:24:42.871 }, 00:24:42.871 { 00:24:42.871 "subsystem": "bdev", 00:24:42.871 "config": [ 00:24:42.871 { 00:24:42.871 "method": "bdev_set_options", 00:24:42.871 "params": { 00:24:42.871 "bdev_io_pool_size": 65535, 00:24:42.871 "bdev_io_cache_size": 256, 00:24:42.871 "bdev_auto_examine": true, 00:24:42.871 "iobuf_small_cache_size": 128, 00:24:42.871 "iobuf_large_cache_size": 16 00:24:42.871 } 00:24:42.871 }, 00:24:42.871 { 00:24:42.871 "method": "bdev_raid_set_options", 00:24:42.871 "params": { 00:24:42.871 "process_window_size_kb": 1024, 00:24:42.871 "process_max_bandwidth_mb_sec": 0 00:24:42.871 } 00:24:42.871 }, 00:24:42.871 { 00:24:42.871 "method": "bdev_iscsi_set_options", 00:24:42.871 "params": { 00:24:42.871 "timeout_sec": 30 00:24:42.871 } 00:24:42.871 }, 00:24:42.871 { 00:24:42.871 "method": "bdev_nvme_set_options", 00:24:42.871 "params": { 00:24:42.872 "action_on_timeout": "none", 00:24:42.872 "timeout_us": 0, 00:24:42.872 "timeout_admin_us": 0, 00:24:42.872 "keep_alive_timeout_ms": 10000, 00:24:42.872 "arbitration_burst": 0, 00:24:42.872 "low_priority_weight": 0, 00:24:42.872 "medium_priority_weight": 0, 00:24:42.872 "high_priority_weight": 0, 00:24:42.872 "nvme_adminq_poll_period_us": 10000, 00:24:42.872 "nvme_ioq_poll_period_us": 0, 00:24:42.872 "io_queue_requests": 512, 00:24:42.872 "delay_cmd_submit": true, 00:24:42.872 "transport_retry_count": 4, 00:24:42.872 "bdev_retry_count": 3, 00:24:42.872 "transport_ack_timeout": 0, 00:24:42.872 "ctrlr_loss_timeout_sec": 0, 00:24:42.872 "reconnect_delay_sec": 0, 00:24:42.872 "fast_io_fail_timeout_sec": 0, 00:24:42.872 "disable_auto_failback": false, 00:24:42.872 "generate_uuids": false, 00:24:42.872 "transport_tos": 0, 00:24:42.872 "nvme_error_stat": false, 00:24:42.872 "rdma_srq_size": 0, 00:24:42.872 "io_path_stat": false, 00:24:42.872 "allow_accel_sequence": false, 00:24:42.872 "rdma_max_cq_size": 0, 00:24:42.872 "rdma_cm_event_timeout_ms": 0, 00:24:42.872 "dhchap_digests": [ 00:24:42.872 "sha256", 00:24:42.872 "sha384", 00:24:42.872 "sha512" 00:24:42.872 ], 00:24:42.872 "dhchap_dhgroups": [ 00:24:42.872 "null", 00:24:42.872 "ffdhe2048", 00:24:42.872 "ffdhe3072", 00:24:42.872 "ffdhe4096", 00:24:42.872 "ffdhe6144", 00:24:42.872 "ffdhe8192" 00:24:42.872 ] 00:24:42.872 } 00:24:42.872 }, 00:24:42.872 { 00:24:42.872 "method": "bdev_nvme_attach_controller", 00:24:42.872 "params": { 00:24:42.872 "name": "nvme0", 00:24:42.872 "trtype": "TCP", 00:24:42.872 "adrfam": "IPv4", 00:24:42.872 
"traddr": "10.0.0.2", 00:24:42.872 "trsvcid": "4420", 00:24:42.872 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:42.872 "prchk_reftag": false, 00:24:42.872 "prchk_guard": false, 00:24:42.872 "ctrlr_loss_timeout_sec": 0, 00:24:42.872 "reconnect_delay_sec": 0, 00:24:42.872 "fast_io_fail_timeout_sec": 0, 00:24:42.872 "psk": "key0", 00:24:42.872 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:42.872 "hdgst": false, 00:24:42.872 "ddgst": false 00:24:42.872 } 00:24:42.872 }, 00:24:42.872 { 00:24:42.872 "method": "bdev_nvme_set_hotplug", 00:24:42.872 "params": { 00:24:42.872 "period_us": 100000, 00:24:42.872 "enable": false 00:24:42.872 } 00:24:42.872 }, 00:24:42.872 { 00:24:42.872 "method": "bdev_enable_histogram", 00:24:42.872 "params": { 00:24:42.872 "name": "nvme0n1", 00:24:42.872 "enable": true 00:24:42.872 } 00:24:42.872 }, 00:24:42.872 { 00:24:42.872 "method": "bdev_wait_for_examine" 00:24:42.872 } 00:24:42.872 ] 00:24:42.872 }, 00:24:42.872 { 00:24:42.872 "subsystem": "nbd", 00:24:42.872 "config": [] 00:24:42.872 } 00:24:42.872 ] 00:24:42.872 }' 00:24:42.872 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:42.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:42.872 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:42.872 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:42.872 [2024-11-02 14:40:34.779303] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:24:42.872 [2024-11-02 14:40:34.779387] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1415801 ] 00:24:42.872 [2024-11-02 14:40:34.840929] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:43.132 [2024-11-02 14:40:34.927707] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:43.132 [2024-11-02 14:40:35.108640] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:44.066 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:44.066 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:44.066 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:44.066 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:24:44.066 14:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.066 14:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:44.323 Running I/O for 1 seconds... 
00:24:45.259 2162.00 IOPS, 8.45 MiB/s 00:24:45.259 Latency(us) 00:24:45.259 [2024-11-02T13:40:37.314Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:45.259 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:45.259 Verification LBA range: start 0x0 length 0x2000 00:24:45.259 nvme0n1 : 1.06 2170.75 8.48 0.00 0.00 57690.86 6650.69 85051.16 00:24:45.259 [2024-11-02T13:40:37.314Z] =================================================================================================================== 00:24:45.259 [2024-11-02T13:40:37.315Z] Total : 2170.75 8.48 0.00 0.00 57690.86 6650.69 85051.16 00:24:45.260 { 00:24:45.260 "results": [ 00:24:45.260 { 00:24:45.260 "job": "nvme0n1", 00:24:45.260 "core_mask": "0x2", 00:24:45.260 "workload": "verify", 00:24:45.260 "status": "finished", 00:24:45.260 "verify_range": { 00:24:45.260 "start": 0, 00:24:45.260 "length": 8192 00:24:45.260 }, 00:24:45.260 "queue_depth": 128, 00:24:45.260 "io_size": 4096, 00:24:45.260 "runtime": 1.055396, 00:24:45.260 "iops": 2170.749178507404, 00:24:45.260 "mibps": 8.479488978544547, 00:24:45.260 "io_failed": 0, 00:24:45.260 "io_timeout": 0, 00:24:45.260 "avg_latency_us": 57690.856513571634, 00:24:45.260 "min_latency_us": 6650.69037037037, 00:24:45.260 "max_latency_us": 85051.16444444444 00:24:45.260 } 00:24:45.260 ], 00:24:45.260 "core_count": 1 00:24:45.260 } 00:24:45.260 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:24:45.260 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:24:45.260 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:24:45.260 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:24:45.260 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:24:45.260 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:24:45.260 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:45.260 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:24:45.260 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:24:45.260 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:24:45.260 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:45.260 nvmf_trace.0 00:24:45.519 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:24:45.519 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1415801 00:24:45.519 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1415801 ']' 00:24:45.519 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1415801 00:24:45.519 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:45.519 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:45.519 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1415801 
00:24:45.519 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:45.519 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:45.520 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1415801' 00:24:45.520 killing process with pid 1415801 00:24:45.520 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1415801 00:24:45.520 Received shutdown signal, test time was about 1.000000 seconds 00:24:45.520 00:24:45.520 Latency(us) 00:24:45.520 [2024-11-02T13:40:37.575Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:45.520 [2024-11-02T13:40:37.575Z] =================================================================================================================== 00:24:45.520 [2024-11-02T13:40:37.575Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:45.520 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1415801 00:24:45.780 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:24:45.780 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # nvmfcleanup 00:24:45.780 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:24:45.780 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:45.780 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:24:45.780 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:45.780 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:45.780 rmmod nvme_tcp 00:24:45.780 rmmod nvme_fabrics 00:24:45.780 rmmod nvme_keyring 00:24:45.780 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:45.780 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:24:45.780 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:24:45.780 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@513 -- # '[' -n 1415650 ']' 00:24:45.780 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@514 -- # killprocess 1415650 00:24:45.780 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1415650 ']' 00:24:45.780 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1415650 00:24:45.780 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:45.780 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:45.780 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1415650 00:24:45.780 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:45.780 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:45.780 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1415650' 00:24:45.780 killing process with pid 1415650 00:24:45.780 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1415650 00:24:45.780 14:40:37 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1415650 00:24:46.040 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:24:46.040 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:24:46.040 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:24:46.040 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:24:46.040 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # iptables-save 00:24:46.040 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:24:46.040 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # iptables-restore 00:24:46.040 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:46.040 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:46.040 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:46.040 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:46.040 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:48.580 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:48.580 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.Qtb9B8GO03 /tmp/tmp.qJAxBsBD4K /tmp/tmp.wYcaPlBIdj 00:24:48.580 00:24:48.580 real 1m24.896s 00:24:48.580 user 2m19.982s 00:24:48.580 sys 0m28.041s 00:24:48.580 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:48.580 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:48.580 ************************************ 00:24:48.580 END TEST nvmf_tls 00:24:48.580 ************************************ 00:24:48.580 14:40:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:48.580 14:40:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:48.580 14:40:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:48.580 14:40:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:48.580 ************************************ 00:24:48.580 START TEST nvmf_fips 00:24:48.580 ************************************ 00:24:48.580 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:48.580 * Looking for test storage... 
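Note on the nvmf_tls teardown just above: stripped of the xtrace noise, killprocess / nvmftestfini / cleanup reduce to roughly the following. PIDs and temporary file names are specific to this run, and the explicit netns delete is an assumption about what _remove_spdk_ns does, since its output is suppressed in the log:

  kill 1415801 && wait 1415801                                  # stop bdevperf
  sync; modprobe -v -r nvme-tcp; modprobe -v -r nvme-fabrics    # nvmfcleanup: unloads nvme_tcp, nvme_fabrics, nvme_keyring
  kill 1415650 && wait 1415650                                  # stop nvmf_tgt
  iptables-save | grep -v SPDK_NVMF | iptables-restore          # drop only the SPDK-tagged firewall rules
  ip netns delete cvl_0_0_ns_spdk                               # assumption: this is what _remove_spdk_ns amounts to
  ip -4 addr flush cvl_0_1
  rm -f /tmp/tmp.Qtb9B8GO03 /tmp/tmp.qJAxBsBD4K /tmp/tmp.wYcaPlBIdj   # temporary TLS PSK files from this test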
00:24:48.580 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:24:48.580 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:48.580 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lcov --version 00:24:48.580 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:48.580 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:48.580 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:48.580 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:48.580 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:48.580 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:48.580 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:48.580 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:48.580 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:48.580 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:24:48.580 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:24:48.580 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:24:48.580 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:48.580 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:48.580 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:24:48.580 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:48.580 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:48.580 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:48.580 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:48.580 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:48.580 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:48.580 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:48.580 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:24:48.580 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:24:48.580 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:48.580 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:24:48.580 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:24:48.580 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:48.580 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:48.580 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:24:48.580 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:48.580 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:24:48.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:48.581 --rc genhtml_branch_coverage=1 00:24:48.581 --rc genhtml_function_coverage=1 00:24:48.581 --rc genhtml_legend=1 00:24:48.581 --rc geninfo_all_blocks=1 00:24:48.581 --rc geninfo_unexecuted_blocks=1 00:24:48.581 00:24:48.581 ' 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:48.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:48.581 --rc genhtml_branch_coverage=1 00:24:48.581 --rc genhtml_function_coverage=1 00:24:48.581 --rc genhtml_legend=1 00:24:48.581 --rc geninfo_all_blocks=1 00:24:48.581 --rc geninfo_unexecuted_blocks=1 00:24:48.581 00:24:48.581 ' 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:24:48.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:48.581 --rc genhtml_branch_coverage=1 00:24:48.581 --rc genhtml_function_coverage=1 00:24:48.581 --rc genhtml_legend=1 00:24:48.581 --rc geninfo_all_blocks=1 00:24:48.581 --rc geninfo_unexecuted_blocks=1 00:24:48.581 00:24:48.581 ' 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:48.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:48.581 --rc genhtml_branch_coverage=1 00:24:48.581 --rc genhtml_function_coverage=1 00:24:48.581 --rc genhtml_legend=1 00:24:48.581 --rc geninfo_all_blocks=1 00:24:48.581 --rc geninfo_unexecuted_blocks=1 00:24:48.581 00:24:48.581 ' 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:48.581 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:24:48.581 14:40:40 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:24:48.581 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:24:48.582 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:48.582 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:24:48.582 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:24:48.582 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:48.582 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:24:48.582 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:24:48.582 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:24:48.582 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:24:48.582 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:24:48.582 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:24:48.582 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:24:48.582 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:24:48.582 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:24:48.582 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:24:48.582 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:24:48.582 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:24:48.582 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:24:48.582 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:24:48.582 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:24:48.582 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:24:48.582 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:24:48.582 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:24:48.582 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:24:48.582 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:24:48.582 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:24:48.582 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:24:48.582 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:24:48.582 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:24:48.582 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:24:48.582 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:48.582 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:24:48.582 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:48.582 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:24:48.582 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:48.582 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:24:48.582 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:24:48.582 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:24:48.582 Error setting digest 00:24:48.582 40220CA75B7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:24:48.582 40220CA75B7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:24:48.582 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:24:48.582 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:48.582 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:48.582 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:48.582 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:24:48.582 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:24:48.582 
14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:48.582 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@472 -- # prepare_net_devs 00:24:48.582 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@434 -- # local -g is_hw=no 00:24:48.582 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@436 -- # remove_spdk_ns 00:24:48.582 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:48.582 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:48.582 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:48.582 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:24:48.582 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:24:48.582 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:24:48.582 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:50.487 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:50.487 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:24:50.487 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:50.487 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:50.487 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:50.487 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:50.487 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:50.487 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:24:50.487 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:50.487 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:24:50.487 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:24:50.487 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:24:50.487 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:24:50.487 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:24:50.487 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:24:50.487 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:50.487 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:50.487 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:50.487 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:50.487 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:50.487 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:50.487 14:40:42 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:50.487 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:50.487 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:50.487 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:50.487 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:50.487 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:24:50.487 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:24:50.487 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:24:50.487 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:24:50.487 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:24:50.487 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:24:50.487 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:24:50.487 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:50.487 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:50.487 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:24:50.487 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:24:50.487 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:50.487 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:50.487 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:24:50.487 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:24:50.487 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:50.487 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:50.487 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:24:50.487 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:24:50.487 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:50.487 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:50.487 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:24:50.487 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:24:50.487 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:24:50.487 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:24:50.487 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:24:50.487 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:50.487 14:40:42 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:24:50.487 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:50.487 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ up == up ]] 00:24:50.487 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:24:50.487 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:50.488 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:50.488 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:50.488 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:24:50.488 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:24:50.488 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:50.488 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:24:50.488 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:50.488 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ up == up ]] 00:24:50.488 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:24:50.488 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:50.488 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:50.488 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:50.488 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:24:50.488 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:24:50.488 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # is_hw=yes 00:24:50.488 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:24:50.488 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:24:50.488 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:24:50.488 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:50.488 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:50.488 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:50.488 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:50.488 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:50.488 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:50.488 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:50.488 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:50.488 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:50.488 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:50.488 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:50.488 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:50.488 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:50.488 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:50.488 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:50.488 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:50.488 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:50.488 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:50.488 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:50.488 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:50.488 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:50.488 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:50.488 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:50.488 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:50.488 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.337 ms 00:24:50.488 00:24:50.488 --- 10.0.0.2 ping statistics --- 00:24:50.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:50.488 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:24:50.488 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:50.488 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:50.488 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:24:50.488 00:24:50.488 --- 10.0.0.1 ping statistics --- 00:24:50.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:50.488 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:24:50.488 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:50.488 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # return 0 00:24:50.488 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:24:50.488 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:50.488 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:24:50.488 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:24:50.488 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:50.488 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:24:50.488 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:24:50.488 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:24:50.488 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:50.488 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:50.488 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:50.488 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@505 -- # nvmfpid=1418165 00:24:50.488 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:50.488 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@506 -- # waitforlisten 1418165 00:24:50.488 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 1418165 ']' 00:24:50.488 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:50.488 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:50.488 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:50.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:50.488 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:50.488 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:50.748 [2024-11-02 14:40:42.546682] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
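Note on the network bring-up and target start above: nvmf_tcp_init puts the first e810 port (cvl_0_0) into a private namespace as the target side, leaves cvl_0_1 in the root namespace as the initiator, and nvmfappstart then launches nvmf_tgt inside that namespace. Condensed, with the interface, namespace, and path names exactly as they appear in this log:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator address (root namespace)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
  ping -c 1 10.0.0.2                                            # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target -> initiator
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &   # target reactor on core 1

The iptables comment is shortened here; the log tags the rule with the full 'SPDK_NVMF:...' text so that cleanup can later strip it with grep -v SPDK_NVMF.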
00:24:50.748 [2024-11-02 14:40:42.546779] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:50.748 [2024-11-02 14:40:42.616415] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:50.748 [2024-11-02 14:40:42.705688] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:50.748 [2024-11-02 14:40:42.705753] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:50.748 [2024-11-02 14:40:42.705769] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:50.748 [2024-11-02 14:40:42.705782] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:50.748 [2024-11-02 14:40:42.705794] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:50.748 [2024-11-02 14:40:42.705834] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:51.005 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:51.005 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:24:51.005 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:51.006 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:51.006 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:51.006 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:51.006 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:24:51.006 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:51.006 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:24:51.006 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.m9y 00:24:51.006 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:51.006 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.m9y 00:24:51.006 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.m9y 00:24:51.006 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.m9y 00:24:51.006 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:51.263 [2024-11-02 14:40:43.112399] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:51.263 [2024-11-02 14:40:43.128404] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:51.264 [2024-11-02 14:40:43.128642] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:51.264 malloc0 00:24:51.264 14:40:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:51.264 14:40:43 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=1418201 00:24:51.264 14:40:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:51.264 14:40:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 1418201 /var/tmp/bdevperf.sock 00:24:51.264 14:40:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 1418201 ']' 00:24:51.264 14:40:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:51.264 14:40:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:51.264 14:40:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:51.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:51.264 14:40:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:51.264 14:40:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:51.264 [2024-11-02 14:40:43.272557] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:24:51.264 [2024-11-02 14:40:43.272642] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1418201 ] 00:24:51.520 [2024-11-02 14:40:43.330424] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:51.520 [2024-11-02 14:40:43.421234] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:51.520 14:40:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:51.520 14:40:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:24:51.520 14:40:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.m9y 00:24:51.777 14:40:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:52.033 [2024-11-02 14:40:44.045376] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:52.292 TLSTESTn1 00:24:52.292 14:40:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:52.292 Running I/O for 10 seconds... 
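Note on the TLS setup driven above: the fips test generates a PSK, hands the same key file to the target configuration (setup_nvmf_tgt_conf) and to bdevperf, and only then starts I/O. The initiator-side sequence, with the key value, socket path, and NQNs copied from this log (the mktemp result in this run was /tmp/spdk-psk.m9y):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  KEY='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
  KEY_PATH=$(mktemp -t spdk-psk.XXX)
  echo -n "$KEY" > "$KEY_PATH"
  chmod 0600 "$KEY_PATH"                                        # PSK files must not be world-readable
  $RPC -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$KEY_PATH"
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 --psk key0                   # TLS attach; exposes bdev TLSTESTn1
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests                   # drives the 10-second verify run logged below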
00:24:54.602 3085.00 IOPS, 12.05 MiB/s [2024-11-02T13:40:47.296Z] 3230.00 IOPS, 12.62 MiB/s [2024-11-02T13:40:48.674Z] 3248.67 IOPS, 12.69 MiB/s [2024-11-02T13:40:49.613Z] 3249.50 IOPS, 12.69 MiB/s [2024-11-02T13:40:50.552Z] 3262.80 IOPS, 12.75 MiB/s [2024-11-02T13:40:51.486Z] 3272.17 IOPS, 12.78 MiB/s [2024-11-02T13:40:52.419Z] 3282.14 IOPS, 12.82 MiB/s [2024-11-02T13:40:53.354Z] 3287.88 IOPS, 12.84 MiB/s [2024-11-02T13:40:54.291Z] 3284.89 IOPS, 12.83 MiB/s [2024-11-02T13:40:54.551Z] 3279.40 IOPS, 12.81 MiB/s 00:25:02.496 Latency(us) 00:25:02.496 [2024-11-02T13:40:54.551Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:02.496 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:02.496 Verification LBA range: start 0x0 length 0x2000 00:25:02.496 TLSTESTn1 : 10.04 3278.64 12.81 0.00 0.00 38943.87 10048.85 55535.69 00:25:02.496 [2024-11-02T13:40:54.551Z] =================================================================================================================== 00:25:02.496 [2024-11-02T13:40:54.551Z] Total : 3278.64 12.81 0.00 0.00 38943.87 10048.85 55535.69 00:25:02.496 { 00:25:02.496 "results": [ 00:25:02.496 { 00:25:02.496 "job": "TLSTESTn1", 00:25:02.496 "core_mask": "0x4", 00:25:02.496 "workload": "verify", 00:25:02.496 "status": "finished", 00:25:02.496 "verify_range": { 00:25:02.496 "start": 0, 00:25:02.496 "length": 8192 00:25:02.496 }, 00:25:02.496 "queue_depth": 128, 00:25:02.496 "io_size": 4096, 00:25:02.496 "runtime": 10.040744, 00:25:02.496 "iops": 3278.641503059933, 00:25:02.496 "mibps": 12.807193371327863, 00:25:02.496 "io_failed": 0, 00:25:02.496 "io_timeout": 0, 00:25:02.496 "avg_latency_us": 38943.87289284911, 00:25:02.496 "min_latency_us": 10048.853333333333, 00:25:02.496 "max_latency_us": 55535.69185185185 00:25:02.496 } 00:25:02.496 ], 00:25:02.496 "core_count": 1 00:25:02.496 } 00:25:02.496 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:25:02.496 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:25:02.496 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:25:02.496 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:25:02.496 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:25:02.496 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:02.496 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:25:02.496 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:25:02.496 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:25:02.496 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:02.496 nvmf_trace.0 00:25:02.496 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:25:02.496 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1418201 00:25:02.496 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 1418201 ']' 00:25:02.496 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@954 -- # kill -0 1418201 00:25:02.496 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:25:02.496 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:02.496 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1418201 00:25:02.496 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:25:02.496 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:25:02.496 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1418201' 00:25:02.496 killing process with pid 1418201 00:25:02.496 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 1418201 00:25:02.496 Received shutdown signal, test time was about 10.000000 seconds 00:25:02.496 00:25:02.496 Latency(us) 00:25:02.496 [2024-11-02T13:40:54.551Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:02.496 [2024-11-02T13:40:54.551Z] =================================================================================================================== 00:25:02.496 [2024-11-02T13:40:54.551Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:02.496 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 1418201 00:25:02.756 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:25:02.756 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # nvmfcleanup 00:25:02.756 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:25:02.756 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:02.756 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:25:02.756 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:02.756 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:02.756 rmmod nvme_tcp 00:25:02.756 rmmod nvme_fabrics 00:25:02.756 rmmod nvme_keyring 00:25:02.756 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:02.756 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:25:02.756 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:25:02.756 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@513 -- # '[' -n 1418165 ']' 00:25:02.756 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@514 -- # killprocess 1418165 00:25:02.756 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 1418165 ']' 00:25:02.756 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 1418165 00:25:02.756 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:25:02.756 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:02.756 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1418165 00:25:02.756 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:02.757 14:40:54 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:02.757 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1418165' 00:25:02.757 killing process with pid 1418165 00:25:02.757 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 1418165 00:25:02.757 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 1418165 00:25:03.015 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:25:03.015 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:25:03.015 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:25:03.015 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:25:03.015 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # iptables-save 00:25:03.015 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:25:03.015 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # iptables-restore 00:25:03.015 14:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:03.016 14:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:03.016 14:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:03.016 14:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:03.016 14:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:05.549 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:05.549 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.m9y 00:25:05.549 00:25:05.549 real 0m16.989s 00:25:05.549 user 0m21.560s 00:25:05.549 sys 0m6.209s 00:25:05.549 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:05.549 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:05.549 ************************************ 00:25:05.549 END TEST nvmf_fips 00:25:05.549 ************************************ 00:25:05.549 14:40:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:25:05.549 14:40:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:05.549 14:40:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:05.549 14:40:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:05.549 ************************************ 00:25:05.549 START TEST nvmf_control_msg_list 00:25:05.549 ************************************ 00:25:05.549 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:25:05.549 * Looking for test storage... 
00:25:05.549 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:05.549 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:05.549 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lcov --version 00:25:05.549 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:05.549 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:05.549 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:05.549 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:05.549 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:05.549 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:25:05.549 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:25:05.549 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:25:05.549 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:25:05.550 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:25:05.550 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:25:05.550 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:25:05.550 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:05.550 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:25:05.550 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:25:05.550 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:05.550 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:05.550 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:25:05.550 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:25:05.550 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:05.550 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:25:05.550 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:25:05.550 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:25:05.550 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:25:05.550 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:05.550 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:25:05.550 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:25:05.550 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:05.550 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:05.550 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:25:05.550 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:05.550 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:05.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:05.550 --rc genhtml_branch_coverage=1 00:25:05.550 --rc genhtml_function_coverage=1 00:25:05.550 --rc genhtml_legend=1 00:25:05.550 --rc geninfo_all_blocks=1 00:25:05.550 --rc geninfo_unexecuted_blocks=1 00:25:05.550 00:25:05.550 ' 00:25:05.550 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:05.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:05.550 --rc genhtml_branch_coverage=1 00:25:05.550 --rc genhtml_function_coverage=1 00:25:05.550 --rc genhtml_legend=1 00:25:05.550 --rc geninfo_all_blocks=1 00:25:05.550 --rc geninfo_unexecuted_blocks=1 00:25:05.550 00:25:05.550 ' 00:25:05.550 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:05.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:05.550 --rc genhtml_branch_coverage=1 00:25:05.550 --rc genhtml_function_coverage=1 00:25:05.550 --rc genhtml_legend=1 00:25:05.550 --rc geninfo_all_blocks=1 00:25:05.550 --rc geninfo_unexecuted_blocks=1 00:25:05.550 00:25:05.550 ' 00:25:05.550 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:05.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:05.550 --rc genhtml_branch_coverage=1 00:25:05.550 --rc genhtml_function_coverage=1 00:25:05.550 --rc genhtml_legend=1 00:25:05.550 --rc geninfo_all_blocks=1 00:25:05.550 --rc geninfo_unexecuted_blocks=1 00:25:05.550 00:25:05.550 ' 00:25:05.550 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:05.550 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:25:05.550 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:05.550 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:05.550 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:05.550 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:05.550 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:05.550 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:05.550 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:05.550 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:05.550 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:05.550 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:05.550 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:05.550 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:05.550 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:05.550 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:05.550 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:05.550 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:05.550 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:05.550 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:25:05.550 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:05.550 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:05.550 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:05.550 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.550 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.550 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.550 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:25:05.550 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.550 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:25:05.550 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:05.550 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:05.550 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:05.550 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:05.550 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:05.550 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:05.550 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:05.550 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:05.550 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:05.550 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:05.550 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:25:05.550 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:25:05.550 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:05.550 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@472 -- # prepare_net_devs 00:25:05.550 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@434 -- # local -g is_hw=no 00:25:05.550 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@436 -- # remove_spdk_ns 00:25:05.550 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:05.551 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:05.551 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:05.551 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:25:05.551 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:25:05.551 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:25:05.551 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:07.453 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:07.453 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:25:07.453 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:07.453 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:07.453 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:07.453 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:07.453 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:07.453 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:25:07.453 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:07.453 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:25:07.453 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:25:07.453 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:25:07.453 14:40:59 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:25:07.453 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:25:07.453 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:07.454 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:25:07.454 14:40:59 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:07.454 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ up == up ]] 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:07.454 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ up == up ]] 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:07.454 Found net devices under 
0000:0a:00.1: cvl_0_1 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # is_hw=yes 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:07.454 14:40:59 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:07.454 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:07.454 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:25:07.454 00:25:07.454 --- 10.0.0.2 ping statistics --- 00:25:07.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:07.454 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:07.454 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:07.454 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:25:07.454 00:25:07.454 --- 10.0.0.1 ping statistics --- 00:25:07.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:07.454 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # return 0 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:25:07.454 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:07.455 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:07.455 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@505 -- # nvmfpid=1421465 00:25:07.455 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@506 -- # waitforlisten 1421465 00:25:07.455 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@831 -- # '[' -z 1421465 ']' 00:25:07.455 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:07.455 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:07.455 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:07.455 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:07.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:07.455 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:07.455 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:07.455 [2024-11-02 14:40:59.458139] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:25:07.455 [2024-11-02 14:40:59.458219] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:07.770 [2024-11-02 14:40:59.531163] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:07.770 [2024-11-02 14:40:59.621065] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:07.770 [2024-11-02 14:40:59.621132] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:07.770 [2024-11-02 14:40:59.621148] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:07.770 [2024-11-02 14:40:59.621162] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:07.770 [2024-11-02 14:40:59.621174] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
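For reference, the target/initiator plumbing traced above reduces to roughly the following sequence (a sketch, not the harness itself; the cvl_0_0/cvl_0_1 interface names and the 10.0.0.x addresses are whatever this particular host detected, and the SPDK_NVMF comment tag on the iptables rule is what the later iptables-save | grep -v SPDK_NVMF cleanup keys on):

    # move the target-side port into its own namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # initiator keeps cvl_0_1 in the default namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # allow NVMe/TCP traffic to the default port 4420
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    # sanity-check reachability in both directions
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # then launch the target inside the namespace
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF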
00:25:07.770 [2024-11-02 14:40:59.621205] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:07.770 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:07.770 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # return 0 00:25:07.770 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:25:07.770 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:07.770 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:07.770 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:07.770 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:25:07.770 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:07.770 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:25:07.770 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.770 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:07.770 [2024-11-02 14:40:59.769703] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:07.770 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.770 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:25:07.770 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.770 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:07.770 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.770 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:25:07.770 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.770 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:07.770 Malloc0 00:25:07.770 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.770 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:25:07.770 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.770 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:07.770 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.770 14:40:59 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:07.770 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.770 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:07.770 [2024-11-02 14:40:59.817840] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:07.770 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.770 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=1421599 00:25:07.770 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:07.770 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=1421600 00:25:07.770 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:07.770 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=1421601 00:25:07.770 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 1421599 00:25:07.770 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:08.028 [2024-11-02 14:40:59.886896] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:08.028 [2024-11-02 14:40:59.887300] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:08.028 [2024-11-02 14:40:59.887625] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:08.964 Initializing NVMe Controllers 00:25:08.964 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:08.964 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:25:08.964 Initialization complete. Launching workers. 
00:25:08.964 ======================================================== 00:25:08.964 Latency(us) 00:25:08.964 Device Information : IOPS MiB/s Average min max 00:25:08.964 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40902.84 40752.32 41023.15 00:25:08.964 ======================================================== 00:25:08.964 Total : 25.00 0.10 40902.84 40752.32 41023.15 00:25:08.964 00:25:09.224 Initializing NVMe Controllers 00:25:09.224 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:09.224 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:25:09.224 Initialization complete. Launching workers. 00:25:09.224 ======================================================== 00:25:09.224 Latency(us) 00:25:09.224 Device Information : IOPS MiB/s Average min max 00:25:09.224 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3324.00 12.98 300.33 277.02 492.18 00:25:09.224 ======================================================== 00:25:09.224 Total : 3324.00 12.98 300.33 277.02 492.18 00:25:09.224 00:25:09.224 Initializing NVMe Controllers 00:25:09.224 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:09.224 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:25:09.224 Initialization complete. Launching workers. 00:25:09.224 ======================================================== 00:25:09.224 Latency(us) 00:25:09.224 Device Information : IOPS MiB/s Average min max 00:25:09.224 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3324.00 12.98 300.51 275.42 581.29 00:25:09.224 ======================================================== 00:25:09.224 Total : 3324.00 12.98 300.51 275.42 581.29 00:25:09.224 00:25:09.224 14:41:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 1421600 00:25:09.224 14:41:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 1421601 00:25:09.224 14:41:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:09.224 14:41:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:25:09.224 14:41:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # nvmfcleanup 00:25:09.224 14:41:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:25:09.224 14:41:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:09.224 14:41:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:25:09.224 14:41:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:09.224 14:41:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:09.224 rmmod nvme_tcp 00:25:09.224 rmmod nvme_fabrics 00:25:09.224 rmmod nvme_keyring 00:25:09.224 14:41:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:09.224 14:41:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:25:09.224 14:41:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:25:09.224 14:41:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@513 -- # 
'[' -n 1421465 ']' 00:25:09.224 14:41:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@514 -- # killprocess 1421465 00:25:09.224 14:41:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@950 -- # '[' -z 1421465 ']' 00:25:09.224 14:41:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # kill -0 1421465 00:25:09.224 14:41:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # uname 00:25:09.224 14:41:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:09.224 14:41:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1421465 00:25:09.224 14:41:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:09.224 14:41:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:09.224 14:41:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1421465' 00:25:09.224 killing process with pid 1421465 00:25:09.224 14:41:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@969 -- # kill 1421465 00:25:09.224 14:41:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@974 -- # wait 1421465 00:25:09.483 14:41:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:25:09.483 14:41:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:25:09.483 14:41:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:25:09.483 14:41:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:25:09.483 14:41:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # iptables-save 00:25:09.483 14:41:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:25:09.483 14:41:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # iptables-restore 00:25:09.483 14:41:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:09.483 14:41:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:09.483 14:41:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:09.483 14:41:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:09.483 14:41:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:12.016 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:12.016 00:25:12.016 real 0m6.414s 00:25:12.016 user 0m5.838s 00:25:12.016 sys 0m2.656s 00:25:12.016 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:12.016 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:12.016 ************************************ 00:25:12.016 END TEST nvmf_control_msg_list 00:25:12.016 ************************************ 
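The test that just finished configures the tcp transport with --control-msg-num 1 and then points three single-queue-depth perf initiators at one subsystem on separate cores, as traced above. Stripped of the harness wrappers, the sequence is roughly the following sketch (rpc_cmd is the test helper that forwards JSON-RPC calls to the running target; the perf binary path, core masks and addresses are the ones from this run):

    # transport with a deliberately small control-message pool
    rpc_cmd nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
    # one subsystem backed by a 32 MiB malloc namespace, listening on the namespaced address
    rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
    rpc_cmd bdev_malloc_create -b Malloc0 32 512
    rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # three concurrent initiators contending for the control messages
    for mask in 0x2 0x4 0x8; do
        ./build/bin/spdk_nvme_perf -c "$mask" -q 1 -o 4096 -w randread -t 1 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    done
    wait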
00:25:12.016 14:41:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:25:12.016 14:41:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:12.016 14:41:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:12.016 14:41:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:12.016 ************************************ 00:25:12.016 START TEST nvmf_wait_for_buf 00:25:12.016 ************************************ 00:25:12.016 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:25:12.016 * Looking for test storage... 00:25:12.016 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:12.017 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:12.017 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lcov --version 00:25:12.017 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:12.017 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:12.017 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:12.017 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:12.017 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:12.017 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:25:12.017 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:25:12.017 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:25:12.017 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:25:12.017 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:25:12.017 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:25:12.017 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:25:12.017 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:12.017 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:25:12.017 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:25:12.017 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:12.017 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:12.017 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:25:12.017 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:25:12.017 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:12.017 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:25:12.017 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:25:12.017 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:25:12.017 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:25:12.017 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:12.017 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:25:12.017 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:25:12.017 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:12.017 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:12.017 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:25:12.017 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:12.017 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:12.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:12.017 --rc genhtml_branch_coverage=1 00:25:12.017 --rc genhtml_function_coverage=1 00:25:12.017 --rc genhtml_legend=1 00:25:12.017 --rc geninfo_all_blocks=1 00:25:12.017 --rc geninfo_unexecuted_blocks=1 00:25:12.017 00:25:12.017 ' 00:25:12.017 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:12.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:12.017 --rc genhtml_branch_coverage=1 00:25:12.017 --rc genhtml_function_coverage=1 00:25:12.017 --rc genhtml_legend=1 00:25:12.017 --rc geninfo_all_blocks=1 00:25:12.017 --rc geninfo_unexecuted_blocks=1 00:25:12.017 00:25:12.017 ' 00:25:12.017 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:12.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:12.017 --rc genhtml_branch_coverage=1 00:25:12.017 --rc genhtml_function_coverage=1 00:25:12.017 --rc genhtml_legend=1 00:25:12.017 --rc geninfo_all_blocks=1 00:25:12.017 --rc geninfo_unexecuted_blocks=1 00:25:12.017 00:25:12.017 ' 00:25:12.017 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:12.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:12.017 --rc genhtml_branch_coverage=1 00:25:12.017 --rc genhtml_function_coverage=1 00:25:12.017 --rc genhtml_legend=1 00:25:12.017 --rc geninfo_all_blocks=1 00:25:12.017 --rc geninfo_unexecuted_blocks=1 00:25:12.017 00:25:12.017 ' 00:25:12.017 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:12.017 14:41:03 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:25:12.017 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:12.017 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:12.017 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:12.017 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:12.017 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:12.017 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:12.017 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:12.017 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:12.017 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:12.017 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:12.017 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:12.017 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:12.017 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:12.017 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:12.017 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:12.017 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:12.017 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:12.017 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:25:12.017 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:12.017 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:12.017 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:12.017 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.017 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.017 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.017 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:25:12.017 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.017 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:25:12.017 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:12.017 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:12.017 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:12.017 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:12.017 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:12.017 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:12.017 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:12.017 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:12.017 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:12.018 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:12.018 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:25:12.018 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@465 -- # 
'[' -z tcp ']' 00:25:12.018 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:12.018 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@472 -- # prepare_net_devs 00:25:12.018 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:25:12.018 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:25:12.018 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:12.018 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:12.018 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:12.018 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:25:12.018 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:25:12.018 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:25:12.018 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:13.924 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:13.924 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:25:13.924 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:13.924 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:13.924 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:13.924 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:13.924 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:13.924 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:25:13.924 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:13.924 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:25:13.924 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:25:13.924 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:25:13.924 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:25:13.924 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:25:13.924 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:25:13.924 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:13.924 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:13.924 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:13.924 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:13.924 
14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:13.924 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:13.924 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:13.924 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:13.924 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:13.924 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:13.924 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:13.924 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:25:13.924 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:25:13.924 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:25:13.924 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:25:13.924 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:25:13.924 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:25:13.924 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:13.924 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:13.924 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:13.924 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:25:13.924 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:25:13.924 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:13.924 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:13.924 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:25:13.924 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:13.924 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:13.924 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:13.924 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:25:13.924 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:25:13.924 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:13.925 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:13.925 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:25:13.925 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:25:13.925 
14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:25:13.925 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:25:13.925 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:13.925 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:13.925 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:25:13.925 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:13.925 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:25:13.925 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:13.925 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:13.925 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:13.925 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:13.925 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:13.925 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:13.925 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:13.925 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:25:13.925 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:13.925 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:25:13.925 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:13.925 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:13.925 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:13.925 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:13.925 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:13.925 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:25:13.925 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # is_hw=yes 00:25:13.925 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:25:13.925 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:25:13.925 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:25:13.925 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:13.925 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:13.925 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:13.925 14:41:05 
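The walk above resolves each supported PCI function to the network interface bound under it (cvl_0_0 and cvl_0_1) before the TCP addressing is set up. At bottom that is a sysfs lookup; a rough equivalent using the two PCI addresses reported in the log (the loop and the operstate check are an illustrative sketch, not the gather_supported_nvmf_pci_devs code itself):

# Sketch only -- not part of the captured log.
for pci in 0000:0a:00.0 0000:0a:00.1; do
    for net in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$net" ] || continue                      # PCI function with no netdev bound: skip it
        state=$(cat "$net/operstate" 2>/dev/null)      # the test only keeps interfaces that are up
        echo "Found net device under $pci: ${net##*/} (operstate: ${state:-unknown})"
    done
done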
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:13.925 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:13.925 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:13.925 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:13.925 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:13.925 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:13.925 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:13.925 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:13.925 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:13.925 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:13.925 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:13.925 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:13.925 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:13.925 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:13.925 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:13.925 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:13.925 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:13.925 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:13.925 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:13.925 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:13.925 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:13.925 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.357 ms 00:25:13.925 00:25:13.925 --- 10.0.0.2 ping statistics --- 00:25:13.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:13.925 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:25:13.925 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:13.925 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:13.925 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:25:13.925 00:25:13.925 --- 10.0.0.1 ping statistics --- 00:25:13.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:13.925 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:25:13.925 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:13.925 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # return 0 00:25:13.925 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:25:13.925 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:13.925 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:25:13.925 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:25:13.925 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:13.925 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:25:13.925 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:25:13.925 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:25:13.925 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:25:13.925 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:13.925 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:13.925 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@505 -- # nvmfpid=1423680 00:25:13.925 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:13.925 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@506 -- # waitforlisten 1423680 00:25:13.925 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@831 -- # '[' -z 1423680 ']' 00:25:13.925 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:13.925 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:13.925 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:13.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:13.925 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:13.925 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:13.925 [2024-11-02 14:41:05.956318] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
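With both pings succeeding, the bench at this point is a target namespace and a root-namespace initiator on the same /24. A condensed recap of the setup, with every command lifted from the trace above (only the grouping and comments are illustrative; the log's iptables rule additionally tags itself with an SPDK_NVMF comment so it can be stripped again at cleanup):

# Recap of the nvmf_tcp_init steps traced above.
ip netns add cvl_0_0_ns_spdk                                    # target side lives in its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # open the NVMe/TCP listener port
ping -c 1 10.0.0.2                                              # root namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                # target namespace -> root namespace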
00:25:13.925 [2024-11-02 14:41:05.956414] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:14.183 [2024-11-02 14:41:06.021078] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:14.183 [2024-11-02 14:41:06.104920] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:14.183 [2024-11-02 14:41:06.104975] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:14.183 [2024-11-02 14:41:06.105004] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:14.183 [2024-11-02 14:41:06.105014] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:14.183 [2024-11-02 14:41:06.105024] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:14.183 [2024-11-02 14:41:06.105058] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:14.183 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:14.183 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # return 0 00:25:14.183 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:25:14.183 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:14.184 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:14.184 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:14.184 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:25:14.184 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:14.184 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:25:14.184 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.184 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:14.184 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.184 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:25:14.184 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.184 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:14.184 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.184 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:25:14.184 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.184 14:41:06 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:14.444 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.444 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:25:14.444 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.444 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:14.444 Malloc0 00:25:14.444 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.444 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:25:14.444 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.444 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:14.444 [2024-11-02 14:41:06.315111] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:14.444 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.444 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:25:14.444 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.444 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:14.444 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.444 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:25:14.444 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.444 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:14.444 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.444 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:14.444 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.444 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:14.444 [2024-11-02 14:41:06.339332] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:14.444 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.444 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:14.444 [2024-11-02 14:41:06.410408] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:16.350 Initializing NVMe Controllers 00:25:16.350 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:16.350 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:25:16.350 Initialization complete. Launching workers. 00:25:16.350 ======================================================== 00:25:16.350 Latency(us) 00:25:16.350 Device Information : IOPS MiB/s Average min max 00:25:16.350 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32260.95 8005.22 63847.79 00:25:16.350 ======================================================== 00:25:16.350 Total : 129.00 16.12 32260.95 8005.22 63847.79 00:25:16.350 00:25:16.350 14:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:25:16.350 14:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:25:16.350 14:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.350 14:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:16.350 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.350 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:25:16.350 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:25:16.350 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:16.351 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:25:16.351 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # nvmfcleanup 00:25:16.351 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:25:16.351 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:16.351 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:25:16.351 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:16.351 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:16.351 rmmod nvme_tcp 00:25:16.351 rmmod nvme_fabrics 00:25:16.351 rmmod nvme_keyring 00:25:16.351 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:16.351 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:25:16.351 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:25:16.351 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@513 -- # '[' -n 1423680 ']' 00:25:16.351 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@514 -- # killprocess 1423680 00:25:16.351 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@950 -- # '[' -z 1423680 ']' 00:25:16.351 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # kill -0 1423680 00:25:16.351 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@955 -- # uname 00:25:16.351 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:16.351 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1423680 00:25:16.351 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:16.351 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:16.351 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1423680' 00:25:16.351 killing process with pid 1423680 00:25:16.351 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@969 -- # kill 1423680 00:25:16.351 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@974 -- # wait 1423680 00:25:16.351 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:25:16.351 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:25:16.351 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:25:16.351 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:25:16.351 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # iptables-save 00:25:16.351 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:25:16.351 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # iptables-restore 00:25:16.351 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:16.351 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:16.351 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:16.351 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:16.351 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:18.885 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:18.885 00:25:18.885 real 0m6.841s 00:25:18.885 user 0m3.235s 00:25:18.885 sys 0m2.038s 00:25:18.885 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:18.885 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:18.885 ************************************ 00:25:18.885 END TEST nvmf_wait_for_buf 00:25:18.885 ************************************ 00:25:18.885 14:41:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:25:18.885 14:41:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:25:18.885 14:41:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:18.885 14:41:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:18.885 14:41:10 
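That closes the wait_for_buf test. Its pass/fail logic, condensed from the rpc_cmd calls and the perf run traced above; rpc_cmd is the test suite's wrapper, so this recap assumes a plain scripts/rpc.py against the default /var/tmp/spdk.sock and relative build paths instead of the workspace paths in the log:

# Condensed recap of the test traced above -- assumes rpc.py and the default RPC socket.
rpc="./scripts/rpc.py"
$rpc accel_set_options --small-cache-size 0 --large-cache-size 0
$rpc iobuf_set_options --small-pool-count 154 --small_bufsize=8192   # deliberately undersized buffer pool
$rpc framework_start_init                                            # target was launched with --wait-for-rpc
$rpc bdev_malloc_create -b Malloc0 32 512
$rpc nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24             # only 24 shared buffers
$rpc nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
./build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
# The test passes only if the starved pool forced buffer-wait retries (2038 in this run).
retries=$($rpc iobuf_get_stats | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
[ "$retries" -ne 0 ] && echo "PASS: $retries small-buffer retries observed"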
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:18.885 ************************************ 00:25:18.885 START TEST nvmf_fuzz 00:25:18.886 ************************************ 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:25:18.886 * Looking for test storage... 00:25:18.886 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:18.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:18.886 --rc genhtml_branch_coverage=1 00:25:18.886 --rc genhtml_function_coverage=1 00:25:18.886 --rc genhtml_legend=1 00:25:18.886 --rc geninfo_all_blocks=1 00:25:18.886 --rc geninfo_unexecuted_blocks=1 00:25:18.886 00:25:18.886 ' 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:18.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:18.886 --rc genhtml_branch_coverage=1 00:25:18.886 --rc genhtml_function_coverage=1 00:25:18.886 --rc genhtml_legend=1 00:25:18.886 --rc geninfo_all_blocks=1 00:25:18.886 --rc geninfo_unexecuted_blocks=1 00:25:18.886 00:25:18.886 ' 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:18.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:18.886 --rc genhtml_branch_coverage=1 00:25:18.886 --rc genhtml_function_coverage=1 00:25:18.886 --rc genhtml_legend=1 00:25:18.886 --rc geninfo_all_blocks=1 00:25:18.886 --rc geninfo_unexecuted_blocks=1 00:25:18.886 00:25:18.886 ' 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:18.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:18.886 --rc genhtml_branch_coverage=1 00:25:18.886 --rc genhtml_function_coverage=1 00:25:18.886 --rc genhtml_legend=1 00:25:18.886 --rc geninfo_all_blocks=1 00:25:18.886 --rc geninfo_unexecuted_blocks=1 00:25:18.886 00:25:18.886 ' 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:18.886 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:25:18.886 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:25:18.887 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@470 -- # trap nvmftestfini 
SIGINT SIGTERM EXIT 00:25:18.887 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@472 -- # prepare_net_devs 00:25:18.887 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@434 -- # local -g is_hw=no 00:25:18.887 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@436 -- # remove_spdk_ns 00:25:18.887 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:18.887 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:18.887 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:18.887 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:25:18.887 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:25:18.887 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:25:18.887 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:20.790 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:20.790 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:25:20.790 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:20.790 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:20.790 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:20.790 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:20.790 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:20.790 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:25:20.790 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:20.790 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:25:20.790 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:25:20.790 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:25:20.790 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:25:20.790 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:25:20.790 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:25:20.790 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:20.790 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:20.790 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:20.790 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:20.790 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:20.790 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:20.791 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:20.791 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:25:20.791 
14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ up == up ]] 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:20.791 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ up == up ]] 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:20.791 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # is_hw=yes 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:20.791 14:41:12 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:20.791 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:20.791 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:25:20.791 00:25:20.791 --- 10.0.0.2 ping statistics --- 00:25:20.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:20.791 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:20.791 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:20.791 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:25:20.791 00:25:20.791 --- 10.0.0.1 ping statistics --- 00:25:20.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:20.791 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # return 0 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=1425898 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 1425898 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@831 -- # '[' -z 1425898 ']' 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:20.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
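For reference, the nvmf_tcp_init sequence traced above reduces to roughly the following shell commands, using the interface names (cvl_0_0, cvl_0_1), the cvl_0_0_ns_spdk namespace and the 10.0.0.0/24 addresses that appear in this run; this is a condensed sketch of what the trace shows, not a verbatim excerpt of nvmf/common.sh:

  # move the target-side port into its own network namespace and address both ends
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # allow NVMe/TCP traffic to the default port 4420 and verify reachability both ways
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1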
00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:20.791 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:21.051 14:41:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:21.051 14:41:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # return 0 00:25:21.051 14:41:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:21.051 14:41:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.051 14:41:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:21.051 14:41:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.051 14:41:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:25:21.051 14:41:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.051 14:41:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:21.051 Malloc0 00:25:21.051 14:41:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.051 14:41:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:21.051 14:41:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.051 14:41:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:21.051 14:41:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.051 14:41:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:21.051 14:41:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.051 14:41:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:21.311 14:41:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.311 14:41:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:21.311 14:41:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.311 14:41:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:21.311 14:41:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.311 14:41:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:25:21.311 14:41:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:25:53.399 Fuzzing completed. 
Shutting down the fuzz application 00:25:53.399 00:25:53.399 Dumping successful admin opcodes: 00:25:53.399 8, 9, 10, 24, 00:25:53.399 Dumping successful io opcodes: 00:25:53.399 0, 9, 00:25:53.399 NS: 0x200003aeff00 I/O qp, Total commands completed: 456930, total successful commands: 2651, random_seed: 1273898688 00:25:53.399 NS: 0x200003aeff00 admin qp, Total commands completed: 55696, total successful commands: 444, random_seed: 2269423104 00:25:53.399 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:25:53.399 Fuzzing completed. Shutting down the fuzz application 00:25:53.399 00:25:53.399 Dumping successful admin opcodes: 00:25:53.399 24, 00:25:53.399 Dumping successful io opcodes: 00:25:53.399 00:25:53.399 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 109057375 00:25:53.399 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 109172341 00:25:53.399 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:53.399 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.399 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:53.399 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.399 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:25:53.399 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:25:53.399 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@512 -- # nvmfcleanup 00:25:53.399 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:25:53.399 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:53.399 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:25:53.399 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:53.399 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:53.399 rmmod nvme_tcp 00:25:53.399 rmmod nvme_fabrics 00:25:53.399 rmmod nvme_keyring 00:25:53.399 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:53.399 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:25:53.399 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:25:53.399 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@513 -- # '[' -n 1425898 ']' 00:25:53.399 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@514 -- # killprocess 1425898 00:25:53.399 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@950 -- # '[' -z 1425898 ']' 00:25:53.399 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # kill -0 1425898 00:25:53.399 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # uname 00:25:53.399 14:41:45 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:53.399 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1425898 00:25:53.399 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:53.399 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:53.399 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1425898' 00:25:53.399 killing process with pid 1425898 00:25:53.399 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@969 -- # kill 1425898 00:25:53.399 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@974 -- # wait 1425898 00:25:53.661 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:25:53.661 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:25:53.661 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:25:53.661 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:25:53.661 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@787 -- # iptables-save 00:25:53.661 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:25:53.661 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@787 -- # iptables-restore 00:25:53.661 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:53.661 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:53.661 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:53.661 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:53.661 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:55.563 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:55.563 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:25:55.563 00:25:55.563 real 0m37.115s 00:25:55.563 user 0m51.578s 00:25:55.563 sys 0m14.538s 00:25:55.563 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:55.563 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:55.563 ************************************ 00:25:55.563 END TEST nvmf_fuzz 00:25:55.563 ************************************ 00:25:55.563 14:41:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:55.563 14:41:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:55.563 14:41:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:55.563 14:41:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:55.823 
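Condensed, the fabrics fuzz pass above creates a single malloc-backed subsystem over TCP and then drives it with nvme_fuzz twice: a 30-second randomized run with a fixed seed, followed by a deterministic replay of the requests described in example.json. The sketch below only collects commands already traced in the log; $SPDK is shorthand introduced here for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk, and the explicit backgrounding of nvmf_tgt is implied (the script captures its pid, 1425898 in this run, and waits for the RPC socket):

  # start the target inside the namespace and wait for /var/tmp/spdk.sock
  ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  # configure one TCP subsystem backed by a 64 MiB malloc bdev
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd bdev_malloc_create -b Malloc0 64 512
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
  # 30 s of randomized fuzzing with seed 123456, then a replay from example.json
  $SPDK/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F "$trid" -N -a
  $SPDK/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F "$trid" -j $SPDK/test/app/fuzz/nvme_fuzz/example.json -a
  # teardown before nvmftestfini cleans up the namespace and iptables rule
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1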
************************************ 00:25:55.823 START TEST nvmf_multiconnection 00:25:55.823 ************************************ 00:25:55.823 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:55.823 * Looking for test storage... 00:25:55.823 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:55.823 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:55.823 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # lcov --version 00:25:55.823 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:55.823 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:55.823 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:55.823 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:55.823 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:55.823 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:25:55.823 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:25:55.823 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:25:55.823 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:25:55.823 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:25:55.823 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:25:55.823 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:25:55.823 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:55.823 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:25:55.823 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:25:55.823 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:55.823 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:55.823 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:25:55.823 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:25:55.823 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:55.823 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:25:55.823 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:25:55.823 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:25:55.823 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:25:55.823 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:55.823 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:25:55.823 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:25:55.823 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:55.823 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:55.823 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:25:55.823 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:55.823 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:55.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:55.823 --rc genhtml_branch_coverage=1 00:25:55.823 --rc genhtml_function_coverage=1 00:25:55.823 --rc genhtml_legend=1 00:25:55.823 --rc geninfo_all_blocks=1 00:25:55.823 --rc geninfo_unexecuted_blocks=1 00:25:55.823 00:25:55.823 ' 00:25:55.823 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:55.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:55.823 --rc genhtml_branch_coverage=1 00:25:55.823 --rc genhtml_function_coverage=1 00:25:55.823 --rc genhtml_legend=1 00:25:55.823 --rc geninfo_all_blocks=1 00:25:55.823 --rc geninfo_unexecuted_blocks=1 00:25:55.823 00:25:55.823 ' 00:25:55.823 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:55.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:55.823 --rc genhtml_branch_coverage=1 00:25:55.823 --rc genhtml_function_coverage=1 00:25:55.823 --rc genhtml_legend=1 00:25:55.823 --rc geninfo_all_blocks=1 00:25:55.823 --rc geninfo_unexecuted_blocks=1 00:25:55.823 00:25:55.823 ' 00:25:55.823 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:55.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:55.823 --rc genhtml_branch_coverage=1 00:25:55.823 --rc genhtml_function_coverage=1 00:25:55.823 --rc genhtml_legend=1 00:25:55.823 --rc geninfo_all_blocks=1 00:25:55.823 --rc geninfo_unexecuted_blocks=1 00:25:55.823 00:25:55.823 ' 00:25:55.823 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:55.823 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:25:55.823 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:55.823 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:55.823 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:55.823 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:55.823 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:55.823 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:55.823 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:55.823 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:55.823 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:55.823 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:55.823 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:55.823 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:55.823 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:55.823 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:55.823 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:55.824 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:55.824 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:55.824 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:25:55.824 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:55.824 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:55.824 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:55.824 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:55.824 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:55.824 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:55.824 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:25:55.824 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:55.824 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:25:55.824 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:55.824 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:55.824 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:55.824 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:55.824 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:55.824 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:55.824 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:55.824 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:55.824 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:55.824 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:55.824 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:55.824 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:55.824 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:25:55.824 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:25:55.824 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:25:55.824 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:55.824 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@472 -- # prepare_net_devs 00:25:55.824 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@434 -- # local -g is_hw=no 00:25:55.824 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@436 -- # remove_spdk_ns 00:25:55.824 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:55.824 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:55.824 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:55.824 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:25:55.824 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:25:55.824 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:25:55.824 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:57.749 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:57.749 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:25:57.749 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:57.749 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:57.749 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:57.749 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:57.749 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:57.749 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:25:57.749 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:57.749 14:41:49 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:25:57.749 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:25:57.749 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:25:57.749 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:25:57.749 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:25:57.749 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:25:57.749 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:57.749 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:57.749 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:57.749 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:57.749 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:57.749 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:57.749 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:57.749 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:57.749 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:57.749 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:57.749 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:57.749 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:25:57.749 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:25:57.749 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:25:57.749 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:25:57.749 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:25:57.749 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:25:57.749 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:57.749 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:57.749 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:57.750 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:25:57.750 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:25:57.750 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:25:57.750 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:57.750 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:25:57.750 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:57.750 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:57.750 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:57.750 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:25:57.750 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:25:57.750 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:57.750 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:57.750 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:25:57.750 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:25:57.750 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:25:57.750 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:25:57.750 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:57.750 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:57.750 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:25:57.750 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:57.750 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ up == up ]] 00:25:57.750 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:57.750 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:57.750 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:57.750 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:57.750 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:57.750 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:57.750 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:57.750 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:25:57.750 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:57.750 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ up == up ]] 00:25:57.750 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:57.750 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@423 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:57.750 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:57.750 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:57.750 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:57.750 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:25:57.750 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # is_hw=yes 00:25:57.750 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:25:57.750 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:25:57.750 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:25:57.750 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:57.750 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:57.750 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:57.750 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:57.750 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:57.750 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:57.750 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:57.750 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:57.750 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:57.750 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:57.750 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:57.750 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:57.750 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:57.750 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:57.750 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:57.750 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:57.750 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:57.750 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:57.750 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:58.009 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk 
ip link set lo up 00:25:58.009 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:58.009 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:58.009 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:58.009 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:58.009 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:25:58.009 00:25:58.009 --- 10.0.0.2 ping statistics --- 00:25:58.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:58.009 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:25:58.009 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:58.009 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:58.009 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:25:58.009 00:25:58.009 --- 10.0.0.1 ping statistics --- 00:25:58.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:58.009 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:25:58.009 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:58.009 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # return 0 00:25:58.009 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:25:58.009 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:58.009 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:25:58.009 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:25:58.009 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:58.009 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:25:58.009 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:25:58.009 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:25:58.009 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:25:58.009 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:58.009 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.009 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@505 -- # nvmfpid=1431605 00:25:58.009 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:58.009 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@506 -- # waitforlisten 1431605 00:25:58.009 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@831 -- # '[' -z 1431605 ']' 00:25:58.009 14:41:49 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:58.009 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:58.009 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:58.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:58.009 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:58.009 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.009 [2024-11-02 14:41:49.923149] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:25:58.009 [2024-11-02 14:41:49.923223] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:58.009 [2024-11-02 14:41:49.992236] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:58.267 [2024-11-02 14:41:50.088268] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:58.267 [2024-11-02 14:41:50.088335] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:58.267 [2024-11-02 14:41:50.088350] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:58.267 [2024-11-02 14:41:50.088363] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:58.267 [2024-11-02 14:41:50.088374] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:58.267 [2024-11-02 14:41:50.088443] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:58.267 [2024-11-02 14:41:50.088503] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:25:58.267 [2024-11-02 14:41:50.088632] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:25:58.267 [2024-11-02 14:41:50.088635] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:58.267 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:58.267 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # return 0 00:25:58.267 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:25:58.267 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:58.267 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.267 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:58.267 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:58.267 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.267 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.267 [2024-11-02 14:41:50.241632] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:58.267 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.267 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:25:58.267 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:58.267 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:58.267 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.267 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.267 Malloc1 00:25:58.267 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.267 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:25:58.267 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.267 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.267 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.267 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:58.267 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.267 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
00:25:58.267 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.267 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:58.267 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.267 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.267 [2024-11-02 14:41:50.297008] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:58.267 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.267 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:58.267 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:25:58.267 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.267 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.525 Malloc2 00:25:58.525 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.525 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:25:58.525 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.525 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.526 14:41:50 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.526 Malloc3 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.526 Malloc4 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.526 Malloc5 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.526 Malloc6 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.526 Malloc7 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 
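The trace above repeats the same four-step bring-up for each subsystem: create a malloc bdev, create the subsystem, attach the bdev as a namespace, and add a TCP listener. Condensed into a standalone form, the loop looks roughly like the sketch below; it assumes the SPDK nvmf target is already running with the TCP transport created (the harness does that earlier in the run), and it substitutes the stock scripts/rpc.py client for the harness's rpc_cmd wrapper.

# Sketch of the per-subsystem setup traced above (multiconnection.sh lines 21-25).
# Assumes a running nvmf target with the TCP transport already created;
# rpc_cmd in the trace is a thin harness wrapper around scripts/rpc.py.
for i in $(seq 1 11); do
    ./scripts/rpc.py bdev_malloc_create 64 512 -b "Malloc$i"                            # 64 MiB malloc bdev, 512-byte blocks
    ./scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i" # allow any host, serial SPDK$i
    ./scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"     # expose the bdev as a namespace
    ./scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.2 -s 4420                                                      # listen on 10.0.0.2:4420
done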
00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.526 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.786 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.786 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:58.786 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:25:58.786 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.786 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.786 Malloc8 00:25:58.786 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.786 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:25:58.786 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.786 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.786 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.786 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:25:58.786 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.786 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.786 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.786 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:25:58.786 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.786 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.786 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.786 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:58.786 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:25:58.786 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.786 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.786 Malloc9 00:25:58.786 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.786 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:25:58.786 14:41:50 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.786 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.787 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.787 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:25:58.787 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.787 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.787 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.787 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:25:58.787 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.787 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.787 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.787 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:58.787 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:25:58.787 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.787 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.787 Malloc10 00:25:58.787 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.787 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:25:58.787 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.787 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.787 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.787 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:25:58.787 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.787 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.787 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.787 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:25:58.787 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.787 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:25:58.787 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.787 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:58.787 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:25:58.787 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.787 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.787 Malloc11 00:25:58.787 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.787 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:25:58.787 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.787 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.787 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.787 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:25:58.787 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.787 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.787 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.787 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:25:58.787 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.787 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.787 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.787 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:25:58.787 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:58.787 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:59.356 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:25:59.356 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:59.356 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:59.357 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:59.357 14:41:51 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:01.885 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:01.885 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:01.885 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:26:01.885 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:01.885 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:01.885 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:01.885 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:01.885 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:26:02.143 14:41:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:26:02.143 14:41:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:02.143 14:41:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:02.143 14:41:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:02.143 14:41:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:04.674 14:41:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:04.675 14:41:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:04.675 14:41:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:26:04.675 14:41:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:04.675 14:41:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:04.675 14:41:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:04.675 14:41:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:04.675 14:41:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:26:04.934 14:41:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:26:04.934 14:41:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:04.934 14:41:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 
nvme_devices=0 00:26:04.934 14:41:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:04.934 14:41:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:07.469 14:41:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:07.469 14:41:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:07.469 14:41:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:26:07.469 14:41:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:07.469 14:41:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:07.469 14:41:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:07.469 14:41:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:07.469 14:41:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:26:07.731 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:26:07.731 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:07.731 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:07.731 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:07.731 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:09.645 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:09.645 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:09.645 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:26:09.645 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:09.645 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:09.645 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:09.645 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:09.645 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:26:10.585 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:26:10.585 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # 
local i=0 00:26:10.585 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:10.585 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:10.585 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:12.490 14:42:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:12.490 14:42:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:12.490 14:42:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:26:12.490 14:42:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:12.490 14:42:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:12.491 14:42:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:12.491 14:42:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:12.491 14:42:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:26:13.455 14:42:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:26:13.455 14:42:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:13.455 14:42:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:13.455 14:42:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:13.455 14:42:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:15.400 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:15.400 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:15.400 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:26:15.400 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:15.400 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:15.400 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:15.401 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:15.401 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:26:15.968 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@30 -- # waitforserial SPDK7 00:26:15.968 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:15.968 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:15.968 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:15.968 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:18.496 14:42:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:18.496 14:42:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:18.496 14:42:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:26:18.496 14:42:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:18.496 14:42:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:18.496 14:42:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:18.496 14:42:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:18.496 14:42:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:26:18.754 14:42:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:26:18.754 14:42:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:18.754 14:42:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:18.754 14:42:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:18.754 14:42:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:21.290 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:21.290 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:21.290 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:26:21.290 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:21.290 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:21.290 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:21.290 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:21.290 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:26:21.548 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:26:21.548 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:21.548 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:21.548 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:21.548 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:24.076 14:42:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:24.076 14:42:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:24.076 14:42:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:26:24.076 14:42:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:24.076 14:42:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:24.076 14:42:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:24.076 14:42:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:24.076 14:42:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:26:24.645 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:26:24.645 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:24.645 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:24.645 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:24.645 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:26.543 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:26.543 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:26.543 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:26:26.543 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:26.543 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:26.543 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:26.543 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:26.543 14:42:18 
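On the host side, the trace runs the same connect-and-wait pair for every subsystem: nvme connect over TCP, then poll lsblk until a block device with the expected serial appears. A condensed sketch of that pattern follows; wait_for_serial is a hypothetical stand-in name for the harness's waitforserial helper, and the hostnqn/hostid values are copied from the trace.

# Sketch of the connect loop traced above (multiconnection.sh lines 28-30).
# wait_for_serial is a stand-in for the harness's waitforserial helper.
wait_for_serial() {
    local serial=$1 i=0
    while (( i++ <= 15 )); do               # up to 16 attempts, 2 s apart, as in the trace
        sleep 2
        # the namespace is usable once lsblk reports a device with this serial
        (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )) && return 0
    done
    return 1
}

for i in $(seq 1 11); do
    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
                 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 \
                 -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.2 -s 4420
    wait_for_serial "SPDK$i"
done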
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:26:27.478 14:42:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:26:27.478 14:42:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:27.478 14:42:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:27.478 14:42:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:27.478 14:42:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:29.381 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:29.381 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:29.381 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:26:29.381 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:29.381 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:29.381 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:29.381 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:26:29.381 [global] 00:26:29.381 thread=1 00:26:29.381 invalidate=1 00:26:29.381 rw=read 00:26:29.381 time_based=1 00:26:29.381 runtime=10 00:26:29.381 ioengine=libaio 00:26:29.381 direct=1 00:26:29.381 bs=262144 00:26:29.381 iodepth=64 00:26:29.381 norandommap=1 00:26:29.381 numjobs=1 00:26:29.381 00:26:29.381 [job0] 00:26:29.381 filename=/dev/nvme0n1 00:26:29.381 [job1] 00:26:29.381 filename=/dev/nvme10n1 00:26:29.381 [job2] 00:26:29.381 filename=/dev/nvme1n1 00:26:29.381 [job3] 00:26:29.381 filename=/dev/nvme2n1 00:26:29.381 [job4] 00:26:29.381 filename=/dev/nvme3n1 00:26:29.381 [job5] 00:26:29.381 filename=/dev/nvme4n1 00:26:29.381 [job6] 00:26:29.381 filename=/dev/nvme5n1 00:26:29.381 [job7] 00:26:29.381 filename=/dev/nvme6n1 00:26:29.381 [job8] 00:26:29.381 filename=/dev/nvme7n1 00:26:29.381 [job9] 00:26:29.381 filename=/dev/nvme8n1 00:26:29.381 [job10] 00:26:29.381 filename=/dev/nvme9n1 00:26:29.639 Could not set queue depth (nvme0n1) 00:26:29.639 Could not set queue depth (nvme10n1) 00:26:29.639 Could not set queue depth (nvme1n1) 00:26:29.639 Could not set queue depth (nvme2n1) 00:26:29.639 Could not set queue depth (nvme3n1) 00:26:29.639 Could not set queue depth (nvme4n1) 00:26:29.639 Could not set queue depth (nvme5n1) 00:26:29.639 Could not set queue depth (nvme6n1) 00:26:29.639 Could not set queue depth (nvme7n1) 00:26:29.639 Could not set queue depth (nvme8n1) 00:26:29.639 Could not set queue depth (nvme9n1) 00:26:29.639 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:29.639 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 
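The fio-wrapper call turns its -p/-i/-d/-t/-r arguments into a generated job file, and what it produced is echoed back in the [global] section and the per-job filename list above. For reference, a hand-rolled command-line equivalent of that read pass (a sketch only, looping over the same eleven /dev/nvme*n1 namespaces in the order fio listed them) would be:

# Sketch: the generated job file above, expressed as fio command-line flags.
# 256 KiB sequential reads, queue depth 64, 10 s time-based run, one job per namespace.
args=(--thread --invalidate=1 --rw=read --time_based --runtime=10
      --ioengine=libaio --direct=1 --bs=262144 --iodepth=64 --norandommap --numjobs=1)
n=0
for dev in /dev/nvme0n1 /dev/nvme10n1 /dev/nvme{1..9}n1; do   # job0..job10 as listed above
    args+=(--name="job$n" --filename="$dev")
    n=$((n + 1))
done
fio "${args[@]}"

The randwrite pass later in the log differs only in the -t argument, which becomes rw=randwrite in the generated job file.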
256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:29.639 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:29.639 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:29.639 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:29.639 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:29.639 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:29.639 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:29.639 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:29.639 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:29.639 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:29.639 fio-3.35 00:26:29.639 Starting 11 threads 00:26:41.865 00:26:41.865 job0: (groupid=0, jobs=1): err= 0: pid=1436361: Sat Nov 2 14:42:32 2024 00:26:41.865 read: IOPS=566, BW=142MiB/s (148MB/s)(1440MiB/10171msec) 00:26:41.865 slat (usec): min=13, max=500554, avg=1370.68, stdev=10926.24 00:26:41.865 clat (msec): min=19, max=1017, avg=111.51, stdev=173.40 00:26:41.865 lat (msec): min=19, max=1209, avg=112.88, stdev=175.38 00:26:41.865 clat percentiles (msec): 00:26:41.865 | 1.00th=[ 27], 5.00th=[ 29], 10.00th=[ 31], 20.00th=[ 35], 00:26:41.865 | 30.00th=[ 36], 40.00th=[ 37], 50.00th=[ 39], 60.00th=[ 40], 00:26:41.865 | 70.00th=[ 43], 80.00th=[ 113], 90.00th=[ 376], 95.00th=[ 527], 00:26:41.865 | 99.00th=[ 844], 99.50th=[ 869], 99.90th=[ 944], 99.95th=[ 953], 00:26:41.865 | 99.99th=[ 1020] 00:26:41.865 bw ( KiB/s): min= 9728, max=445952, per=22.17%, avg=153469.74, stdev=174119.15, samples=19 00:26:41.865 iops : min= 38, max= 1742, avg=599.42, stdev=680.12, samples=19 00:26:41.865 lat (msec) : 20=0.02%, 50=72.86%, 100=6.70%, 250=5.94%, 500=8.87% 00:26:41.865 lat (msec) : 750=3.80%, 1000=1.79%, 2000=0.02% 00:26:41.865 cpu : usr=0.27%, sys=2.01%, ctx=1233, majf=0, minf=4097 00:26:41.865 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:26:41.865 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.865 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:41.865 issued rwts: total=5760,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.865 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:41.865 job1: (groupid=0, jobs=1): err= 0: pid=1436362: Sat Nov 2 14:42:32 2024 00:26:41.865 read: IOPS=242, BW=60.6MiB/s (63.5MB/s)(608MiB/10026msec) 00:26:41.865 slat (usec): min=9, max=460089, avg=2237.49, stdev=20585.29 00:26:41.865 clat (msec): min=2, max=1483, avg=261.65, stdev=350.33 00:26:41.865 lat (msec): min=2, max=1483, avg=263.88, stdev=353.86 00:26:41.865 clat percentiles (msec): 00:26:41.865 | 1.00th=[ 9], 5.00th=[ 14], 10.00th=[ 16], 20.00th=[ 29], 00:26:41.865 | 30.00th=[ 45], 40.00th=[ 62], 50.00th=[ 82], 60.00th=[ 110], 00:26:41.865 | 70.00th=[ 241], 80.00th=[ 542], 90.00th=[ 885], 95.00th=[ 1099], 00:26:41.865 | 99.00th=[ 1301], 99.50th=[ 1385], 99.90th=[ 1401], 99.95th=[ 1452], 00:26:41.865 | 99.99th=[ 1485] 00:26:41.865 bw ( KiB/s): min= 1021, 
max=224256, per=8.75%, avg=60570.95, stdev=63844.14, samples=20 00:26:41.865 iops : min= 3, max= 876, avg=236.50, stdev=249.41, samples=20 00:26:41.865 lat (msec) : 4=0.08%, 10=1.60%, 20=12.72%, 50=19.34%, 100=24.36% 00:26:41.865 lat (msec) : 250=12.47%, 500=8.56%, 750=6.71%, 1000=6.75%, 2000=7.41% 00:26:41.865 cpu : usr=0.11%, sys=0.92%, ctx=566, majf=0, minf=4097 00:26:41.865 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.3%, >=64=97.4% 00:26:41.865 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.865 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:41.865 issued rwts: total=2430,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.865 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:41.865 job2: (groupid=0, jobs=1): err= 0: pid=1436363: Sat Nov 2 14:42:32 2024 00:26:41.865 read: IOPS=152, BW=38.1MiB/s (39.9MB/s)(388MiB/10174msec) 00:26:41.865 slat (usec): min=9, max=270402, avg=4716.94, stdev=20544.78 00:26:41.865 clat (msec): min=2, max=1102, avg=414.88, stdev=215.72 00:26:41.865 lat (msec): min=3, max=1218, avg=419.59, stdev=217.51 00:26:41.865 clat percentiles (msec): 00:26:41.865 | 1.00th=[ 14], 5.00th=[ 109], 10.00th=[ 148], 20.00th=[ 188], 00:26:41.865 | 30.00th=[ 347], 40.00th=[ 376], 50.00th=[ 397], 60.00th=[ 414], 00:26:41.865 | 70.00th=[ 510], 80.00th=[ 575], 90.00th=[ 709], 95.00th=[ 844], 00:26:41.865 | 99.00th=[ 953], 99.50th=[ 1099], 99.90th=[ 1099], 99.95th=[ 1099], 00:26:41.865 | 99.99th=[ 1099] 00:26:41.865 bw ( KiB/s): min=13312, max=78848, per=5.50%, avg=38034.65, stdev=18098.06, samples=20 00:26:41.865 iops : min= 52, max= 308, avg=148.50, stdev=70.71, samples=20 00:26:41.865 lat (msec) : 4=0.13%, 10=0.65%, 20=0.39%, 50=3.35%, 100=0.26% 00:26:41.865 lat (msec) : 250=18.00%, 500=46.32%, 750=22.90%, 1000=7.23%, 2000=0.77% 00:26:41.865 cpu : usr=0.07%, sys=0.51%, ctx=442, majf=0, minf=4097 00:26:41.865 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.0%, 32=2.1%, >=64=95.9% 00:26:41.865 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.865 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:41.865 issued rwts: total=1550,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.865 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:41.865 job3: (groupid=0, jobs=1): err= 0: pid=1436367: Sat Nov 2 14:42:32 2024 00:26:41.865 read: IOPS=164, BW=41.2MiB/s (43.2MB/s)(417MiB/10111msec) 00:26:41.865 slat (usec): min=8, max=344070, avg=3554.90, stdev=24049.39 00:26:41.865 clat (usec): min=1110, max=1225.5k, avg=384074.00, stdev=334073.69 00:26:41.865 lat (usec): min=1146, max=1225.5k, avg=387628.89, stdev=338173.00 00:26:41.865 clat percentiles (msec): 00:26:41.865 | 1.00th=[ 6], 5.00th=[ 10], 10.00th=[ 25], 20.00th=[ 54], 00:26:41.865 | 30.00th=[ 107], 40.00th=[ 144], 50.00th=[ 313], 60.00th=[ 481], 00:26:41.865 | 70.00th=[ 609], 80.00th=[ 735], 90.00th=[ 877], 95.00th=[ 936], 00:26:41.865 | 99.00th=[ 1183], 99.50th=[ 1234], 99.90th=[ 1234], 99.95th=[ 1234], 00:26:41.865 | 99.99th=[ 1234] 00:26:41.865 bw ( KiB/s): min= 6656, max=201728, per=5.94%, avg=41082.05, stdev=49580.14, samples=20 00:26:41.865 iops : min= 26, max= 788, avg=160.40, stdev=193.69, samples=20 00:26:41.865 lat (msec) : 2=0.06%, 4=0.36%, 10=4.92%, 20=2.70%, 50=10.85% 00:26:41.865 lat (msec) : 100=9.65%, 250=19.84%, 500=12.71%, 750=19.24%, 1000=16.85% 00:26:41.865 lat (msec) : 2000=2.82% 00:26:41.865 cpu : usr=0.06%, sys=0.59%, ctx=379, majf=0, minf=4097 00:26:41.865 IO 
depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=1.9%, >=64=96.2% 00:26:41.865 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.865 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:41.865 issued rwts: total=1668,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.865 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:41.865 job4: (groupid=0, jobs=1): err= 0: pid=1436371: Sat Nov 2 14:42:32 2024 00:26:41.865 read: IOPS=138, BW=34.7MiB/s (36.4MB/s)(353MiB/10171msec) 00:26:41.865 slat (usec): min=9, max=347506, avg=3392.90, stdev=21167.58 00:26:41.865 clat (usec): min=1894, max=1681.5k, avg=456882.40, stdev=344090.76 00:26:41.865 lat (usec): min=1919, max=1681.5k, avg=460275.29, stdev=346371.35 00:26:41.865 clat percentiles (msec): 00:26:41.865 | 1.00th=[ 22], 5.00th=[ 45], 10.00th=[ 73], 20.00th=[ 161], 00:26:41.865 | 30.00th=[ 234], 40.00th=[ 296], 50.00th=[ 376], 60.00th=[ 493], 00:26:41.865 | 70.00th=[ 575], 80.00th=[ 693], 90.00th=[ 978], 95.00th=[ 1116], 00:26:41.865 | 99.00th=[ 1569], 99.50th=[ 1569], 99.90th=[ 1620], 99.95th=[ 1687], 00:26:41.865 | 99.99th=[ 1687] 00:26:41.865 bw ( KiB/s): min= 1021, max=87040, per=4.99%, avg=34528.80, stdev=24314.50, samples=20 00:26:41.865 iops : min= 3, max= 340, avg=134.80, stdev=95.03, samples=20 00:26:41.865 lat (msec) : 2=0.07%, 4=0.21%, 10=0.14%, 20=0.50%, 50=4.32% 00:26:41.865 lat (msec) : 100=10.33%, 250=16.63%, 500=30.15%, 750=19.75%, 1000=8.49% 00:26:41.865 lat (msec) : 2000=9.41% 00:26:41.865 cpu : usr=0.08%, sys=0.44%, ctx=250, majf=0, minf=3721 00:26:41.865 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.3%, >=64=95.5% 00:26:41.865 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.865 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:41.865 issued rwts: total=1413,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.865 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:41.865 job5: (groupid=0, jobs=1): err= 0: pid=1436393: Sat Nov 2 14:42:32 2024 00:26:41.865 read: IOPS=186, BW=46.6MiB/s (48.9MB/s)(475MiB/10175msec) 00:26:41.865 slat (usec): min=9, max=581546, avg=3539.46, stdev=21696.07 00:26:41.865 clat (usec): min=1569, max=1438.0k, avg=339130.12, stdev=312875.81 00:26:41.865 lat (usec): min=1593, max=1438.1k, avg=342669.57, stdev=315990.24 00:26:41.865 clat percentiles (msec): 00:26:41.865 | 1.00th=[ 3], 5.00th=[ 8], 10.00th=[ 12], 20.00th=[ 21], 00:26:41.865 | 30.00th=[ 28], 40.00th=[ 313], 50.00th=[ 359], 60.00th=[ 380], 00:26:41.865 | 70.00th=[ 422], 80.00th=[ 535], 90.00th=[ 735], 95.00th=[ 986], 00:26:41.865 | 99.00th=[ 1318], 99.50th=[ 1334], 99.90th=[ 1435], 99.95th=[ 1435], 00:26:41.865 | 99.99th=[ 1435] 00:26:41.865 bw ( KiB/s): min=11264, max=216064, per=6.78%, avg=46942.70, stdev=42236.41, samples=20 00:26:41.865 iops : min= 44, max= 844, avg=183.30, stdev=165.00, samples=20 00:26:41.865 lat (msec) : 2=0.26%, 4=1.63%, 10=5.64%, 20=11.33%, 50=15.70% 00:26:41.865 lat (msec) : 100=0.58%, 250=3.32%, 500=39.09%, 750=12.75%, 1000=5.11% 00:26:41.865 lat (msec) : 2000=4.58% 00:26:41.865 cpu : usr=0.11%, sys=0.88%, ctx=876, majf=0, minf=4097 00:26:41.865 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.7%, >=64=96.7% 00:26:41.865 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.865 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:41.865 issued rwts: total=1898,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.865 
latency : target=0, window=0, percentile=100.00%, depth=64 00:26:41.866 job6: (groupid=0, jobs=1): err= 0: pid=1436404: Sat Nov 2 14:42:32 2024 00:26:41.866 read: IOPS=130, BW=32.6MiB/s (34.2MB/s)(328MiB/10076msec) 00:26:41.866 slat (usec): min=8, max=538396, avg=7166.64, stdev=35965.29 00:26:41.866 clat (msec): min=2, max=2057, avg=483.63, stdev=484.96 00:26:41.866 lat (msec): min=2, max=2057, avg=490.80, stdev=491.86 00:26:41.866 clat percentiles (msec): 00:26:41.866 | 1.00th=[ 6], 5.00th=[ 17], 10.00th=[ 36], 20.00th=[ 74], 00:26:41.866 | 30.00th=[ 114], 40.00th=[ 197], 50.00th=[ 351], 60.00th=[ 435], 00:26:41.866 | 70.00th=[ 506], 80.00th=[ 1003], 90.00th=[ 1301], 95.00th=[ 1519], 00:26:41.866 | 99.00th=[ 1703], 99.50th=[ 1787], 99.90th=[ 1905], 99.95th=[ 2056], 00:26:41.866 | 99.99th=[ 2056] 00:26:41.866 bw ( KiB/s): min= 3072, max=165888, per=4.86%, avg=33645.95, stdev=39055.61, samples=19 00:26:41.866 iops : min= 12, max= 648, avg=131.37, stdev=152.56, samples=19 00:26:41.866 lat (msec) : 4=0.46%, 10=3.05%, 20=2.74%, 50=8.61%, 100=10.97% 00:26:41.866 lat (msec) : 250=16.98%, 500=25.29%, 750=7.46%, 1000=4.42%, 2000=19.95% 00:26:41.866 lat (msec) : >=2000=0.08% 00:26:41.866 cpu : usr=0.03%, sys=0.47%, ctx=196, majf=0, minf=4097 00:26:41.866 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.4%, >=64=95.2% 00:26:41.866 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.866 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:41.866 issued rwts: total=1313,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.866 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:41.866 job7: (groupid=0, jobs=1): err= 0: pid=1436417: Sat Nov 2 14:42:32 2024 00:26:41.866 read: IOPS=578, BW=145MiB/s (152MB/s)(1473MiB/10182msec) 00:26:41.866 slat (usec): min=9, max=519906, avg=1422.45, stdev=9753.57 00:26:41.866 clat (msec): min=12, max=841, avg=109.09, stdev=124.24 00:26:41.866 lat (msec): min=12, max=841, avg=110.52, stdev=125.49 00:26:41.866 clat percentiles (msec): 00:26:41.866 | 1.00th=[ 31], 5.00th=[ 36], 10.00th=[ 39], 20.00th=[ 41], 00:26:41.866 | 30.00th=[ 43], 40.00th=[ 46], 50.00th=[ 53], 60.00th=[ 56], 00:26:41.866 | 70.00th=[ 66], 80.00th=[ 178], 90.00th=[ 296], 95.00th=[ 376], 00:26:41.866 | 99.00th=[ 575], 99.50th=[ 625], 99.90th=[ 718], 99.95th=[ 802], 00:26:41.866 | 99.99th=[ 844] 00:26:41.866 bw ( KiB/s): min=32256, max=408064, per=21.55%, avg=149158.10, stdev=133544.27, samples=20 00:26:41.866 iops : min= 126, max= 1594, avg=582.60, stdev=521.64, samples=20 00:26:41.866 lat (msec) : 20=0.46%, 50=43.26%, 100=30.67%, 250=11.15%, 500=12.37% 00:26:41.866 lat (msec) : 750=2.00%, 1000=0.08% 00:26:41.866 cpu : usr=0.32%, sys=1.74%, ctx=797, majf=0, minf=4097 00:26:41.866 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:26:41.866 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.866 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:41.866 issued rwts: total=5892,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.866 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:41.866 job8: (groupid=0, jobs=1): err= 0: pid=1436470: Sat Nov 2 14:42:32 2024 00:26:41.866 read: IOPS=141, BW=35.4MiB/s (37.1MB/s)(355MiB/10019msec) 00:26:41.866 slat (usec): min=13, max=332595, avg=7042.17, stdev=32506.97 00:26:41.866 clat (msec): min=17, max=1416, avg=444.37, stdev=400.73 00:26:41.866 lat (msec): min=21, max=1416, avg=451.41, stdev=407.01 00:26:41.866 
clat percentiles (msec): 00:26:41.866 | 1.00th=[ 27], 5.00th=[ 38], 10.00th=[ 43], 20.00th=[ 56], 00:26:41.866 | 30.00th=[ 78], 40.00th=[ 163], 50.00th=[ 351], 60.00th=[ 481], 00:26:41.866 | 70.00th=[ 693], 80.00th=[ 894], 90.00th=[ 1099], 95.00th=[ 1167], 00:26:41.866 | 99.00th=[ 1284], 99.50th=[ 1301], 99.90th=[ 1418], 99.95th=[ 1418], 00:26:41.866 | 99.99th=[ 1418] 00:26:41.866 bw ( KiB/s): min=11264, max=153600, per=5.02%, avg=34713.85, stdev=41536.79, samples=20 00:26:41.866 iops : min= 44, max= 600, avg=135.50, stdev=162.30, samples=20 00:26:41.866 lat (msec) : 20=0.07%, 50=15.57%, 100=17.76%, 250=11.35%, 500=18.60% 00:26:41.866 lat (msec) : 750=10.22%, 1000=12.97%, 2000=13.46% 00:26:41.866 cpu : usr=0.08%, sys=0.61%, ctx=189, majf=0, minf=4097 00:26:41.866 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.3%, >=64=95.6% 00:26:41.866 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.866 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:41.866 issued rwts: total=1419,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.866 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:41.866 job9: (groupid=0, jobs=1): err= 0: pid=1436497: Sat Nov 2 14:42:32 2024 00:26:41.866 read: IOPS=289, BW=72.5MiB/s (76.0MB/s)(738MiB/10180msec) 00:26:41.866 slat (usec): min=11, max=518641, avg=3069.80, stdev=18271.98 00:26:41.866 clat (msec): min=2, max=1089, avg=217.47, stdev=191.52 00:26:41.866 lat (msec): min=2, max=1233, avg=220.54, stdev=194.05 00:26:41.866 clat percentiles (msec): 00:26:41.866 | 1.00th=[ 13], 5.00th=[ 31], 10.00th=[ 38], 20.00th=[ 65], 00:26:41.866 | 30.00th=[ 87], 40.00th=[ 101], 50.00th=[ 140], 60.00th=[ 209], 00:26:41.866 | 70.00th=[ 321], 80.00th=[ 376], 90.00th=[ 430], 95.00th=[ 575], 00:26:41.866 | 99.00th=[ 927], 99.50th=[ 961], 99.90th=[ 1053], 99.95th=[ 1053], 00:26:41.866 | 99.99th=[ 1083] 00:26:41.866 bw ( KiB/s): min= 9216, max=248832, per=10.68%, avg=73923.00, stdev=66151.51, samples=20 00:26:41.866 iops : min= 36, max= 972, avg=288.70, stdev=258.43, samples=20 00:26:41.866 lat (msec) : 4=0.07%, 10=0.75%, 20=1.49%, 50=15.14%, 100=22.02% 00:26:41.866 lat (msec) : 250=24.32%, 500=30.35%, 750=3.52%, 1000=2.03%, 2000=0.30% 00:26:41.866 cpu : usr=0.18%, sys=0.87%, ctx=580, majf=0, minf=4097 00:26:41.866 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:26:41.866 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.866 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:41.866 issued rwts: total=2952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.866 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:41.866 job10: (groupid=0, jobs=1): err= 0: pid=1436513: Sat Nov 2 14:42:32 2024 00:26:41.866 read: IOPS=121, BW=30.3MiB/s (31.8MB/s)(309MiB/10177msec) 00:26:41.866 slat (usec): min=8, max=486533, avg=7253.29, stdev=36525.02 00:26:41.866 clat (msec): min=25, max=1677, avg=520.12, stdev=477.88 00:26:41.866 lat (msec): min=25, max=1677, avg=527.37, stdev=484.26 00:26:41.866 clat percentiles (msec): 00:26:41.866 | 1.00th=[ 30], 5.00th=[ 35], 10.00th=[ 42], 20.00th=[ 50], 00:26:41.866 | 30.00th=[ 53], 40.00th=[ 58], 50.00th=[ 493], 60.00th=[ 760], 00:26:41.866 | 70.00th=[ 885], 80.00th=[ 1003], 90.00th=[ 1183], 95.00th=[ 1284], 00:26:41.866 | 99.00th=[ 1502], 99.50th=[ 1552], 99.90th=[ 1670], 99.95th=[ 1670], 00:26:41.866 | 99.99th=[ 1670] 00:26:41.866 bw ( KiB/s): min= 4096, max=283136, per=4.33%, avg=29949.00, 
stdev=60168.84, samples=20 00:26:41.866 iops : min= 16, max= 1106, avg=116.90, stdev=235.06, samples=20 00:26:41.866 lat (msec) : 50=21.56%, 100=23.26%, 250=0.65%, 500=4.62%, 750=9.32% 00:26:41.866 lat (msec) : 1000=21.31%, 2000=19.29% 00:26:41.866 cpu : usr=0.08%, sys=0.38%, ctx=188, majf=0, minf=4098 00:26:41.866 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.3%, 32=2.6%, >=64=94.9% 00:26:41.866 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.866 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:41.866 issued rwts: total=1234,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.866 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:41.866 00:26:41.866 Run status group 0 (all jobs): 00:26:41.866 READ: bw=676MiB/s (709MB/s), 30.3MiB/s-145MiB/s (31.8MB/s-152MB/s), io=6882MiB (7217MB), run=10019-10182msec 00:26:41.866 00:26:41.866 Disk stats (read/write): 00:26:41.866 nvme0n1: ios=11332/0, merge=0/0, ticks=1208789/0, in_queue=1208789, util=97.09% 00:26:41.866 nvme10n1: ios=4612/0, merge=0/0, ticks=1243736/0, in_queue=1243736, util=97.29% 00:26:41.866 nvme1n1: ios=2973/0, merge=0/0, ticks=1190231/0, in_queue=1190231, util=97.55% 00:26:41.866 nvme2n1: ios=3123/0, merge=0/0, ticks=1240733/0, in_queue=1240733, util=97.69% 00:26:41.866 nvme3n1: ios=2698/0, merge=0/0, ticks=1161937/0, in_queue=1161937, util=97.78% 00:26:41.866 nvme4n1: ios=3669/0, merge=0/0, ticks=1193280/0, in_queue=1193280, util=98.12% 00:26:41.866 nvme5n1: ios=2384/0, merge=0/0, ticks=1240720/0, in_queue=1240720, util=98.28% 00:26:41.866 nvme6n1: ios=11694/0, merge=0/0, ticks=1227259/0, in_queue=1227259, util=98.47% 00:26:41.866 nvme7n1: ios=2483/0, merge=0/0, ticks=1238679/0, in_queue=1238679, util=98.87% 00:26:41.866 nvme8n1: ios=5838/0, merge=0/0, ticks=1248723/0, in_queue=1248723, util=99.11% 00:26:41.866 nvme9n1: ios=2466/0, merge=0/0, ticks=1268103/0, in_queue=1268103, util=99.27% 00:26:41.866 14:42:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:26:41.866 [global] 00:26:41.866 thread=1 00:26:41.866 invalidate=1 00:26:41.866 rw=randwrite 00:26:41.866 time_based=1 00:26:41.866 runtime=10 00:26:41.866 ioengine=libaio 00:26:41.866 direct=1 00:26:41.866 bs=262144 00:26:41.866 iodepth=64 00:26:41.866 norandommap=1 00:26:41.866 numjobs=1 00:26:41.866 00:26:41.866 [job0] 00:26:41.866 filename=/dev/nvme0n1 00:26:41.866 [job1] 00:26:41.866 filename=/dev/nvme10n1 00:26:41.866 [job2] 00:26:41.866 filename=/dev/nvme1n1 00:26:41.866 [job3] 00:26:41.866 filename=/dev/nvme2n1 00:26:41.866 [job4] 00:26:41.866 filename=/dev/nvme3n1 00:26:41.866 [job5] 00:26:41.866 filename=/dev/nvme4n1 00:26:41.866 [job6] 00:26:41.866 filename=/dev/nvme5n1 00:26:41.866 [job7] 00:26:41.866 filename=/dev/nvme6n1 00:26:41.866 [job8] 00:26:41.866 filename=/dev/nvme7n1 00:26:41.866 [job9] 00:26:41.866 filename=/dev/nvme8n1 00:26:41.866 [job10] 00:26:41.866 filename=/dev/nvme9n1 00:26:41.866 Could not set queue depth (nvme0n1) 00:26:41.867 Could not set queue depth (nvme10n1) 00:26:41.867 Could not set queue depth (nvme1n1) 00:26:41.867 Could not set queue depth (nvme2n1) 00:26:41.867 Could not set queue depth (nvme3n1) 00:26:41.867 Could not set queue depth (nvme4n1) 00:26:41.867 Could not set queue depth (nvme5n1) 00:26:41.867 Could not set queue depth (nvme6n1) 00:26:41.867 Could not set queue depth (nvme7n1) 00:26:41.867 Could not set 
queue depth (nvme8n1) 00:26:41.867 Could not set queue depth (nvme9n1) 00:26:41.867 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:41.867 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:41.867 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:41.867 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:41.867 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:41.867 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:41.867 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:41.867 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:41.867 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:41.867 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:41.867 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:41.867 fio-3.35 00:26:41.867 Starting 11 threads 00:26:51.844 00:26:51.844 job0: (groupid=0, jobs=1): err= 0: pid=1437105: Sat Nov 2 14:42:43 2024 00:26:51.844 write: IOPS=216, BW=54.1MiB/s (56.7MB/s)(552MiB/10201msec); 0 zone resets 00:26:51.844 slat (usec): min=24, max=240396, avg=3852.18, stdev=11110.02 00:26:51.844 clat (msec): min=16, max=810, avg=291.77, stdev=158.72 00:26:51.844 lat (msec): min=16, max=810, avg=295.62, stdev=160.50 00:26:51.844 clat percentiles (msec): 00:26:51.844 | 1.00th=[ 40], 5.00th=[ 79], 10.00th=[ 104], 20.00th=[ 142], 00:26:51.844 | 30.00th=[ 178], 40.00th=[ 213], 50.00th=[ 284], 60.00th=[ 326], 00:26:51.844 | 70.00th=[ 388], 80.00th=[ 451], 90.00th=[ 514], 95.00th=[ 558], 00:26:51.844 | 99.00th=[ 676], 99.50th=[ 735], 99.90th=[ 810], 99.95th=[ 810], 00:26:51.844 | 99.99th=[ 810] 00:26:51.844 bw ( KiB/s): min=26624, max=119808, per=7.04%, avg=54855.30, stdev=27300.54, samples=20 00:26:51.844 iops : min= 104, max= 468, avg=214.20, stdev=106.69, samples=20 00:26:51.844 lat (msec) : 20=0.27%, 50=1.86%, 100=7.07%, 250=36.63%, 500=42.93% 00:26:51.844 lat (msec) : 750=10.74%, 1000=0.50% 00:26:51.844 cpu : usr=0.72%, sys=0.80%, ctx=897, majf=0, minf=1 00:26:51.844 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.5%, >=64=97.1% 00:26:51.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.844 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:51.844 issued rwts: total=0,2206,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:51.844 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:51.844 job1: (groupid=0, jobs=1): err= 0: pid=1437118: Sat Nov 2 14:42:43 2024 00:26:51.844 write: IOPS=333, BW=83.4MiB/s (87.5MB/s)(845MiB/10131msec); 0 zone resets 00:26:51.844 slat (usec): min=20, max=120811, avg=1760.14, stdev=6037.58 00:26:51.844 clat (msec): min=5, max=729, avg=189.87, stdev=126.66 00:26:51.844 lat (msec): min=5, max=729, avg=191.63, stdev=127.79 00:26:51.844 clat percentiles (msec): 00:26:51.844 | 1.00th=[ 15], 5.00th=[ 45], 10.00th=[ 
67], 20.00th=[ 82], 00:26:51.844 | 30.00th=[ 115], 40.00th=[ 138], 50.00th=[ 159], 60.00th=[ 184], 00:26:51.844 | 70.00th=[ 218], 80.00th=[ 279], 90.00th=[ 368], 95.00th=[ 456], 00:26:51.844 | 99.00th=[ 600], 99.50th=[ 634], 99.90th=[ 709], 99.95th=[ 718], 00:26:51.844 | 99.99th=[ 726] 00:26:51.844 bw ( KiB/s): min=26677, max=162816, per=10.90%, avg=84903.10, stdev=37800.40, samples=20 00:26:51.844 iops : min= 104, max= 636, avg=331.60, stdev=147.71, samples=20 00:26:51.844 lat (msec) : 10=0.38%, 20=1.12%, 50=4.67%, 100=19.53%, 250=50.74% 00:26:51.844 lat (msec) : 500=20.36%, 750=3.20% 00:26:51.844 cpu : usr=1.03%, sys=1.29%, ctx=2054, majf=0, minf=1 00:26:51.844 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.1% 00:26:51.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.844 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:51.844 issued rwts: total=0,3380,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:51.844 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:51.844 job2: (groupid=0, jobs=1): err= 0: pid=1437119: Sat Nov 2 14:42:43 2024 00:26:51.844 write: IOPS=426, BW=107MiB/s (112MB/s)(1089MiB/10198msec); 0 zone resets 00:26:51.844 slat (usec): min=20, max=92298, avg=1341.45, stdev=4210.81 00:26:51.844 clat (usec): min=915, max=557314, avg=148422.60, stdev=125249.79 00:26:51.844 lat (usec): min=949, max=557519, avg=149764.06, stdev=125944.33 00:26:51.844 clat percentiles (msec): 00:26:51.844 | 1.00th=[ 4], 5.00th=[ 38], 10.00th=[ 53], 20.00th=[ 64], 00:26:51.844 | 30.00th=[ 68], 40.00th=[ 70], 50.00th=[ 87], 60.00th=[ 114], 00:26:51.844 | 70.00th=[ 165], 80.00th=[ 245], 90.00th=[ 363], 95.00th=[ 430], 00:26:51.844 | 99.00th=[ 510], 99.50th=[ 531], 99.90th=[ 550], 99.95th=[ 550], 00:26:51.844 | 99.99th=[ 558] 00:26:51.844 bw ( KiB/s): min=32768, max=238592, per=14.10%, avg=109841.10, stdev=71033.04, samples=20 00:26:51.844 iops : min= 128, max= 932, avg=429.00, stdev=277.54, samples=20 00:26:51.844 lat (usec) : 1000=0.02% 00:26:51.844 lat (msec) : 2=0.41%, 4=1.10%, 10=0.78%, 20=0.64%, 50=5.58% 00:26:51.844 lat (msec) : 100=46.37%, 250=25.65%, 500=18.05%, 750=1.38% 00:26:51.844 cpu : usr=1.29%, sys=1.43%, ctx=2071, majf=0, minf=1 00:26:51.844 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:26:51.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.844 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:51.844 issued rwts: total=0,4354,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:51.844 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:51.844 job3: (groupid=0, jobs=1): err= 0: pid=1437120: Sat Nov 2 14:42:43 2024 00:26:51.844 write: IOPS=308, BW=77.2MiB/s (80.9MB/s)(782MiB/10139msec); 0 zone resets 00:26:51.844 slat (usec): min=24, max=174949, avg=2184.40, stdev=8755.27 00:26:51.844 clat (msec): min=8, max=867, avg=204.98, stdev=198.51 00:26:51.844 lat (msec): min=8, max=873, avg=207.17, stdev=200.77 00:26:51.844 clat percentiles (msec): 00:26:51.844 | 1.00th=[ 22], 5.00th=[ 25], 10.00th=[ 31], 20.00th=[ 54], 00:26:51.844 | 30.00th=[ 57], 40.00th=[ 68], 50.00th=[ 107], 60.00th=[ 184], 00:26:51.844 | 70.00th=[ 255], 80.00th=[ 409], 90.00th=[ 493], 95.00th=[ 617], 00:26:51.844 | 99.00th=[ 760], 99.50th=[ 810], 99.90th=[ 852], 99.95th=[ 860], 00:26:51.844 | 99.99th=[ 869] 00:26:51.844 bw ( KiB/s): min=12288, max=315904, per=10.07%, avg=78456.20, stdev=78600.28, samples=20 00:26:51.844 iops : min= 48, max= 
1234, avg=306.35, stdev=306.88, samples=20 00:26:51.844 lat (msec) : 10=0.03%, 20=0.32%, 50=17.16%, 100=31.64%, 250=20.29% 00:26:51.844 lat (msec) : 500=21.57%, 750=7.57%, 1000=1.41% 00:26:51.844 cpu : usr=1.05%, sys=1.11%, ctx=1752, majf=0, minf=1 00:26:51.844 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:26:51.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.845 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:51.845 issued rwts: total=0,3129,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:51.845 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:51.845 job4: (groupid=0, jobs=1): err= 0: pid=1437121: Sat Nov 2 14:42:43 2024 00:26:51.845 write: IOPS=328, BW=82.1MiB/s (86.1MB/s)(834MiB/10157msec); 0 zone resets 00:26:51.845 slat (usec): min=15, max=152472, avg=2181.49, stdev=6767.06 00:26:51.845 clat (usec): min=884, max=717745, avg=192541.66, stdev=150055.18 00:26:51.845 lat (usec): min=905, max=725091, avg=194723.15, stdev=151878.90 00:26:51.845 clat percentiles (msec): 00:26:51.845 | 1.00th=[ 3], 5.00th=[ 11], 10.00th=[ 39], 20.00th=[ 79], 00:26:51.845 | 30.00th=[ 95], 40.00th=[ 126], 50.00th=[ 155], 60.00th=[ 184], 00:26:51.845 | 70.00th=[ 236], 80.00th=[ 284], 90.00th=[ 418], 95.00th=[ 514], 00:26:51.845 | 99.00th=[ 667], 99.50th=[ 684], 99.90th=[ 709], 99.95th=[ 709], 00:26:51.845 | 99.99th=[ 718] 00:26:51.845 bw ( KiB/s): min=26624, max=204697, per=10.75%, avg=83797.55, stdev=47386.31, samples=20 00:26:51.845 iops : min= 104, max= 799, avg=327.25, stdev=185.06, samples=20 00:26:51.845 lat (usec) : 1000=0.12% 00:26:51.845 lat (msec) : 2=0.75%, 4=0.78%, 10=3.30%, 20=1.80%, 50=5.57% 00:26:51.845 lat (msec) : 100=20.05%, 250=39.80%, 500=21.82%, 750=6.02% 00:26:51.845 cpu : usr=1.12%, sys=1.05%, ctx=1756, majf=0, minf=2 00:26:51.845 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:26:51.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.845 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:51.845 issued rwts: total=0,3337,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:51.845 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:51.845 job5: (groupid=0, jobs=1): err= 0: pid=1437122: Sat Nov 2 14:42:43 2024 00:26:51.845 write: IOPS=257, BW=64.5MiB/s (67.6MB/s)(654MiB/10136msec); 0 zone resets 00:26:51.845 slat (usec): min=18, max=119103, avg=2425.18, stdev=7922.64 00:26:51.845 clat (usec): min=1720, max=818104, avg=244717.96, stdev=163849.60 00:26:51.845 lat (usec): min=1808, max=826408, avg=247143.14, stdev=165729.36 00:26:51.845 clat percentiles (msec): 00:26:51.845 | 1.00th=[ 6], 5.00th=[ 40], 10.00th=[ 64], 20.00th=[ 89], 00:26:51.845 | 30.00th=[ 127], 40.00th=[ 167], 50.00th=[ 218], 60.00th=[ 268], 00:26:51.845 | 70.00th=[ 330], 80.00th=[ 397], 90.00th=[ 472], 95.00th=[ 550], 00:26:51.845 | 99.00th=[ 718], 99.50th=[ 776], 99.90th=[ 810], 99.95th=[ 810], 00:26:51.845 | 99.99th=[ 818] 00:26:51.845 bw ( KiB/s): min=19968, max=131072, per=8.38%, avg=65291.75, stdev=33036.20, samples=20 00:26:51.845 iops : min= 78, max= 512, avg=255.00, stdev=129.07, samples=20 00:26:51.845 lat (msec) : 2=0.04%, 4=0.54%, 10=1.38%, 20=0.69%, 50=4.44% 00:26:51.845 lat (msec) : 100=16.30%, 250=32.17%, 500=36.61%, 750=6.92%, 1000=0.92% 00:26:51.845 cpu : usr=0.82%, sys=0.92%, ctx=1529, majf=0, minf=1 00:26:51.845 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:26:51.845 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.845 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:51.845 issued rwts: total=0,2614,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:51.845 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:51.845 job6: (groupid=0, jobs=1): err= 0: pid=1437123: Sat Nov 2 14:42:43 2024 00:26:51.845 write: IOPS=230, BW=57.7MiB/s (60.5MB/s)(586MiB/10156msec); 0 zone resets 00:26:51.845 slat (usec): min=22, max=155391, avg=3283.76, stdev=9640.02 00:26:51.845 clat (msec): min=2, max=746, avg=272.63, stdev=177.50 00:26:51.845 lat (msec): min=2, max=746, avg=275.91, stdev=179.90 00:26:51.845 clat percentiles (msec): 00:26:51.845 | 1.00th=[ 7], 5.00th=[ 22], 10.00th=[ 63], 20.00th=[ 126], 00:26:51.845 | 30.00th=[ 153], 40.00th=[ 184], 50.00th=[ 215], 60.00th=[ 292], 00:26:51.845 | 70.00th=[ 384], 80.00th=[ 439], 90.00th=[ 514], 95.00th=[ 600], 00:26:51.845 | 99.00th=[ 726], 99.50th=[ 735], 99.90th=[ 743], 99.95th=[ 743], 00:26:51.845 | 99.99th=[ 743] 00:26:51.845 bw ( KiB/s): min=27648, max=102400, per=7.50%, avg=58413.10, stdev=26046.47, samples=20 00:26:51.845 iops : min= 108, max= 400, avg=228.10, stdev=101.76, samples=20 00:26:51.845 lat (msec) : 4=0.17%, 10=2.81%, 20=1.83%, 50=3.97%, 100=5.33% 00:26:51.845 lat (msec) : 250=40.04%, 500=34.50%, 750=11.34% 00:26:51.845 cpu : usr=0.78%, sys=0.82%, ctx=1192, majf=0, minf=1 00:26:51.845 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3% 00:26:51.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.845 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:51.845 issued rwts: total=0,2345,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:51.845 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:51.845 job7: (groupid=0, jobs=1): err= 0: pid=1437124: Sat Nov 2 14:42:43 2024 00:26:51.845 write: IOPS=201, BW=50.3MiB/s (52.8MB/s)(513MiB/10200msec); 0 zone resets 00:26:51.845 slat (usec): min=25, max=167734, avg=4118.44, stdev=11141.44 00:26:51.845 clat (usec): min=1273, max=743320, avg=313517.78, stdev=181557.21 00:26:51.845 lat (usec): min=1314, max=786382, avg=317636.22, stdev=184136.60 00:26:51.845 clat percentiles (msec): 00:26:51.845 | 1.00th=[ 6], 5.00th=[ 21], 10.00th=[ 45], 20.00th=[ 153], 00:26:51.845 | 30.00th=[ 194], 40.00th=[ 266], 50.00th=[ 326], 60.00th=[ 372], 00:26:51.845 | 70.00th=[ 430], 80.00th=[ 472], 90.00th=[ 542], 95.00th=[ 625], 00:26:51.845 | 99.00th=[ 726], 99.50th=[ 735], 99.90th=[ 743], 99.95th=[ 743], 00:26:51.845 | 99.99th=[ 743] 00:26:51.845 bw ( KiB/s): min=22528, max=152064, per=6.54%, avg=50937.15, stdev=31513.23, samples=20 00:26:51.845 iops : min= 88, max= 594, avg=198.90, stdev=123.12, samples=20 00:26:51.845 lat (msec) : 2=0.19%, 4=0.39%, 10=2.09%, 20=2.24%, 50=5.55% 00:26:51.845 lat (msec) : 100=5.60%, 250=22.07%, 500=47.88%, 750=13.98% 00:26:51.845 cpu : usr=0.61%, sys=0.84%, ctx=951, majf=0, minf=1 00:26:51.845 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.9% 00:26:51.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.845 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:51.845 issued rwts: total=0,2053,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:51.845 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:51.845 job8: (groupid=0, jobs=1): err= 0: pid=1437125: Sat Nov 2 14:42:43 2024 00:26:51.845 write: IOPS=177, BW=44.5MiB/s (46.6MB/s)(454MiB/10197msec); 
0 zone resets 00:26:51.845 slat (usec): min=25, max=409372, avg=4405.47, stdev=17055.85 00:26:51.845 clat (msec): min=12, max=872, avg=355.15, stdev=181.09 00:26:51.845 lat (msec): min=12, max=872, avg=359.55, stdev=182.94 00:26:51.845 clat percentiles (msec): 00:26:51.845 | 1.00th=[ 33], 5.00th=[ 64], 10.00th=[ 117], 20.00th=[ 201], 00:26:51.845 | 30.00th=[ 249], 40.00th=[ 296], 50.00th=[ 347], 60.00th=[ 405], 00:26:51.845 | 70.00th=[ 447], 80.00th=[ 485], 90.00th=[ 575], 95.00th=[ 659], 00:26:51.845 | 99.00th=[ 852], 99.50th=[ 860], 99.90th=[ 869], 99.95th=[ 877], 00:26:51.845 | 99.99th=[ 877] 00:26:51.845 bw ( KiB/s): min=14336, max=80896, per=5.75%, avg=44790.20, stdev=18763.89, samples=20 00:26:51.845 iops : min= 56, max= 316, avg=174.90, stdev=73.28, samples=20 00:26:51.845 lat (msec) : 20=0.44%, 50=2.37%, 100=6.39%, 250=21.11%, 500=53.53% 00:26:51.845 lat (msec) : 750=12.24%, 1000=3.91% 00:26:51.845 cpu : usr=0.51%, sys=0.63%, ctx=794, majf=0, minf=1 00:26:51.845 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.8%, >=64=96.5% 00:26:51.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.845 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:51.845 issued rwts: total=0,1814,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:51.845 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:51.845 job9: (groupid=0, jobs=1): err= 0: pid=1437126: Sat Nov 2 14:42:43 2024 00:26:51.845 write: IOPS=260, BW=65.1MiB/s (68.3MB/s)(660MiB/10137msec); 0 zone resets 00:26:51.845 slat (usec): min=24, max=100054, avg=2989.80, stdev=8473.31 00:26:51.845 clat (usec): min=1870, max=739012, avg=242402.16, stdev=168376.59 00:26:51.845 lat (usec): min=1915, max=739055, avg=245391.97, stdev=170674.39 00:26:51.845 clat percentiles (msec): 00:26:51.845 | 1.00th=[ 4], 5.00th=[ 28], 10.00th=[ 43], 20.00th=[ 99], 00:26:51.845 | 30.00th=[ 148], 40.00th=[ 174], 50.00th=[ 197], 60.00th=[ 239], 00:26:51.845 | 70.00th=[ 309], 80.00th=[ 388], 90.00th=[ 477], 95.00th=[ 575], 00:26:51.845 | 99.00th=[ 718], 99.50th=[ 726], 99.90th=[ 743], 99.95th=[ 743], 00:26:51.845 | 99.99th=[ 743] 00:26:51.845 bw ( KiB/s): min=22528, max=125440, per=8.47%, avg=65978.15, stdev=33804.25, samples=20 00:26:51.845 iops : min= 88, max= 490, avg=257.70, stdev=132.04, samples=20 00:26:51.845 lat (msec) : 2=0.04%, 4=1.29%, 10=1.21%, 20=1.33%, 50=8.37% 00:26:51.845 lat (msec) : 100=8.03%, 250=42.11%, 500=29.50%, 750=8.14% 00:26:51.845 cpu : usr=0.89%, sys=0.95%, ctx=1313, majf=0, minf=1 00:26:51.845 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:26:51.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.845 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:51.845 issued rwts: total=0,2641,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:51.845 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:51.845 job10: (groupid=0, jobs=1): err= 0: pid=1437127: Sat Nov 2 14:42:43 2024 00:26:51.845 write: IOPS=313, BW=78.4MiB/s (82.2MB/s)(795MiB/10137msec); 0 zone resets 00:26:51.845 slat (usec): min=17, max=307967, avg=1777.93, stdev=10135.37 00:26:51.845 clat (usec): min=1110, max=797587, avg=201655.15, stdev=194691.44 00:26:51.845 lat (usec): min=1148, max=797638, avg=203433.08, stdev=196461.77 00:26:51.845 clat percentiles (msec): 00:26:51.845 | 1.00th=[ 3], 5.00th=[ 5], 10.00th=[ 8], 20.00th=[ 14], 00:26:51.845 | 30.00th=[ 32], 40.00th=[ 94], 50.00th=[ 163], 60.00th=[ 218], 00:26:51.845 | 
70.00th=[ 288], 80.00th=[ 372], 90.00th=[ 489], 95.00th=[ 592], 00:26:51.845 | 99.00th=[ 735], 99.50th=[ 776], 99.90th=[ 793], 99.95th=[ 793], 00:26:51.845 | 99.99th=[ 802] 00:26:51.845 bw ( KiB/s): min=12312, max=255488, per=10.23%, avg=79730.90, stdev=58299.58, samples=20 00:26:51.845 iops : min= 48, max= 998, avg=311.40, stdev=227.76, samples=20 00:26:51.845 lat (msec) : 2=0.60%, 4=2.23%, 10=11.08%, 20=12.15%, 50=10.45% 00:26:51.845 lat (msec) : 100=4.56%, 250=24.51%, 500=25.58%, 750=7.99%, 1000=0.85% 00:26:51.845 cpu : usr=0.92%, sys=1.22%, ctx=2392, majf=0, minf=1 00:26:51.845 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:26:51.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.845 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:51.845 issued rwts: total=0,3178,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:51.845 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:51.845 00:26:51.846 Run status group 0 (all jobs): 00:26:51.846 WRITE: bw=761MiB/s (798MB/s), 44.5MiB/s-107MiB/s (46.6MB/s-112MB/s), io=7763MiB (8140MB), run=10131-10201msec 00:26:51.846 00:26:51.846 Disk stats (read/write): 00:26:51.846 nvme0n1: ios=48/4397, merge=0/0, ticks=2605/1232526, in_queue=1235131, util=100.00% 00:26:51.846 nvme10n1: ios=47/6558, merge=0/0, ticks=1959/1210251, in_queue=1212210, util=100.00% 00:26:51.846 nvme1n1: ios=43/8699, merge=0/0, ticks=1340/1255212, in_queue=1256552, util=100.00% 00:26:51.846 nvme2n1: ios=46/6111, merge=0/0, ticks=852/1205153, in_queue=1206005, util=100.00% 00:26:51.846 nvme3n1: ios=0/6532, merge=0/0, ticks=0/1203547, in_queue=1203547, util=97.91% 00:26:51.846 nvme4n1: ios=42/5077, merge=0/0, ticks=3418/1212876, in_queue=1216294, util=100.00% 00:26:51.846 nvme5n1: ios=42/4550, merge=0/0, ticks=1703/1192590, in_queue=1194293, util=100.00% 00:26:51.846 nvme6n1: ios=44/4092, merge=0/0, ticks=4352/1227385, in_queue=1231737, util=100.00% 00:26:51.846 nvme7n1: ios=42/3610, merge=0/0, ticks=2408/1209555, in_queue=1211963, util=100.00% 00:26:51.846 nvme8n1: ios=43/5100, merge=0/0, ticks=2443/1204631, in_queue=1207074, util=100.00% 00:26:51.846 nvme9n1: ios=45/6197, merge=0/0, ticks=3984/1186701, in_queue=1190685, util=100.00% 00:26:51.846 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:26:51.846 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:26:51.846 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:51.846 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:51.846 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:51.846 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:26:51.846 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:51.846 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:51.846 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:26:51.846 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:51.846 14:42:43 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:26:51.846 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:51.846 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:51.846 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.846 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:51.846 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.846 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:51.846 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:26:51.846 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:26:51.846 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:26:51.846 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:51.846 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:51.846 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:26:51.846 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:51.846 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:26:51.846 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:51.846 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:51.846 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.846 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:51.846 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.846 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:51.846 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:26:52.104 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:26:52.104 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:26:52.104 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:52.104 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:52.104 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:26:52.104 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:52.105 14:42:43 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:26:52.105 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:52.105 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:26:52.105 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.105 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:52.105 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.105 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:52.105 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:26:52.105 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:26:52.105 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:26:52.105 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:52.105 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:52.105 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:26:52.105 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:52.105 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:26:52.105 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:52.105 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:26:52.105 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.105 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:52.105 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.105 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:52.105 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:26:52.363 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:26:52.363 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:26:52.363 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:52.363 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:52.363 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:26:52.363 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:52.363 14:42:44 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:26:52.363 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:52.363 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:26:52.363 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.363 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:52.363 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.363 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:52.363 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:26:52.622 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:26:52.622 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:26:52.622 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:52.622 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:52.622 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:26:52.622 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:52.622 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:26:52.622 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:52.622 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:26:52.622 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.622 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:52.622 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.622 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:52.622 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:26:52.622 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:26:52.622 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:26:52.622 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:52.622 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:52.622 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:26:52.622 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:52.623 14:42:44 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:26:52.623 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:52.623 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:26:52.623 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.623 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:52.623 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.623 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:52.623 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:26:52.623 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:26:52.623 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:26:52.623 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:52.881 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:52.881 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:26:52.881 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:52.881 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:26:52.881 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:52.881 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:26:52.881 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.881 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:52.881 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.881 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:52.881 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:26:52.881 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:26:52.881 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:26:52.881 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:52.881 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:52.881 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:26:52.881 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:52.881 14:42:44 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:26:52.881 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:52.881 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:26:52.881 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.881 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:52.881 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.881 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:52.881 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:26:52.881 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:26:52.881 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:26:52.881 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:52.881 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:52.881 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:26:53.140 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:53.140 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:26:53.140 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:53.140 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:26:53.140 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.140 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:53.140 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.140 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:53.140 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:26:53.140 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:26:53.140 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:26:53.140 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:53.140 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:53.140 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:26:53.140 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:53.140 
14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:26:53.140 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:53.140 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:26:53.140 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.140 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:53.140 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.140 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:26:53.140 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:26:53.140 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:26:53.140 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # nvmfcleanup 00:26:53.140 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:26:53.140 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:53.140 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:26:53.140 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:53.140 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:53.140 rmmod nvme_tcp 00:26:53.140 rmmod nvme_fabrics 00:26:53.140 rmmod nvme_keyring 00:26:53.140 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:53.140 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:26:53.140 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:26:53.140 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@513 -- # '[' -n 1431605 ']' 00:26:53.140 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@514 -- # killprocess 1431605 00:26:53.140 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@950 -- # '[' -z 1431605 ']' 00:26:53.140 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # kill -0 1431605 00:26:53.140 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # uname 00:26:53.140 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:53.140 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1431605 00:26:53.140 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:53.140 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:53.140 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1431605' 00:26:53.140 killing process 
with pid 1431605 00:26:53.140 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@969 -- # kill 1431605 00:26:53.140 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@974 -- # wait 1431605 00:26:53.706 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:26:53.706 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:26:53.706 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:26:53.706 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:26:53.706 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@787 -- # iptables-save 00:26:53.706 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:26:53.706 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@787 -- # iptables-restore 00:26:53.706 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:53.706 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:53.706 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:53.706 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:53.706 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:56.240 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:56.240 00:26:56.240 real 1m0.155s 00:26:56.240 user 3m26.655s 00:26:56.240 sys 0m16.390s 00:26:56.240 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:56.240 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:56.240 ************************************ 00:26:56.240 END TEST nvmf_multiconnection 00:26:56.240 ************************************ 00:26:56.240 14:42:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:56.240 14:42:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:56.240 14:42:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:56.240 14:42:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:56.240 ************************************ 00:26:56.240 START TEST nvmf_initiator_timeout 00:26:56.240 ************************************ 00:26:56.240 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:56.240 * Looking for test storage... 
00:26:56.240 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:56.240 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:56.240 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # lcov --version 00:26:56.240 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:56.240 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:56.240 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:56.240 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:56.240 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:56.240 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:26:56.240 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:26:56.240 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:26:56.240 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:26:56.240 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:26:56.240 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:26:56.240 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:26:56.240 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:56.240 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:26:56.240 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:26:56.240 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:56.240 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:56.240 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:26:56.240 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:26:56.240 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:56.240 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:26:56.240 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:26:56.240 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:26:56.240 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:26:56.240 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:56.240 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:26:56.240 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:26:56.240 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:56.240 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:56.240 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:26:56.240 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:56.240 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:56.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.241 --rc genhtml_branch_coverage=1 00:26:56.241 --rc genhtml_function_coverage=1 00:26:56.241 --rc genhtml_legend=1 00:26:56.241 --rc geninfo_all_blocks=1 00:26:56.241 --rc geninfo_unexecuted_blocks=1 00:26:56.241 00:26:56.241 ' 00:26:56.241 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:56.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.241 --rc genhtml_branch_coverage=1 00:26:56.241 --rc genhtml_function_coverage=1 00:26:56.241 --rc genhtml_legend=1 00:26:56.241 --rc geninfo_all_blocks=1 00:26:56.241 --rc geninfo_unexecuted_blocks=1 00:26:56.241 00:26:56.241 ' 00:26:56.241 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:56.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.241 --rc genhtml_branch_coverage=1 00:26:56.241 --rc genhtml_function_coverage=1 00:26:56.241 --rc genhtml_legend=1 00:26:56.241 --rc geninfo_all_blocks=1 00:26:56.241 --rc geninfo_unexecuted_blocks=1 00:26:56.241 00:26:56.241 ' 00:26:56.241 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:56.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.241 --rc genhtml_branch_coverage=1 00:26:56.241 --rc genhtml_function_coverage=1 00:26:56.241 --rc genhtml_legend=1 00:26:56.241 --rc geninfo_all_blocks=1 00:26:56.241 --rc geninfo_unexecuted_blocks=1 00:26:56.241 00:26:56.241 ' 00:26:56.241 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:56.241 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:26:56.241 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:56.241 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:56.241 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:56.241 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:56.241 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:56.241 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:56.241 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:56.241 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:56.241 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:56.241 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:56.241 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:56.241 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:56.241 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:56.241 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:56.241 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:56.241 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:56.241 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:56.241 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:26:56.241 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:56.241 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:56.241 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:56.241 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.241 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.241 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.241 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:26:56.241 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.241 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:26:56.241 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:56.241 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:56.241 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:56.241 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:56.241 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:56.241 14:42:47 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:56.241 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:56.241 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:56.241 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:56.241 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:56.241 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:56.241 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:56.241 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:26:56.241 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:26:56.241 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:56.241 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@472 -- # prepare_net_devs 00:26:56.241 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@434 -- # local -g is_hw=no 00:26:56.241 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@436 -- # remove_spdk_ns 00:26:56.241 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:56.241 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:56.241 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:56.241 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:26:56.241 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:26:56.241 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:26:56.241 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:58.148 14:42:49 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:58.148 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:58.148 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ up == up ]] 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:58.148 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ up == up ]] 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # (( 
1 == 0 )) 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:58.148 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # is_hw=yes 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:58.148 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:58.149 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:58.149 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:58.149 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:58.149 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:58.149 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:58.149 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:58.149 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:58.149 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:58.149 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:58.149 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:58.149 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:58.149 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:58.149 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:58.149 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:58.149 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:58.149 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:58.149 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:58.149 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:58.149 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:58.149 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:58.149 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:26:58.149 00:26:58.149 --- 10.0.0.2 ping statistics --- 00:26:58.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:58.149 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:26:58.149 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:58.149 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:58.149 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:26:58.149 00:26:58.149 --- 10.0.0.1 ping statistics --- 00:26:58.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:58.149 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:26:58.149 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:58.149 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # return 0 00:26:58.149 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:26:58.149 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:58.149 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:26:58.149 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:26:58.149 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:58.149 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:26:58.149 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:26:58.149 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:26:58.149 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:26:58.149 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:58.149 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:58.149 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@505 -- # nvmfpid=1440293 00:26:58.149 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:58.149 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@506 -- # 
waitforlisten 1440293 00:26:58.149 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # '[' -z 1440293 ']' 00:26:58.149 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:58.149 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:58.149 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:58.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:58.149 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:58.149 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:58.459 [2024-11-02 14:42:50.236811] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:26:58.459 [2024-11-02 14:42:50.236896] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:58.459 [2024-11-02 14:42:50.309801] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:58.459 [2024-11-02 14:42:50.403923] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:58.459 [2024-11-02 14:42:50.403982] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:58.459 [2024-11-02 14:42:50.403999] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:58.459 [2024-11-02 14:42:50.404012] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:58.459 [2024-11-02 14:42:50.404023] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:58.459 [2024-11-02 14:42:50.404089] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:58.459 [2024-11-02 14:42:50.404145] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:26:58.459 [2024-11-02 14:42:50.404273] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:26:58.459 [2024-11-02 14:42:50.404277] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:58.739 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:58.739 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # return 0 00:26:58.739 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:26:58.739 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:58.739 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:58.740 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:58.740 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:26:58.740 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:58.740 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.740 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:58.740 Malloc0 00:26:58.740 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.740 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:26:58.740 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.740 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:58.740 Delay0 00:26:58.740 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.740 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:58.740 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.740 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:58.740 [2024-11-02 14:42:50.600560] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:58.740 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.740 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:26:58.740 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.740 14:42:50 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:58.740 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.740 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:58.740 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.740 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:58.740 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.740 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:58.740 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.740 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:58.740 [2024-11-02 14:42:50.628835] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:58.740 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.740 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:59.309 14:42:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:26:59.309 14:42:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:26:59.309 14:42:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:59.309 14:42:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:59.309 14:42:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:27:01.211 14:42:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:01.211 14:42:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:01.211 14:42:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:27:01.211 14:42:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:01.211 14:42:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:01.211 14:42:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:27:01.211 14:42:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=1440604 00:27:01.211 14:42:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 
1 -t write -r 60 -v 00:27:01.211 14:42:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:27:01.470 [global] 00:27:01.470 thread=1 00:27:01.470 invalidate=1 00:27:01.470 rw=write 00:27:01.470 time_based=1 00:27:01.470 runtime=60 00:27:01.470 ioengine=libaio 00:27:01.470 direct=1 00:27:01.470 bs=4096 00:27:01.470 iodepth=1 00:27:01.470 norandommap=0 00:27:01.470 numjobs=1 00:27:01.470 00:27:01.470 verify_dump=1 00:27:01.470 verify_backlog=512 00:27:01.470 verify_state_save=0 00:27:01.470 do_verify=1 00:27:01.470 verify=crc32c-intel 00:27:01.470 [job0] 00:27:01.470 filename=/dev/nvme0n1 00:27:01.470 Could not set queue depth (nvme0n1) 00:27:01.470 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:01.470 fio-3.35 00:27:01.470 Starting 1 thread 00:27:04.759 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:27:04.759 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.759 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:04.759 true 00:27:04.759 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.759 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:27:04.759 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.759 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:04.759 true 00:27:04.759 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.759 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:27:04.759 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.759 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:04.759 true 00:27:04.759 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.759 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:27:04.759 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.759 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:04.759 true 00:27:04.759 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.759 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:27:07.289 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:27:07.289 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.289 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout 
-- common/autotest_common.sh@10 -- # set +x 00:27:07.289 true 00:27:07.289 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.289 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:27:07.289 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.289 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:07.289 true 00:27:07.289 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.289 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:27:07.289 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.289 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:07.289 true 00:27:07.289 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.289 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:27:07.289 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.289 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:07.289 true 00:27:07.289 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.289 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:27:07.289 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 1440604 00:28:03.530 00:28:03.530 job0: (groupid=0, jobs=1): err= 0: pid=1440708: Sat Nov 2 14:43:53 2024 00:28:03.530 read: IOPS=24, BW=99.4KiB/s (102kB/s)(5968KiB/60022msec) 00:28:03.530 slat (usec): min=4, max=13828, avg=27.87, stdev=357.66 00:28:03.530 clat (usec): min=329, max=41349k, avg=39950.80, stdev=1070321.31 00:28:03.530 lat (usec): min=334, max=41349k, avg=39978.67, stdev=1070321.86 00:28:03.530 clat percentiles (usec): 00:28:03.530 | 1.00th=[ 343], 5.00th=[ 367], 10.00th=[ 379], 00:28:03.530 | 20.00th=[ 396], 30.00th=[ 412], 40.00th=[ 445], 00:28:03.530 | 50.00th=[ 457], 60.00th=[ 482], 70.00th=[ 537], 00:28:03.530 | 80.00th=[ 41157], 90.00th=[ 41157], 95.00th=[ 42206], 00:28:03.530 | 99.00th=[ 42206], 99.50th=[ 42206], 99.90th=[ 42730], 00:28:03.530 | 99.95th=[17112761], 99.99th=[17112761] 00:28:03.530 write: IOPS=25, BW=102KiB/s (105kB/s)(6144KiB/60022msec); 0 zone resets 00:28:03.530 slat (nsec): min=5594, max=65190, avg=9910.75, stdev=5307.51 00:28:03.530 clat (usec): min=195, max=411, avg=224.94, stdev=15.90 00:28:03.530 lat (usec): min=201, max=476, avg=234.85, stdev=18.81 00:28:03.530 clat percentiles (usec): 00:28:03.530 | 1.00th=[ 202], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 215], 00:28:03.530 | 30.00th=[ 217], 40.00th=[ 221], 50.00th=[ 223], 60.00th=[ 227], 00:28:03.530 | 70.00th=[ 231], 80.00th=[ 235], 90.00th=[ 241], 95.00th=[ 247], 00:28:03.530 | 99.00th=[ 273], 99.50th=[ 285], 99.90th=[ 408], 99.95th=[ 412], 00:28:03.530 | 99.99th=[ 
412] 00:28:03.530 bw ( KiB/s): min= 4096, max= 8192, per=100.00%, avg=6144.00, stdev=2896.31, samples=2 00:28:03.530 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:28:03.530 lat (usec) : 250=48.78%, 500=34.97%, 750=1.95% 00:28:03.530 lat (msec) : 50=14.27%, >=2000=0.03% 00:28:03.530 cpu : usr=0.04%, sys=0.08%, ctx=3029, majf=0, minf=1 00:28:03.530 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:03.530 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:03.530 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:03.530 issued rwts: total=1492,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:03.530 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:03.530 00:28:03.530 Run status group 0 (all jobs): 00:28:03.530 READ: bw=99.4KiB/s (102kB/s), 99.4KiB/s-99.4KiB/s (102kB/s-102kB/s), io=5968KiB (6111kB), run=60022-60022msec 00:28:03.530 WRITE: bw=102KiB/s (105kB/s), 102KiB/s-102KiB/s (105kB/s-105kB/s), io=6144KiB (6291kB), run=60022-60022msec 00:28:03.530 00:28:03.530 Disk stats (read/write): 00:28:03.530 nvme0n1: ios=1588/1536, merge=0/0, ticks=18507/321, in_queue=18828, util=99.69% 00:28:03.531 14:43:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:28:03.531 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:03.531 14:43:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:28:03.531 14:43:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:28:03.531 14:43:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:03.531 14:43:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:03.531 14:43:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:03.531 14:43:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:03.531 14:43:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:28:03.531 14:43:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:28:03.531 14:43:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:28:03.531 nvmf hotplug test: fio successful as expected 00:28:03.531 14:43:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:03.531 14:43:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.531 14:43:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:03.531 14:43:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.531 14:43:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:28:03.531 14:43:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM 
EXIT 00:28:03.531 14:43:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:28:03.531 14:43:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # nvmfcleanup 00:28:03.531 14:43:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:28:03.531 14:43:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:03.531 14:43:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:28:03.531 14:43:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:03.531 14:43:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:03.531 rmmod nvme_tcp 00:28:03.531 rmmod nvme_fabrics 00:28:03.531 rmmod nvme_keyring 00:28:03.531 14:43:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:03.531 14:43:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:28:03.531 14:43:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:28:03.531 14:43:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@513 -- # '[' -n 1440293 ']' 00:28:03.531 14:43:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@514 -- # killprocess 1440293 00:28:03.531 14:43:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # '[' -z 1440293 ']' 00:28:03.531 14:43:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # kill -0 1440293 00:28:03.531 14:43:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # uname 00:28:03.531 14:43:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:03.531 14:43:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1440293 00:28:03.531 14:43:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:03.531 14:43:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:03.531 14:43:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1440293' 00:28:03.531 killing process with pid 1440293 00:28:03.531 14:43:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@969 -- # kill 1440293 00:28:03.531 14:43:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@974 -- # wait 1440293 00:28:03.531 14:43:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:28:03.531 14:43:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:28:03.531 14:43:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:28:03.531 14:43:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:28:03.531 14:43:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@787 -- # iptables-save 00:28:03.531 14:43:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:28:03.531 14:43:54 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@787 -- # iptables-restore 00:28:03.531 14:43:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:03.531 14:43:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:03.531 14:43:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:03.531 14:43:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:03.531 14:43:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:04.469 14:43:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:04.469 00:28:04.469 real 1m8.346s 00:28:04.469 user 4m11.608s 00:28:04.469 sys 0m5.993s 00:28:04.469 14:43:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:04.469 14:43:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:04.469 ************************************ 00:28:04.469 END TEST nvmf_initiator_timeout 00:28:04.469 ************************************ 00:28:04.469 14:43:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:28:04.469 14:43:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:28:04.470 14:43:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:28:04.470 14:43:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:28:04.470 14:43:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:06.375 14:43:58 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:06.375 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:06.375 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 
00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:06.375 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:06.375 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:06.375 ************************************ 00:28:06.375 START TEST nvmf_perf_adq 00:28:06.375 ************************************ 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:28:06.375 * Looking for test storage... 
00:28:06.375 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # lcov --version 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:28:06.375 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:28:06.376 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:28:06.376 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:28:06.376 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:28:06.376 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:28:06.376 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:28:06.376 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:06.376 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:28:06.376 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:28:06.376 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:06.376 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:06.376 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:28:06.376 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:28:06.376 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:06.376 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:28:06.376 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:28:06.376 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:28:06.376 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:28:06.376 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:06.376 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:28:06.376 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:28:06.376 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:06.376 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:06.376 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:28:06.376 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:06.376 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:06.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:06.376 --rc genhtml_branch_coverage=1 00:28:06.376 --rc genhtml_function_coverage=1 00:28:06.376 --rc genhtml_legend=1 00:28:06.376 --rc geninfo_all_blocks=1 00:28:06.376 --rc geninfo_unexecuted_blocks=1 00:28:06.376 00:28:06.376 ' 00:28:06.376 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:06.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:06.376 --rc genhtml_branch_coverage=1 00:28:06.376 --rc genhtml_function_coverage=1 00:28:06.376 --rc genhtml_legend=1 00:28:06.376 --rc geninfo_all_blocks=1 00:28:06.376 --rc geninfo_unexecuted_blocks=1 00:28:06.376 00:28:06.376 ' 00:28:06.376 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:06.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:06.376 --rc genhtml_branch_coverage=1 00:28:06.376 --rc genhtml_function_coverage=1 00:28:06.376 --rc genhtml_legend=1 00:28:06.376 --rc geninfo_all_blocks=1 00:28:06.376 --rc geninfo_unexecuted_blocks=1 00:28:06.376 00:28:06.376 ' 00:28:06.376 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:06.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:06.376 --rc genhtml_branch_coverage=1 00:28:06.376 --rc genhtml_function_coverage=1 00:28:06.376 --rc genhtml_legend=1 00:28:06.376 --rc geninfo_all_blocks=1 00:28:06.376 --rc geninfo_unexecuted_blocks=1 00:28:06.376 00:28:06.376 ' 00:28:06.376 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:06.376 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 
00:28:06.376 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:06.376 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:06.376 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:06.376 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:06.376 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:06.376 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:06.376 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:06.376 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:06.376 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:06.376 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:06.376 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:06.376 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:06.376 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:06.376 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:06.376 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:06.376 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:06.376 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:06.376 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:28:06.376 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:06.376 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:06.376 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:06.376 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.376 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.376 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.376 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:28:06.376 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.376 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:28:06.376 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:06.376 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:06.376 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:06.376 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:06.376 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:06.376 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:06.376 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:06.376 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:06.376 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:06.376 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:06.376 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:28:06.376 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:06.376 14:43:58 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:08.922 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:08.922 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:08.922 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:08.922 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:08.922 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:08.922 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:08.922 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:08.922 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:08.922 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:08.922 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:08.922 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:08.922 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:08.922 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:08.922 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:08.922 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:08.922 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:08.922 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:08.922 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:08.922 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:08.923 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:08.923 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:08.923 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:08.923 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:08.923 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:08.923 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:08.923 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:08.923 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:28:08.923 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:28:08.923 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:28:08.923 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:28:08.923 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:28:08.923 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:28:08.923 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:08.923 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:08.923 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:08.923 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:28:08.923 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:28:08.923 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:08.923 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:08.923 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:08.923 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:08.923 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:08.923 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:08.923 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:28:08.923 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:28:08.923 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:08.923 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:08.923 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:08.923 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:28:08.923 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:28:08.923 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:28:08.923 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:08.923 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:08.923 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:08.923 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:08.923 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:08.923 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:08.923 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:08.923 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:08.923 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:08.923 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:08.923 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:08.923 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:08.923 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:08.923 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:08.923 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:08.923 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:08.923 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:08.923 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:08.923 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:08.923 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:08.923 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:28:08.923 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:08.923 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:28:08.923 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:28:08.923 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:28:08.923 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:28:08.923 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:28:09.182 14:44:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:28:11.719 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:28:16.993 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:28:16.993 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:28:16.993 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:16.993 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # prepare_net_devs 00:28:16.993 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@434 -- # local -g is_hw=no 00:28:16.993 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # remove_spdk_ns 00:28:16.993 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:16.993 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:16.993 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:16.993 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:28:16.993 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:28:16.993 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:16.993 14:44:08 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:16.993 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:16.993 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:16.993 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:16.993 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:16.993 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:16.993 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:16.993 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:16.994 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:16.994 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:16.994 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:16.994 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # is_hw=yes 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev 
cvl_0_1 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:16.994 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:16.994 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:28:16.994 00:28:16.994 --- 10.0.0.2 ping statistics --- 00:28:16.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:16.994 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:16.994 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:16.994 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:28:16.994 00:28:16.994 --- 10.0.0.1 ping statistics --- 00:28:16.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:16.994 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # return 0 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:28:16.994 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:16.995 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:16.995 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # nvmfpid=1452319 00:28:16.995 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@504 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:16.995 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # waitforlisten 1452319 00:28:16.995 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 1452319 ']' 00:28:16.995 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:16.995 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:16.995 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:16.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:16.995 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:16.995 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:16.995 [2024-11-02 14:44:08.681225] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:28:16.995 [2024-11-02 14:44:08.681315] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:16.995 [2024-11-02 14:44:08.747675] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:16.995 [2024-11-02 14:44:08.838557] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:16.995 [2024-11-02 14:44:08.838634] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:16.995 [2024-11-02 14:44:08.838650] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:16.995 [2024-11-02 14:44:08.838664] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:16.995 [2024-11-02 14:44:08.838676] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
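Before the target app is launched above, nvmf_tcp_init has wired the two E810 ports together through a network namespace instead of a veth pair: the port that becomes the target side (cvl_0_0 here) is moved into cvl_0_0_ns_spdk and addressed as 10.0.0.2, the initiator keeps cvl_0_1 as 10.0.0.1, an iptables rule admits TCP/4420, and both directions are ping-checked. A condensed sketch of that wiring, assuming the interface names reported in this run:

# Condensed replay of the namespace wiring traced above; cvl_0_0/cvl_0_1 are
# the net devices this run reported under 0000:0a:00.0 and 0000:0a:00.1.
TGT_IF=cvl_0_0                 # moved into the namespace, target side (10.0.0.2)
INI_IF=cvl_0_1                 # stays in the root namespace, initiator side (10.0.0.1)
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
# Let the NVMe/TCP traffic through on the initiator side, then sanity-check both directions.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1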
00:28:16.995 [2024-11-02 14:44:08.838764] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:16.995 [2024-11-02 14:44:08.838830] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:28:16.995 [2024-11-02 14:44:08.838927] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:28:16.995 [2024-11-02 14:44:08.838929] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:16.995 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:16.995 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:28:16.995 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:28:16.995 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:16.995 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:16.995 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:16.995 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:28:16.995 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:16.995 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:16.995 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.995 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:16.995 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.995 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:16.995 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:28:16.995 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.995 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:16.995 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.995 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:16.995 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.995 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:17.255 14:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.255 14:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:28:17.255 14:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.255 14:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:17.255 [2024-11-02 14:44:09.087075] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:17.255 14:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:28:17.255 14:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:17.255 14:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.255 14:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:17.255 Malloc1 00:28:17.255 14:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.255 14:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:17.255 14:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.255 14:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:17.255 14:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.255 14:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:17.255 14:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.255 14:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:17.255 14:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.255 14:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:17.255 14:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.255 14:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:17.256 [2024-11-02 14:44:09.140439] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:17.256 14:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.256 14:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=1452358 00:28:17.256 14:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:28:17.256 14:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:19.161 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:28:19.161 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.161 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:19.161 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.161 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:28:19.161 "tick_rate": 2700000000, 00:28:19.161 "poll_groups": [ 00:28:19.161 { 00:28:19.161 "name": "nvmf_tgt_poll_group_000", 00:28:19.161 "admin_qpairs": 1, 00:28:19.161 "io_qpairs": 1, 00:28:19.161 "current_admin_qpairs": 1, 00:28:19.161 "current_io_qpairs": 1, 00:28:19.161 "pending_bdev_io": 0, 00:28:19.161 
"completed_nvme_io": 16821, 00:28:19.161 "transports": [ 00:28:19.161 { 00:28:19.161 "trtype": "TCP" 00:28:19.161 } 00:28:19.161 ] 00:28:19.161 }, 00:28:19.161 { 00:28:19.161 "name": "nvmf_tgt_poll_group_001", 00:28:19.161 "admin_qpairs": 0, 00:28:19.161 "io_qpairs": 1, 00:28:19.161 "current_admin_qpairs": 0, 00:28:19.161 "current_io_qpairs": 1, 00:28:19.161 "pending_bdev_io": 0, 00:28:19.161 "completed_nvme_io": 19800, 00:28:19.161 "transports": [ 00:28:19.161 { 00:28:19.161 "trtype": "TCP" 00:28:19.161 } 00:28:19.161 ] 00:28:19.161 }, 00:28:19.161 { 00:28:19.161 "name": "nvmf_tgt_poll_group_002", 00:28:19.161 "admin_qpairs": 0, 00:28:19.161 "io_qpairs": 1, 00:28:19.161 "current_admin_qpairs": 0, 00:28:19.161 "current_io_qpairs": 1, 00:28:19.161 "pending_bdev_io": 0, 00:28:19.161 "completed_nvme_io": 20242, 00:28:19.161 "transports": [ 00:28:19.161 { 00:28:19.161 "trtype": "TCP" 00:28:19.161 } 00:28:19.161 ] 00:28:19.161 }, 00:28:19.161 { 00:28:19.161 "name": "nvmf_tgt_poll_group_003", 00:28:19.161 "admin_qpairs": 0, 00:28:19.161 "io_qpairs": 1, 00:28:19.161 "current_admin_qpairs": 0, 00:28:19.161 "current_io_qpairs": 1, 00:28:19.161 "pending_bdev_io": 0, 00:28:19.161 "completed_nvme_io": 18817, 00:28:19.161 "transports": [ 00:28:19.161 { 00:28:19.161 "trtype": "TCP" 00:28:19.161 } 00:28:19.161 ] 00:28:19.161 } 00:28:19.161 ] 00:28:19.161 }' 00:28:19.161 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:28:19.161 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:28:19.161 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:28:19.161 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:28:19.161 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 1452358 00:28:27.280 Initializing NVMe Controllers 00:28:27.280 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:27.280 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:27.280 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:27.280 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:27.280 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:27.280 Initialization complete. Launching workers. 
00:28:27.280 ======================================================== 00:28:27.280 Latency(us) 00:28:27.280 Device Information : IOPS MiB/s Average min max 00:28:27.280 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 9928.10 38.78 6448.27 3143.85 9751.90 00:28:27.280 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10499.10 41.01 6096.40 2649.51 8194.29 00:28:27.280 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10705.30 41.82 5979.52 1735.29 7708.89 00:28:27.280 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 8887.80 34.72 7202.11 1730.67 11474.78 00:28:27.280 ======================================================== 00:28:27.280 Total : 40020.30 156.33 6397.98 1730.67 11474.78 00:28:27.280 00:28:27.280 14:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:28:27.280 14:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # nvmfcleanup 00:28:27.280 14:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:28:27.280 14:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:27.280 14:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:28:27.280 14:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:27.281 14:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:27.281 rmmod nvme_tcp 00:28:27.281 rmmod nvme_fabrics 00:28:27.281 rmmod nvme_keyring 00:28:27.281 14:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:27.538 14:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:28:27.538 14:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:28:27.538 14:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@513 -- # '[' -n 1452319 ']' 00:28:27.539 14:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # killprocess 1452319 00:28:27.539 14:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 1452319 ']' 00:28:27.539 14:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 1452319 00:28:27.539 14:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:28:27.539 14:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:27.539 14:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1452319 00:28:27.539 14:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:27.539 14:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:27.539 14:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1452319' 00:28:27.539 killing process with pid 1452319 00:28:27.539 14:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 1452319 00:28:27.539 14:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 1452319 00:28:27.796 14:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:28:27.796 14:44:19 
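The configuration and check that produced the run above condense to a short RPC sequence: rpc_cmd in the trace resolves to scripts/rpc.py calls against the target's default RPC socket, the posix sock layer is given its placement-id and zero-copy options, the TCP transport is created with --sock-priority 0, a 64 MB malloc bdev is exported as nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420, and nvmf_get_stats is parsed to confirm that each of the four poll groups owns exactly one I/O qpair while spdk_nvme_perf drives four initiator cores. A condensed sketch, assuming the commands are issued from the SPDK repository root:

# Condensed replay of the target-side RPC configuration and the ADQ placement
# check traced above (path assumes the SPDK repository root; the RPC socket is
# the default /var/tmp/spdk.sock of the nvmf_tgt instance started above).
RPC="./scripts/rpc.py"

$RPC sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix
$RPC framework_start_init
$RPC nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
$RPC bdev_malloc_create 64 512 -b Malloc1
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# With spdk_nvme_perf running as above (-q 64 -o 4096 -w randread -t 10 -c 0xF0),
# every one of the four target poll groups should carry exactly one I/O qpair:
count=$($RPC nvmf_get_stats | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' | wc -l)
[[ $count -eq 4 ]] || echo "ADQ placement check failed: only $count of 4 poll groups have an active I/O qpair"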
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:28:27.796 14:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:28:27.796 14:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:27.796 14:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # iptables-save 00:28:27.796 14:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # iptables-restore 00:28:27.796 14:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:28:27.796 14:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:27.796 14:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:27.796 14:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:27.796 14:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:27.796 14:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:29.702 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:29.702 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:28:29.702 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:28:29.702 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:28:30.640 14:44:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:28:33.172 14:44:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:28:38.505 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:28:38.505 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:28:38.505 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:38.505 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # prepare_net_devs 00:28:38.505 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@434 -- # local -g is_hw=no 00:28:38.505 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # remove_spdk_ns 00:28:38.505 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:38.505 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:38.505 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:38.505 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:28:38.505 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:28:38.505 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:38.505 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:38.505 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:38.505 14:44:29 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:38.505 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:38.505 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:38.505 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:38.505 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:38.505 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:38.505 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:38.505 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:38.505 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:38.505 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:38.505 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:38.505 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:38.505 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:38.505 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:38.505 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:38.505 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:38.505 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:38.505 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:38.505 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:38.505 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:38.505 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:38.505 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:38.505 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:38.505 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:38.505 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:38.505 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:28:38.505 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:28:38.505 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:28:38.505 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:28:38.505 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:28:38.505 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@359 -- # 
(( 2 == 0 )) 00:28:38.505 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:38.505 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:38.505 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:38.505 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:28:38.505 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:28:38.505 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:38.505 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:38.505 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:38.505 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:38.505 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:38.505 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:38.505 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:28:38.505 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:28:38.505 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:38.506 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:38.506 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:38.506 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:28:38.506 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:28:38.506 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:28:38.506 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:38.506 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:38.506 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:38.506 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:38.506 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:38.506 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:38.506 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:38.506 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:38.506 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:38.506 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:38.506 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:38.506 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:38.506 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:38.506 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:38.506 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:38.506 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:38.506 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:38.506 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:38.506 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:38.506 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:38.506 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:28:38.506 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # is_hw=yes 00:28:38.506 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:28:38.506 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:28:38.506 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:28:38.506 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:38.506 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:38.506 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:38.506 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:38.506 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:38.506 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:38.506 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:38.506 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:38.506 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:38.506 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:38.506 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:38.506 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:38.506 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:38.506 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:38.506 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:38.506 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:38.506 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:38.506 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link 
set cvl_0_1 up 00:28:38.506 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:38.506 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:38.506 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:38.506 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:38.506 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:38.506 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:38.506 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:28:38.506 00:28:38.506 --- 10.0.0.2 ping statistics --- 00:28:38.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:38.506 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:28:38.506 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:38.506 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:38.506 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:28:38.506 00:28:38.506 --- 10.0.0.1 ping statistics --- 00:28:38.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:38.506 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:28:38.506 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:38.506 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # return 0 00:28:38.506 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:28:38.506 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:38.506 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:28:38.506 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:28:38.506 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:38.506 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:28:38.506 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:28:38.506 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:28:38.506 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:28:38.506 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:28:38.506 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:28:38.506 net.core.busy_poll = 1 00:28:38.506 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:28:38.506 net.core.busy_read = 1 00:28:38.506 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:28:38.506 14:44:29 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:28:38.506 14:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:28:38.506 14:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:28:38.506 14:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:28:38.506 14:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:38.506 14:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:28:38.506 14:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:38.506 14:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:38.506 14:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # nvmfpid=1455079 00:28:38.506 14:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:38.506 14:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # waitforlisten 1455079 00:28:38.506 14:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 1455079 ']' 00:28:38.506 14:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:38.506 14:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:38.506 14:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:38.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:38.506 14:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:38.506 14:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:38.506 [2024-11-02 14:44:30.255094] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:28:38.506 [2024-11-02 14:44:30.255183] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:38.506 [2024-11-02 14:44:30.330569] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:38.506 [2024-11-02 14:44:30.424200] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:38.506 [2024-11-02 14:44:30.424284] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
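The adq_configure_driver step traced above prepares the NIC side of ADQ before the target application is started: it enables hardware TC offload, turns off the driver's channel-pkt-inspect-optimize private flag, switches on kernel busy polling, and then builds a two-class mqprio channel layout with a flower filter that pins NVMe/TCP traffic for 10.0.0.2:4420 into the second traffic class, before running scripts/perf/nvmf/set_xps_rxqs to line transmit-queue selection up with those receive queues. Condensed into plain commands (the cvl_0_0 interface, the cvl_0_0_ns_spdk namespace, and the 10.0.0.2 address are specific to this run):

    ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
    ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1
    ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 ingress
    ip netns exec cvl_0_0_ns_spdk tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1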
00:28:38.506 [2024-11-02 14:44:30.424301] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:38.506 [2024-11-02 14:44:30.424315] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:38.506 [2024-11-02 14:44:30.424326] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:38.507 [2024-11-02 14:44:30.424434] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:38.507 [2024-11-02 14:44:30.424504] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:28:38.507 [2024-11-02 14:44:30.424612] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:28:38.507 [2024-11-02 14:44:30.424615] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:38.507 14:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:38.507 14:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:28:38.507 14:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:28:38.507 14:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:38.507 14:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:38.507 14:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:38.507 14:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:28:38.507 14:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:38.507 14:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:38.507 14:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.507 14:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:38.507 14:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.788 14:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:38.788 14:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:28:38.788 14:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.788 14:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:38.788 14:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.788 14:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:38.788 14:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.788 14:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:38.788 14:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.788 14:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:28:38.788 14:44:30 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.788 14:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:38.788 [2024-11-02 14:44:30.687152] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:38.788 14:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.788 14:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:38.788 14:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.788 14:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:38.788 Malloc1 00:28:38.788 14:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.788 14:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:38.788 14:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.788 14:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:38.788 14:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.788 14:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:38.788 14:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.788 14:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:38.788 14:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.788 14:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:38.788 14:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.788 14:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:38.788 [2024-11-02 14:44:30.740152] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:38.788 14:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.788 14:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=1455112 00:28:38.788 14:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:28:38.788 14:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:40.698 14:44:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:28:40.698 14:44:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.698 14:44:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:40.958 14:44:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
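On the target side, ADQ is wired up entirely through RPCs once nvmf_tgt is running with --wait-for-rpc: the posix sock implementation gets placement IDs and server-side zero-copy enabled before framework_start_init, the TCP transport is created with a non-default socket priority, and a single Malloc-backed subsystem is exposed on 10.0.0.2:4420. The rpc_cmd calls above are roughly equivalent to running scripts/rpc.py against the target's RPC socket with the same arguments:

    rpc.py sock_impl_set_options -i posix --enable-placement-id 1 --enable-zerocopy-send-server
    rpc.py framework_start_init
    rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The nvmf_get_stats output that follows is filtered with jq for poll groups whose current_io_qpairs is 0: with ADQ steering in effect, the spdk_nvme_perf randread load is expected to land on only some of the four poll groups, and the test treats fewer than two idle groups as a failure of that steering.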
00:28:40.958 14:44:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:28:40.958 "tick_rate": 2700000000, 00:28:40.958 "poll_groups": [ 00:28:40.958 { 00:28:40.958 "name": "nvmf_tgt_poll_group_000", 00:28:40.958 "admin_qpairs": 1, 00:28:40.958 "io_qpairs": 2, 00:28:40.958 "current_admin_qpairs": 1, 00:28:40.958 "current_io_qpairs": 2, 00:28:40.958 "pending_bdev_io": 0, 00:28:40.958 "completed_nvme_io": 25616, 00:28:40.958 "transports": [ 00:28:40.958 { 00:28:40.958 "trtype": "TCP" 00:28:40.958 } 00:28:40.958 ] 00:28:40.958 }, 00:28:40.958 { 00:28:40.958 "name": "nvmf_tgt_poll_group_001", 00:28:40.958 "admin_qpairs": 0, 00:28:40.958 "io_qpairs": 2, 00:28:40.958 "current_admin_qpairs": 0, 00:28:40.958 "current_io_qpairs": 2, 00:28:40.958 "pending_bdev_io": 0, 00:28:40.958 "completed_nvme_io": 25462, 00:28:40.958 "transports": [ 00:28:40.958 { 00:28:40.958 "trtype": "TCP" 00:28:40.958 } 00:28:40.958 ] 00:28:40.958 }, 00:28:40.958 { 00:28:40.958 "name": "nvmf_tgt_poll_group_002", 00:28:40.958 "admin_qpairs": 0, 00:28:40.958 "io_qpairs": 0, 00:28:40.958 "current_admin_qpairs": 0, 00:28:40.958 "current_io_qpairs": 0, 00:28:40.958 "pending_bdev_io": 0, 00:28:40.958 "completed_nvme_io": 0, 00:28:40.958 "transports": [ 00:28:40.958 { 00:28:40.958 "trtype": "TCP" 00:28:40.958 } 00:28:40.958 ] 00:28:40.958 }, 00:28:40.958 { 00:28:40.958 "name": "nvmf_tgt_poll_group_003", 00:28:40.958 "admin_qpairs": 0, 00:28:40.958 "io_qpairs": 0, 00:28:40.958 "current_admin_qpairs": 0, 00:28:40.958 "current_io_qpairs": 0, 00:28:40.958 "pending_bdev_io": 0, 00:28:40.958 "completed_nvme_io": 0, 00:28:40.958 "transports": [ 00:28:40.958 { 00:28:40.958 "trtype": "TCP" 00:28:40.958 } 00:28:40.958 ] 00:28:40.958 } 00:28:40.958 ] 00:28:40.958 }' 00:28:40.958 14:44:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:28:40.958 14:44:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:28:40.958 14:44:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:28:40.958 14:44:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:28:40.958 14:44:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 1455112 00:28:49.081 Initializing NVMe Controllers 00:28:49.081 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:49.081 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:49.081 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:49.081 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:49.081 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:49.081 Initialization complete. Launching workers. 
00:28:49.081 ======================================================== 00:28:49.081 Latency(us) 00:28:49.081 Device Information : IOPS MiB/s Average min max 00:28:49.081 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 8229.00 32.14 7779.57 1606.28 54668.79 00:28:49.081 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6320.20 24.69 10127.05 1733.58 54560.61 00:28:49.081 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7089.40 27.69 9029.21 1423.43 54717.43 00:28:49.081 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5173.10 20.21 12371.99 1991.38 55071.30 00:28:49.081 ======================================================== 00:28:49.081 Total : 26811.70 104.73 9549.42 1423.43 55071.30 00:28:49.081 00:28:49.081 14:44:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:28:49.081 14:44:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # nvmfcleanup 00:28:49.081 14:44:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:28:49.081 14:44:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:49.081 14:44:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:28:49.081 14:44:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:49.081 14:44:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:49.081 rmmod nvme_tcp 00:28:49.081 rmmod nvme_fabrics 00:28:49.081 rmmod nvme_keyring 00:28:49.081 14:44:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:49.081 14:44:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:28:49.081 14:44:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:28:49.081 14:44:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@513 -- # '[' -n 1455079 ']' 00:28:49.081 14:44:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # killprocess 1455079 00:28:49.081 14:44:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 1455079 ']' 00:28:49.081 14:44:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 1455079 00:28:49.081 14:44:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:28:49.081 14:44:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:49.081 14:44:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1455079 00:28:49.081 14:44:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:49.081 14:44:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:49.081 14:44:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1455079' 00:28:49.081 killing process with pid 1455079 00:28:49.081 14:44:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 1455079 00:28:49.081 14:44:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 1455079 00:28:49.341 14:44:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:28:49.341 
14:44:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:28:49.341 14:44:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:28:49.341 14:44:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:49.341 14:44:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # iptables-save 00:28:49.341 14:44:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # iptables-restore 00:28:49.341 14:44:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:28:49.341 14:44:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:49.341 14:44:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:49.341 14:44:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:49.341 14:44:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:49.341 14:44:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:51.245 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:51.245 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:28:51.245 00:28:51.245 real 0m45.029s 00:28:51.245 user 2m32.729s 00:28:51.245 sys 0m12.230s 00:28:51.245 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:51.245 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:51.245 ************************************ 00:28:51.245 END TEST nvmf_perf_adq 00:28:51.245 ************************************ 00:28:51.504 14:44:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:51.504 14:44:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:51.504 14:44:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:51.504 14:44:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:51.504 ************************************ 00:28:51.504 START TEST nvmf_shutdown 00:28:51.504 ************************************ 00:28:51.504 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:51.504 * Looking for test storage... 
00:28:51.504 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:51.504 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:51.504 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:28:51.504 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:51.504 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:51.504 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:51.504 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:51.504 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:51.504 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:28:51.504 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:28:51.504 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:28:51.504 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:28:51.504 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:28:51.504 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:28:51.504 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:28:51.504 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:51.504 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:28:51.504 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:28:51.504 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:51.504 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:51.504 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:28:51.504 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:28:51.504 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:51.504 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:28:51.504 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:28:51.504 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:28:51.504 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:28:51.504 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:51.504 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:28:51.504 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:28:51.504 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:51.504 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:51.504 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:28:51.504 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:51.504 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:51.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:51.504 --rc genhtml_branch_coverage=1 00:28:51.504 --rc genhtml_function_coverage=1 00:28:51.504 --rc genhtml_legend=1 00:28:51.504 --rc geninfo_all_blocks=1 00:28:51.504 --rc geninfo_unexecuted_blocks=1 00:28:51.504 00:28:51.504 ' 00:28:51.504 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:51.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:51.504 --rc genhtml_branch_coverage=1 00:28:51.504 --rc genhtml_function_coverage=1 00:28:51.504 --rc genhtml_legend=1 00:28:51.504 --rc geninfo_all_blocks=1 00:28:51.504 --rc geninfo_unexecuted_blocks=1 00:28:51.504 00:28:51.504 ' 00:28:51.504 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:51.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:51.504 --rc genhtml_branch_coverage=1 00:28:51.504 --rc genhtml_function_coverage=1 00:28:51.504 --rc genhtml_legend=1 00:28:51.504 --rc geninfo_all_blocks=1 00:28:51.504 --rc geninfo_unexecuted_blocks=1 00:28:51.504 00:28:51.504 ' 00:28:51.504 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:51.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:51.504 --rc genhtml_branch_coverage=1 00:28:51.504 --rc genhtml_function_coverage=1 00:28:51.504 --rc genhtml_legend=1 00:28:51.504 --rc geninfo_all_blocks=1 00:28:51.504 --rc geninfo_unexecuted_blocks=1 00:28:51.504 00:28:51.504 ' 00:28:51.504 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:51.504 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
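The block above is autotest_common.sh deciding which coverage options it can use: it extracts the last field of lcov --version with awk, compares 1.15 against that value through the lt/cmp_versions helpers in scripts/common.sh (which split the dotted versions and compare them component by component), and exports the matching --rc lcov_branch_coverage / --rc lcov_function_coverage option strings into LCOV_OPTS and LCOV. A standalone sketch of that kind of dotted-version comparison (illustrative only, not the repository's helper):

    # true if dotted version $1 sorts before $2; sort -V gives the same ordering
    # that a component-by-component numeric comparison would
    version_lt() {
        [ "$1" != "$2" ] && [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
    }
    version_lt 1.15 "$(lcov --version | awk '{print $NF}')" && echo "installed lcov is newer than 1.15"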
00:28:51.504 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:51.504 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:51.504 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:51.504 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:51.504 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:51.504 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:51.504 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:51.504 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:51.504 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:51.504 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:51.504 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:51.504 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:51.504 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:51.504 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:51.504 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:51.504 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:51.504 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:51.504 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:28:51.504 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:51.504 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:51.504 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:51.504 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.505 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.505 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.505 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:28:51.505 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.505 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:28:51.505 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:51.505 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:51.505 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:51.505 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:51.505 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:51.505 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:51.505 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:51.505 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:51.505 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:51.505 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:51.505 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:51.505 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:51.505 14:44:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@169 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:28:51.505 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:51.505 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:51.505 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:51.505 ************************************ 00:28:51.505 START TEST nvmf_shutdown_tc1 00:28:51.505 ************************************ 00:28:51.505 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:28:51.505 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:28:51.505 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:51.505 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:28:51.505 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:51.505 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@472 -- # prepare_net_devs 00:28:51.505 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@434 -- # local -g is_hw=no 00:28:51.505 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@436 -- # remove_spdk_ns 00:28:51.505 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:51.505 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:51.505 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:51.505 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:28:51.505 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:28:51.505 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:51.505 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:54.036 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:54.036 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:54.036 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:54.036 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:54.036 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:54.036 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:54.036 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:54.036 14:44:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:28:54.036 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:54.036 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:28:54.036 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:28:54.036 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:28:54.036 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:28:54.036 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:28:54.036 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:54.036 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:54.036 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:54.036 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:54.036 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:54.036 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:54.036 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:54.036 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:54.036 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:54.036 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:54.036 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:54.036 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:54.036 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:28:54.036 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:28:54.036 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:28:54.036 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:28:54.036 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:28:54.036 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:28:54.036 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:54.036 14:44:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:54.036 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:54.036 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:28:54.036 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:28:54.036 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:54.036 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:54.036 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:54.036 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:54.036 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:54.036 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:54.036 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:28:54.036 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:28:54.036 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:54.036 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:54.036 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:54.036 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:28:54.036 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:28:54.036 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:28:54.036 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:54.036 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:54.036 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:54.036 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:54.036 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:54.036 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:54.036 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:54.036 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:54.036 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:54.036 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:54.037 14:44:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:54.037 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:54.037 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:54.037 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:54.037 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:54.037 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:54.037 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:54.037 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:54.037 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:54.037 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:54.037 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:28:54.037 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # is_hw=yes 00:28:54.037 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:28:54.037 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:28:54.037 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:28:54.037 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:54.037 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:54.037 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:54.037 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:54.037 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:54.037 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:54.037 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:54.037 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:54.037 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:54.037 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:54.037 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:54.037 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:28:54.037 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:54.037 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:54.037 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:54.037 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:54.037 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:54.037 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:54.037 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:54.037 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:54.037 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:54.037 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:54.037 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:54.037 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:54.037 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.298 ms 00:28:54.037 00:28:54.037 --- 10.0.0.2 ping statistics --- 00:28:54.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:54.037 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:28:54.037 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:54.037 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:54.037 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:28:54.037 00:28:54.037 --- 10.0.0.1 ping statistics --- 00:28:54.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:54.037 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:28:54.037 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:54.037 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # return 0 00:28:54.037 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:28:54.037 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:54.037 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:28:54.037 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:28:54.037 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:54.037 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:28:54.037 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:28:54.037 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:54.037 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:28:54.037 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:54.037 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:54.037 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@505 -- # nvmfpid=1458283 00:28:54.037 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:54.037 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@506 -- # waitforlisten 1458283 00:28:54.037 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 1458283 ']' 00:28:54.037 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:54.037 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:54.037 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:54.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
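nvmfappstart launches nvmf_tgt in the background inside the target namespace and then blocks in waitforlisten until the application is accepting RPCs on /var/tmp/spdk.sock; only after that do the rpc_cmd calls below run. A minimal sketch of that kind of wait loop against the same socket path (illustrative, not the waitforlisten helper itself):

    # poll until the target answers a trivial RPC on its UNIX socket, give up after ~10s
    for _ in $(seq 1 100); do
        if scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods > /dev/null 2>&1; then
            break
        fi
        sleep 0.1
    done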
00:28:54.037 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:54.037 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:54.037 [2024-11-02 14:44:45.793616] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:28:54.037 [2024-11-02 14:44:45.793694] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:54.037 [2024-11-02 14:44:45.866026] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:54.037 [2024-11-02 14:44:45.958860] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:54.037 [2024-11-02 14:44:45.958926] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:54.037 [2024-11-02 14:44:45.958950] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:54.037 [2024-11-02 14:44:45.958964] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:54.037 [2024-11-02 14:44:45.958976] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:54.037 [2024-11-02 14:44:45.959089] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:28:54.037 [2024-11-02 14:44:45.959201] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:28:54.037 [2024-11-02 14:44:45.959272] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:28:54.037 [2024-11-02 14:44:45.959291] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:54.296 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:54.296 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:28:54.296 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:28:54.296 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:54.296 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:54.296 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:54.296 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:54.296 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:54.296 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:54.296 [2024-11-02 14:44:46.124337] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:54.296 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:54.296 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:54.296 14:44:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:54.296 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:54.296 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:54.296 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:54.296 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:54.296 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:54.296 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:54.296 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:54.296 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:54.296 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:54.296 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:54.296 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:54.296 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:54.296 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:54.296 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:54.296 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:54.296 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:54.296 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:54.296 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:54.296 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:54.296 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:54.296 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:54.296 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:54.296 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:54.296 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:54.297 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:54.297 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:54.297 Malloc1 
00:28:54.297 [2024-11-02 14:44:46.217848] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:54.297 Malloc2 00:28:54.297 Malloc3 00:28:54.297 Malloc4 00:28:54.555 Malloc5 00:28:54.555 Malloc6 00:28:54.555 Malloc7 00:28:54.555 Malloc8 00:28:54.555 Malloc9 00:28:54.813 Malloc10 00:28:54.813 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:54.813 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:54.813 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:54.813 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:54.813 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=1458461 00:28:54.813 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 1458461 /var/tmp/bdevperf.sock 00:28:54.813 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 1458461 ']' 00:28:54.813 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:54.813 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:28:54.814 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:54.814 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:54.814 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # config=() 00:28:54.814 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:54.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
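[editor's note] The create_subsystems step above writes one RPC batch per index into rpcs.txt (not shown verbatim in this excerpt) and produces Malloc1 through Malloc10 plus the TCP listener on 10.0.0.2:4420. A single iteration is roughly equivalent to the following scripts/rpc.py calls; the exact sizes and serial numbers are assumptions, only the transport line is copied from the trace.

  # one-time transport setup (as traced above)
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  # per-subsystem setup, here for i=1 (illustrative sizes: 128 MiB bdev, 512 B blocks)
  scripts/rpc.py bdev_malloc_create -b Malloc1 128 512
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420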
00:28:54.814 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # local subsystem config 00:28:54.814 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:54.814 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:54.814 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:54.814 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:54.814 { 00:28:54.814 "params": { 00:28:54.814 "name": "Nvme$subsystem", 00:28:54.814 "trtype": "$TEST_TRANSPORT", 00:28:54.814 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:54.814 "adrfam": "ipv4", 00:28:54.814 "trsvcid": "$NVMF_PORT", 00:28:54.814 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:54.814 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:54.814 "hdgst": ${hdgst:-false}, 00:28:54.814 "ddgst": ${ddgst:-false} 00:28:54.814 }, 00:28:54.814 "method": "bdev_nvme_attach_controller" 00:28:54.814 } 00:28:54.814 EOF 00:28:54.814 )") 00:28:54.814 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:54.814 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:54.814 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:54.814 { 00:28:54.814 "params": { 00:28:54.814 "name": "Nvme$subsystem", 00:28:54.814 "trtype": "$TEST_TRANSPORT", 00:28:54.814 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:54.814 "adrfam": "ipv4", 00:28:54.814 "trsvcid": "$NVMF_PORT", 00:28:54.814 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:54.814 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:54.814 "hdgst": ${hdgst:-false}, 00:28:54.814 "ddgst": ${ddgst:-false} 00:28:54.814 }, 00:28:54.814 "method": "bdev_nvme_attach_controller" 00:28:54.814 } 00:28:54.814 EOF 00:28:54.814 )") 00:28:54.814 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:54.814 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:54.814 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:54.814 { 00:28:54.814 "params": { 00:28:54.814 "name": "Nvme$subsystem", 00:28:54.814 "trtype": "$TEST_TRANSPORT", 00:28:54.814 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:54.814 "adrfam": "ipv4", 00:28:54.814 "trsvcid": "$NVMF_PORT", 00:28:54.814 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:54.814 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:54.814 "hdgst": ${hdgst:-false}, 00:28:54.814 "ddgst": ${ddgst:-false} 00:28:54.814 }, 00:28:54.814 "method": "bdev_nvme_attach_controller" 00:28:54.814 } 00:28:54.814 EOF 00:28:54.814 )") 00:28:54.814 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:54.814 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:54.814 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:54.814 { 00:28:54.814 "params": { 00:28:54.814 "name": "Nvme$subsystem", 00:28:54.814 
"trtype": "$TEST_TRANSPORT", 00:28:54.814 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:54.814 "adrfam": "ipv4", 00:28:54.814 "trsvcid": "$NVMF_PORT", 00:28:54.814 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:54.814 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:54.814 "hdgst": ${hdgst:-false}, 00:28:54.814 "ddgst": ${ddgst:-false} 00:28:54.814 }, 00:28:54.814 "method": "bdev_nvme_attach_controller" 00:28:54.814 } 00:28:54.814 EOF 00:28:54.814 )") 00:28:54.814 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:54.814 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:54.814 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:54.814 { 00:28:54.814 "params": { 00:28:54.814 "name": "Nvme$subsystem", 00:28:54.814 "trtype": "$TEST_TRANSPORT", 00:28:54.814 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:54.814 "adrfam": "ipv4", 00:28:54.814 "trsvcid": "$NVMF_PORT", 00:28:54.814 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:54.814 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:54.814 "hdgst": ${hdgst:-false}, 00:28:54.814 "ddgst": ${ddgst:-false} 00:28:54.814 }, 00:28:54.814 "method": "bdev_nvme_attach_controller" 00:28:54.814 } 00:28:54.814 EOF 00:28:54.814 )") 00:28:54.814 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:54.814 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:54.814 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:54.814 { 00:28:54.814 "params": { 00:28:54.814 "name": "Nvme$subsystem", 00:28:54.814 "trtype": "$TEST_TRANSPORT", 00:28:54.814 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:54.814 "adrfam": "ipv4", 00:28:54.814 "trsvcid": "$NVMF_PORT", 00:28:54.814 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:54.814 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:54.814 "hdgst": ${hdgst:-false}, 00:28:54.814 "ddgst": ${ddgst:-false} 00:28:54.814 }, 00:28:54.814 "method": "bdev_nvme_attach_controller" 00:28:54.814 } 00:28:54.814 EOF 00:28:54.814 )") 00:28:54.814 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:54.814 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:54.814 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:54.814 { 00:28:54.814 "params": { 00:28:54.814 "name": "Nvme$subsystem", 00:28:54.814 "trtype": "$TEST_TRANSPORT", 00:28:54.814 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:54.814 "adrfam": "ipv4", 00:28:54.814 "trsvcid": "$NVMF_PORT", 00:28:54.814 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:54.814 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:54.814 "hdgst": ${hdgst:-false}, 00:28:54.814 "ddgst": ${ddgst:-false} 00:28:54.814 }, 00:28:54.814 "method": "bdev_nvme_attach_controller" 00:28:54.814 } 00:28:54.814 EOF 00:28:54.814 )") 00:28:54.814 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:54.814 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:54.814 14:44:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:54.814 { 00:28:54.814 "params": { 00:28:54.814 "name": "Nvme$subsystem", 00:28:54.814 "trtype": "$TEST_TRANSPORT", 00:28:54.814 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:54.814 "adrfam": "ipv4", 00:28:54.814 "trsvcid": "$NVMF_PORT", 00:28:54.814 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:54.814 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:54.814 "hdgst": ${hdgst:-false}, 00:28:54.814 "ddgst": ${ddgst:-false} 00:28:54.814 }, 00:28:54.814 "method": "bdev_nvme_attach_controller" 00:28:54.814 } 00:28:54.814 EOF 00:28:54.814 )") 00:28:54.814 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:54.814 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:54.814 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:54.814 { 00:28:54.814 "params": { 00:28:54.814 "name": "Nvme$subsystem", 00:28:54.814 "trtype": "$TEST_TRANSPORT", 00:28:54.814 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:54.814 "adrfam": "ipv4", 00:28:54.814 "trsvcid": "$NVMF_PORT", 00:28:54.814 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:54.814 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:54.814 "hdgst": ${hdgst:-false}, 00:28:54.814 "ddgst": ${ddgst:-false} 00:28:54.814 }, 00:28:54.814 "method": "bdev_nvme_attach_controller" 00:28:54.814 } 00:28:54.814 EOF 00:28:54.814 )") 00:28:54.814 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:54.814 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:54.814 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:54.814 { 00:28:54.814 "params": { 00:28:54.814 "name": "Nvme$subsystem", 00:28:54.814 "trtype": "$TEST_TRANSPORT", 00:28:54.814 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:54.814 "adrfam": "ipv4", 00:28:54.814 "trsvcid": "$NVMF_PORT", 00:28:54.814 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:54.814 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:54.814 "hdgst": ${hdgst:-false}, 00:28:54.814 "ddgst": ${ddgst:-false} 00:28:54.814 }, 00:28:54.814 "method": "bdev_nvme_attach_controller" 00:28:54.814 } 00:28:54.814 EOF 00:28:54.814 )") 00:28:54.814 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:54.815 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # jq . 
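[editor's note] The loop above appends one heredoc fragment per subsystem to a bash array, joins the fragments with commas (IFS=,) and runs the result through jq before handing it to the app via process substitution (--json /dev/fd/63); the fully expanded output is printed immediately below. A self-contained miniature of the same pattern, trimmed to two entries (the real helper's outer wrapper object is not visible in this excerpt, so the plain JSON array here is an assumption):

  config=()
  for i in 1 2; do
    config+=("{\"params\":{\"name\":\"Nvme$i\",\"trtype\":\"tcp\",\"traddr\":\"10.0.0.2\",\"adrfam\":\"ipv4\",\"trsvcid\":\"4420\",\"subnqn\":\"nqn.2016-06.io.spdk:cnode$i\",\"hostnqn\":\"nqn.2016-06.io.spdk:host$i\",\"hdgst\":false,\"ddgst\":false},\"method\":\"bdev_nvme_attach_controller\"}")
  done
  ( IFS=,; printf '[%s]\n' "${config[*]}" ) | jq .    # join the fragments and validate/pretty-print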
00:28:54.815 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@581 -- # IFS=, 00:28:54.815 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:28:54.815 "params": { 00:28:54.815 "name": "Nvme1", 00:28:54.815 "trtype": "tcp", 00:28:54.815 "traddr": "10.0.0.2", 00:28:54.815 "adrfam": "ipv4", 00:28:54.815 "trsvcid": "4420", 00:28:54.815 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:54.815 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:54.815 "hdgst": false, 00:28:54.815 "ddgst": false 00:28:54.815 }, 00:28:54.815 "method": "bdev_nvme_attach_controller" 00:28:54.815 },{ 00:28:54.815 "params": { 00:28:54.815 "name": "Nvme2", 00:28:54.815 "trtype": "tcp", 00:28:54.815 "traddr": "10.0.0.2", 00:28:54.815 "adrfam": "ipv4", 00:28:54.815 "trsvcid": "4420", 00:28:54.815 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:54.815 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:54.815 "hdgst": false, 00:28:54.815 "ddgst": false 00:28:54.815 }, 00:28:54.815 "method": "bdev_nvme_attach_controller" 00:28:54.815 },{ 00:28:54.815 "params": { 00:28:54.815 "name": "Nvme3", 00:28:54.815 "trtype": "tcp", 00:28:54.815 "traddr": "10.0.0.2", 00:28:54.815 "adrfam": "ipv4", 00:28:54.815 "trsvcid": "4420", 00:28:54.815 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:54.815 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:54.815 "hdgst": false, 00:28:54.815 "ddgst": false 00:28:54.815 }, 00:28:54.815 "method": "bdev_nvme_attach_controller" 00:28:54.815 },{ 00:28:54.815 "params": { 00:28:54.815 "name": "Nvme4", 00:28:54.815 "trtype": "tcp", 00:28:54.815 "traddr": "10.0.0.2", 00:28:54.815 "adrfam": "ipv4", 00:28:54.815 "trsvcid": "4420", 00:28:54.815 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:54.815 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:54.815 "hdgst": false, 00:28:54.815 "ddgst": false 00:28:54.815 }, 00:28:54.815 "method": "bdev_nvme_attach_controller" 00:28:54.815 },{ 00:28:54.815 "params": { 00:28:54.815 "name": "Nvme5", 00:28:54.815 "trtype": "tcp", 00:28:54.815 "traddr": "10.0.0.2", 00:28:54.815 "adrfam": "ipv4", 00:28:54.815 "trsvcid": "4420", 00:28:54.815 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:54.815 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:54.815 "hdgst": false, 00:28:54.815 "ddgst": false 00:28:54.815 }, 00:28:54.815 "method": "bdev_nvme_attach_controller" 00:28:54.815 },{ 00:28:54.815 "params": { 00:28:54.815 "name": "Nvme6", 00:28:54.815 "trtype": "tcp", 00:28:54.815 "traddr": "10.0.0.2", 00:28:54.815 "adrfam": "ipv4", 00:28:54.815 "trsvcid": "4420", 00:28:54.815 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:54.815 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:54.815 "hdgst": false, 00:28:54.815 "ddgst": false 00:28:54.815 }, 00:28:54.815 "method": "bdev_nvme_attach_controller" 00:28:54.815 },{ 00:28:54.815 "params": { 00:28:54.815 "name": "Nvme7", 00:28:54.815 "trtype": "tcp", 00:28:54.815 "traddr": "10.0.0.2", 00:28:54.815 "adrfam": "ipv4", 00:28:54.815 "trsvcid": "4420", 00:28:54.815 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:54.815 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:54.815 "hdgst": false, 00:28:54.815 "ddgst": false 00:28:54.815 }, 00:28:54.815 "method": "bdev_nvme_attach_controller" 00:28:54.815 },{ 00:28:54.815 "params": { 00:28:54.815 "name": "Nvme8", 00:28:54.815 "trtype": "tcp", 00:28:54.815 "traddr": "10.0.0.2", 00:28:54.815 "adrfam": "ipv4", 00:28:54.815 "trsvcid": "4420", 00:28:54.815 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:54.815 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:28:54.815 "hdgst": false, 00:28:54.815 "ddgst": false 00:28:54.815 }, 00:28:54.815 "method": "bdev_nvme_attach_controller" 00:28:54.815 },{ 00:28:54.815 "params": { 00:28:54.815 "name": "Nvme9", 00:28:54.815 "trtype": "tcp", 00:28:54.815 "traddr": "10.0.0.2", 00:28:54.815 "adrfam": "ipv4", 00:28:54.815 "trsvcid": "4420", 00:28:54.815 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:54.815 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:54.815 "hdgst": false, 00:28:54.815 "ddgst": false 00:28:54.815 }, 00:28:54.815 "method": "bdev_nvme_attach_controller" 00:28:54.815 },{ 00:28:54.815 "params": { 00:28:54.815 "name": "Nvme10", 00:28:54.815 "trtype": "tcp", 00:28:54.815 "traddr": "10.0.0.2", 00:28:54.815 "adrfam": "ipv4", 00:28:54.815 "trsvcid": "4420", 00:28:54.815 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:54.815 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:54.815 "hdgst": false, 00:28:54.815 "ddgst": false 00:28:54.815 }, 00:28:54.815 "method": "bdev_nvme_attach_controller" 00:28:54.815 }' 00:28:54.815 [2024-11-02 14:44:46.720428] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:28:54.815 [2024-11-02 14:44:46.720505] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:28:54.815 [2024-11-02 14:44:46.786026] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:55.073 [2024-11-02 14:44:46.874194] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:56.971 14:44:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:56.971 14:44:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:28:56.971 14:44:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:56.971 14:44:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:56.971 14:44:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:56.971 14:44:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:56.971 14:44:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 1458461 00:28:56.971 14:44:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:28:56.971 14:44:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:28:57.905 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 1458461 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:28:57.905 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 1458283 00:28:57.905 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:57.905 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # 
gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:57.905 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # config=() 00:28:57.905 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # local subsystem config 00:28:57.905 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:57.905 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:57.905 { 00:28:57.905 "params": { 00:28:57.905 "name": "Nvme$subsystem", 00:28:57.905 "trtype": "$TEST_TRANSPORT", 00:28:57.905 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:57.905 "adrfam": "ipv4", 00:28:57.905 "trsvcid": "$NVMF_PORT", 00:28:57.905 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:57.905 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:57.905 "hdgst": ${hdgst:-false}, 00:28:57.905 "ddgst": ${ddgst:-false} 00:28:57.905 }, 00:28:57.905 "method": "bdev_nvme_attach_controller" 00:28:57.905 } 00:28:57.905 EOF 00:28:57.905 )") 00:28:57.905 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:57.905 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:57.905 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:57.905 { 00:28:57.905 "params": { 00:28:57.905 "name": "Nvme$subsystem", 00:28:57.905 "trtype": "$TEST_TRANSPORT", 00:28:57.905 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:57.905 "adrfam": "ipv4", 00:28:57.905 "trsvcid": "$NVMF_PORT", 00:28:57.905 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:57.905 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:57.905 "hdgst": ${hdgst:-false}, 00:28:57.905 "ddgst": ${ddgst:-false} 00:28:57.905 }, 00:28:57.905 "method": "bdev_nvme_attach_controller" 00:28:57.905 } 00:28:57.905 EOF 00:28:57.905 )") 00:28:57.905 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:57.905 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:57.905 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:57.905 { 00:28:57.905 "params": { 00:28:57.905 "name": "Nvme$subsystem", 00:28:57.905 "trtype": "$TEST_TRANSPORT", 00:28:57.905 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:57.905 "adrfam": "ipv4", 00:28:57.905 "trsvcid": "$NVMF_PORT", 00:28:57.905 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:57.905 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:57.905 "hdgst": ${hdgst:-false}, 00:28:57.905 "ddgst": ${ddgst:-false} 00:28:57.905 }, 00:28:57.905 "method": "bdev_nvme_attach_controller" 00:28:57.905 } 00:28:57.905 EOF 00:28:57.905 )") 00:28:57.905 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:57.905 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:57.905 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:57.905 { 00:28:57.905 "params": { 00:28:57.905 "name": "Nvme$subsystem", 00:28:57.905 "trtype": "$TEST_TRANSPORT", 00:28:57.905 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:57.905 "adrfam": 
"ipv4", 00:28:57.905 "trsvcid": "$NVMF_PORT", 00:28:57.905 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:57.905 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:57.905 "hdgst": ${hdgst:-false}, 00:28:57.905 "ddgst": ${ddgst:-false} 00:28:57.905 }, 00:28:57.905 "method": "bdev_nvme_attach_controller" 00:28:57.905 } 00:28:57.905 EOF 00:28:57.905 )") 00:28:57.905 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:57.905 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:57.905 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:57.905 { 00:28:57.905 "params": { 00:28:57.905 "name": "Nvme$subsystem", 00:28:57.905 "trtype": "$TEST_TRANSPORT", 00:28:57.905 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:57.905 "adrfam": "ipv4", 00:28:57.905 "trsvcid": "$NVMF_PORT", 00:28:57.905 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:57.905 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:57.905 "hdgst": ${hdgst:-false}, 00:28:57.905 "ddgst": ${ddgst:-false} 00:28:57.905 }, 00:28:57.905 "method": "bdev_nvme_attach_controller" 00:28:57.905 } 00:28:57.905 EOF 00:28:57.905 )") 00:28:57.906 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:57.906 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:57.906 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:57.906 { 00:28:57.906 "params": { 00:28:57.906 "name": "Nvme$subsystem", 00:28:57.906 "trtype": "$TEST_TRANSPORT", 00:28:57.906 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:57.906 "adrfam": "ipv4", 00:28:57.906 "trsvcid": "$NVMF_PORT", 00:28:57.906 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:57.906 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:57.906 "hdgst": ${hdgst:-false}, 00:28:57.906 "ddgst": ${ddgst:-false} 00:28:57.906 }, 00:28:57.906 "method": "bdev_nvme_attach_controller" 00:28:57.906 } 00:28:57.906 EOF 00:28:57.906 )") 00:28:57.906 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:57.906 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:57.906 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:57.906 { 00:28:57.906 "params": { 00:28:57.906 "name": "Nvme$subsystem", 00:28:57.906 "trtype": "$TEST_TRANSPORT", 00:28:57.906 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:57.906 "adrfam": "ipv4", 00:28:57.906 "trsvcid": "$NVMF_PORT", 00:28:57.906 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:57.906 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:57.906 "hdgst": ${hdgst:-false}, 00:28:57.906 "ddgst": ${ddgst:-false} 00:28:57.906 }, 00:28:57.906 "method": "bdev_nvme_attach_controller" 00:28:57.906 } 00:28:57.906 EOF 00:28:57.906 )") 00:28:57.906 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:57.906 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:57.906 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 
00:28:57.906 { 00:28:57.906 "params": { 00:28:57.906 "name": "Nvme$subsystem", 00:28:57.906 "trtype": "$TEST_TRANSPORT", 00:28:57.906 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:57.906 "adrfam": "ipv4", 00:28:57.906 "trsvcid": "$NVMF_PORT", 00:28:57.906 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:57.906 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:57.906 "hdgst": ${hdgst:-false}, 00:28:57.906 "ddgst": ${ddgst:-false} 00:28:57.906 }, 00:28:57.906 "method": "bdev_nvme_attach_controller" 00:28:57.906 } 00:28:57.906 EOF 00:28:57.906 )") 00:28:57.906 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:57.906 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:57.906 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:57.906 { 00:28:57.906 "params": { 00:28:57.906 "name": "Nvme$subsystem", 00:28:57.906 "trtype": "$TEST_TRANSPORT", 00:28:57.906 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:57.906 "adrfam": "ipv4", 00:28:57.906 "trsvcid": "$NVMF_PORT", 00:28:57.906 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:57.906 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:57.906 "hdgst": ${hdgst:-false}, 00:28:57.906 "ddgst": ${ddgst:-false} 00:28:57.906 }, 00:28:57.906 "method": "bdev_nvme_attach_controller" 00:28:57.906 } 00:28:57.906 EOF 00:28:57.906 )") 00:28:57.906 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:57.906 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:57.906 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:57.906 { 00:28:57.906 "params": { 00:28:57.906 "name": "Nvme$subsystem", 00:28:57.906 "trtype": "$TEST_TRANSPORT", 00:28:57.906 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:57.906 "adrfam": "ipv4", 00:28:57.906 "trsvcid": "$NVMF_PORT", 00:28:57.906 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:57.906 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:57.906 "hdgst": ${hdgst:-false}, 00:28:57.906 "ddgst": ${ddgst:-false} 00:28:57.906 }, 00:28:57.906 "method": "bdev_nvme_attach_controller" 00:28:57.906 } 00:28:57.906 EOF 00:28:57.906 )") 00:28:57.906 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:57.906 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # jq . 
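[editor's note] This second pass generates the same per-connection config, this time consumed directly by bdevperf (--json /dev/fd/62 -q 64 -o 65536 -w verify -t 1, i.e. queue depth 64, 64 KiB I/Os, verify workload, 1 second). Each config entry is one bdev_nvme_attach_controller call; done interactively against a running app, the first entry would correspond to roughly the following rpc.py invocation (socket path and flag spellings per scripts/rpc.py, stated here as an assumption since the test drives everything through the JSON config instead):

  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b Nvme1 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1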
00:28:57.906 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@581 -- # IFS=, 00:28:57.906 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:28:57.906 "params": { 00:28:57.906 "name": "Nvme1", 00:28:57.906 "trtype": "tcp", 00:28:57.906 "traddr": "10.0.0.2", 00:28:57.906 "adrfam": "ipv4", 00:28:57.906 "trsvcid": "4420", 00:28:57.906 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:57.906 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:57.906 "hdgst": false, 00:28:57.906 "ddgst": false 00:28:57.906 }, 00:28:57.906 "method": "bdev_nvme_attach_controller" 00:28:57.906 },{ 00:28:57.906 "params": { 00:28:57.906 "name": "Nvme2", 00:28:57.906 "trtype": "tcp", 00:28:57.906 "traddr": "10.0.0.2", 00:28:57.906 "adrfam": "ipv4", 00:28:57.906 "trsvcid": "4420", 00:28:57.906 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:57.906 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:57.906 "hdgst": false, 00:28:57.906 "ddgst": false 00:28:57.906 }, 00:28:57.906 "method": "bdev_nvme_attach_controller" 00:28:57.906 },{ 00:28:57.906 "params": { 00:28:57.906 "name": "Nvme3", 00:28:57.906 "trtype": "tcp", 00:28:57.906 "traddr": "10.0.0.2", 00:28:57.906 "adrfam": "ipv4", 00:28:57.906 "trsvcid": "4420", 00:28:57.906 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:57.906 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:57.906 "hdgst": false, 00:28:57.906 "ddgst": false 00:28:57.906 }, 00:28:57.906 "method": "bdev_nvme_attach_controller" 00:28:57.906 },{ 00:28:57.906 "params": { 00:28:57.906 "name": "Nvme4", 00:28:57.906 "trtype": "tcp", 00:28:57.906 "traddr": "10.0.0.2", 00:28:57.906 "adrfam": "ipv4", 00:28:57.906 "trsvcid": "4420", 00:28:57.906 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:57.906 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:57.906 "hdgst": false, 00:28:57.906 "ddgst": false 00:28:57.906 }, 00:28:57.906 "method": "bdev_nvme_attach_controller" 00:28:57.906 },{ 00:28:57.906 "params": { 00:28:57.906 "name": "Nvme5", 00:28:57.906 "trtype": "tcp", 00:28:57.906 "traddr": "10.0.0.2", 00:28:57.906 "adrfam": "ipv4", 00:28:57.906 "trsvcid": "4420", 00:28:57.906 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:57.906 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:57.906 "hdgst": false, 00:28:57.906 "ddgst": false 00:28:57.906 }, 00:28:57.906 "method": "bdev_nvme_attach_controller" 00:28:57.906 },{ 00:28:57.906 "params": { 00:28:57.906 "name": "Nvme6", 00:28:57.906 "trtype": "tcp", 00:28:57.906 "traddr": "10.0.0.2", 00:28:57.906 "adrfam": "ipv4", 00:28:57.906 "trsvcid": "4420", 00:28:57.906 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:57.906 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:57.906 "hdgst": false, 00:28:57.906 "ddgst": false 00:28:57.906 }, 00:28:57.906 "method": "bdev_nvme_attach_controller" 00:28:57.906 },{ 00:28:57.906 "params": { 00:28:57.906 "name": "Nvme7", 00:28:57.906 "trtype": "tcp", 00:28:57.906 "traddr": "10.0.0.2", 00:28:57.906 "adrfam": "ipv4", 00:28:57.906 "trsvcid": "4420", 00:28:57.906 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:57.906 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:57.906 "hdgst": false, 00:28:57.906 "ddgst": false 00:28:57.906 }, 00:28:57.906 "method": "bdev_nvme_attach_controller" 00:28:57.906 },{ 00:28:57.906 "params": { 00:28:57.906 "name": "Nvme8", 00:28:57.906 "trtype": "tcp", 00:28:57.906 "traddr": "10.0.0.2", 00:28:57.906 "adrfam": "ipv4", 00:28:57.906 "trsvcid": "4420", 00:28:57.906 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:57.906 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:28:57.906 "hdgst": false, 00:28:57.906 "ddgst": false 00:28:57.906 }, 00:28:57.906 "method": "bdev_nvme_attach_controller" 00:28:57.906 },{ 00:28:57.906 "params": { 00:28:57.906 "name": "Nvme9", 00:28:57.906 "trtype": "tcp", 00:28:57.906 "traddr": "10.0.0.2", 00:28:57.906 "adrfam": "ipv4", 00:28:57.906 "trsvcid": "4420", 00:28:57.906 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:57.906 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:57.906 "hdgst": false, 00:28:57.906 "ddgst": false 00:28:57.906 }, 00:28:57.906 "method": "bdev_nvme_attach_controller" 00:28:57.906 },{ 00:28:57.906 "params": { 00:28:57.906 "name": "Nvme10", 00:28:57.906 "trtype": "tcp", 00:28:57.906 "traddr": "10.0.0.2", 00:28:57.906 "adrfam": "ipv4", 00:28:57.906 "trsvcid": "4420", 00:28:57.906 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:57.906 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:57.906 "hdgst": false, 00:28:57.906 "ddgst": false 00:28:57.906 }, 00:28:57.906 "method": "bdev_nvme_attach_controller" 00:28:57.906 }' 00:28:57.906 [2024-11-02 14:44:49.796433] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:28:57.907 [2024-11-02 14:44:49.796517] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1458874 ] 00:28:57.907 [2024-11-02 14:44:49.862505] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:57.907 [2024-11-02 14:44:49.952248] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:59.283 Running I/O for 1 seconds... 00:29:00.660 1818.00 IOPS, 113.62 MiB/s 00:29:00.660 Latency(us) 00:29:00.660 [2024-11-02T13:44:52.715Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:00.661 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:00.661 Verification LBA range: start 0x0 length 0x400 00:29:00.661 Nvme1n1 : 1.14 228.89 14.31 0.00 0.00 276241.59 2281.62 256318.58 00:29:00.661 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:00.661 Verification LBA range: start 0x0 length 0x400 00:29:00.661 Nvme2n1 : 1.15 222.51 13.91 0.00 0.00 280343.70 21845.33 262532.36 00:29:00.661 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:00.661 Verification LBA range: start 0x0 length 0x400 00:29:00.661 Nvme3n1 : 1.15 277.19 17.32 0.00 0.00 220439.44 17864.63 239230.67 00:29:00.661 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:00.661 Verification LBA range: start 0x0 length 0x400 00:29:00.661 Nvme4n1 : 1.08 237.71 14.86 0.00 0.00 252837.36 22622.06 246997.90 00:29:00.661 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:00.661 Verification LBA range: start 0x0 length 0x400 00:29:00.661 Nvme5n1 : 1.12 237.32 14.83 0.00 0.00 243590.76 5048.70 250104.79 00:29:00.661 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:00.661 Verification LBA range: start 0x0 length 0x400 00:29:00.661 Nvme6n1 : 1.16 220.65 13.79 0.00 0.00 264658.49 29127.11 285834.05 00:29:00.661 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:00.661 Verification LBA range: start 0x0 length 0x400 00:29:00.661 Nvme7n1 : 1.13 227.34 14.21 0.00 0.00 251512.41 18058.81 257872.02 00:29:00.661 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:00.661 
Verification LBA range: start 0x0 length 0x400 00:29:00.661 Nvme8n1 : 1.14 224.09 14.01 0.00 0.00 251159.51 18252.99 256318.58 00:29:00.661 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:00.661 Verification LBA range: start 0x0 length 0x400 00:29:00.661 Nvme9n1 : 1.16 226.70 14.17 0.00 0.00 242565.93 3422.44 259425.47 00:29:00.661 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:00.661 Verification LBA range: start 0x0 length 0x400 00:29:00.661 Nvme10n1 : 1.17 219.36 13.71 0.00 0.00 248519.30 21651.15 292047.83 00:29:00.661 [2024-11-02T13:44:52.716Z] =================================================================================================================== 00:29:00.661 [2024-11-02T13:44:52.716Z] Total : 2321.76 145.11 0.00 0.00 252371.98 2281.62 292047.83 00:29:00.921 14:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:29:00.921 14:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:00.921 14:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:00.921 14:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:00.921 14:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:00.921 14:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # nvmfcleanup 00:29:00.921 14:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:29:00.921 14:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:00.921 14:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:29:00.921 14:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:00.921 14:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:00.921 rmmod nvme_tcp 00:29:00.921 rmmod nvme_fabrics 00:29:00.921 rmmod nvme_keyring 00:29:00.921 14:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:00.921 14:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:29:00.921 14:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:29:00.921 14:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@513 -- # '[' -n 1458283 ']' 00:29:00.921 14:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@514 -- # killprocess 1458283 00:29:00.921 14:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 1458283 ']' 00:29:00.921 14:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 1458283 00:29:00.921 14:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:29:00.921 14:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:00.921 14:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1458283 00:29:00.921 14:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:00.921 14:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:00.921 14:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1458283' 00:29:00.921 killing process with pid 1458283 00:29:00.921 14:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 1458283 00:29:00.921 14:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 1458283 00:29:01.492 14:44:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:29:01.492 14:44:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:29:01.492 14:44:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:29:01.492 14:44:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:29:01.492 14:44:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@787 -- # iptables-save 00:29:01.492 14:44:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:29:01.492 14:44:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@787 -- # iptables-restore 00:29:01.492 14:44:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:01.492 14:44:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:01.492 14:44:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:01.492 14:44:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:01.492 14:44:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:03.394 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:03.394 00:29:03.394 real 0m11.873s 00:29:03.394 user 0m34.108s 00:29:03.394 sys 0m3.308s 00:29:03.394 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:03.394 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:03.394 ************************************ 00:29:03.394 END TEST nvmf_shutdown_tc1 00:29:03.394 ************************************ 00:29:03.394 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:29:03.394 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:03.394 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 
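[editor's note] The nvmf_shutdown_tc1 teardown traced above (nvmftestfini) reduces to roughly the following sequence; the namespace deletion is an assumption about what remove_spdk_ns does, the other lines restate commands visible in the trace.

  kill "$nvmfpid" && wait "$nvmfpid"                     # stop the namespaced nvmf_tgt
  modprobe -r nvme-tcp nvme-fabrics                      # unload host-side NVMe/TCP modules
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the SPDK-tagged ACCEPT rule
  ip netns delete cvl_0_0_ns_spdk                        # assumed behaviour of remove_spdk_ns
  ip -4 addr flush cvl_0_1                               # return the initiator port to a clean state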
00:29:03.394 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:03.655 ************************************ 00:29:03.655 START TEST nvmf_shutdown_tc2 00:29:03.655 ************************************ 00:29:03.655 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:29:03.655 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:29:03.655 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:03.655 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:29:03.655 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:03.655 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@472 -- # prepare_net_devs 00:29:03.655 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@434 -- # local -g is_hw=no 00:29:03.655 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@436 -- # remove_spdk_ns 00:29:03.655 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:03.655 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:03.655 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:03.655 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:29:03.655 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:29:03.655 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:03.655 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:03.655 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:03.655 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:03.655 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:03.655 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:03.655 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:03.655 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:03.655 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:03.655 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:29:03.655 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:03.655 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:29:03.655 14:44:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:29:03.655 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:29:03.655 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:29:03.655 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:29:03.655 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:03.655 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:03.655 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:03.655 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:03.655 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:03.655 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:03.655 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:03.655 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:03.655 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:03.655 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:03.655 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:03.655 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:03.655 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:29:03.655 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:29:03.655 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:29:03.655 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:29:03.655 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:29:03.655 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:29:03.655 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:03.655 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:03.655 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:03.655 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:03.655 14:44:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:03.655 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:03.655 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:03.655 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:03.655 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:03.655 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:03.655 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:03.655 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:03.655 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:03.655 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:03.655 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:03.655 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:03.655 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:29:03.655 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:29:03.655 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:29:03.655 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:03.655 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:03.655 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:03.655 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:03.655 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:03.655 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:03.656 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:03.656 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:03.656 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:03.656 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:03.656 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:03.656 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:03.656 14:44:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:03.656 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:03.656 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:03.656 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:03.656 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:03.656 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:03.656 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:03.656 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:03.656 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:29:03.656 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # is_hw=yes 00:29:03.656 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:29:03.656 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:29:03.656 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:29:03.656 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:03.656 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:03.656 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:03.656 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:03.656 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:03.656 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:03.656 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:03.656 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:03.656 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:03.656 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:03.656 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:03.656 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:03.656 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:03.656 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 
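The device discovery traced above (gather_supported_nvmf_pci_devs in nvmf/common.sh) is, in essence, a walk of /sys/bus/pci/devices that matches the Intel E810 device IDs (0x1592, 0x159b) and collects the kernel net devices bound to each matching function, which is how the two cvl_0_x ports are found. A minimal stand-alone sketch of the same idea, trimmed to the two E810 IDs seen in this run (the full helper also covers X722 and Mellanox parts and the RDMA path):

  #!/usr/bin/env bash
  # List E810 network interfaces by scanning sysfs, in the spirit of
  # gather_supported_nvmf_pci_devs. Only the two E810 device IDs seen in
  # this run are matched; other NIC families are omitted for brevity.
  intel=0x8086
  e810_ids=(0x1592 0x159b)
  for pci in /sys/bus/pci/devices/*; do
      vendor=$(<"$pci/vendor") device=$(<"$pci/device")
      [[ $vendor == "$intel" ]] || continue
      for id in "${e810_ids[@]}"; do
          [[ $device == "$id" ]] || continue
          # Net devices bound to this PCI function, e.g. cvl_0_0
          for net in "$pci"/net/*; do
              [[ -e $net ]] && echo "${pci##*/} -> ${net##*/} ($device)"
          done
      done
  done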
00:29:03.656 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:03.656 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:03.656 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:03.656 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:03.656 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:03.656 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:03.656 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:03.656 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:03.656 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:03.656 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:03.656 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:29:03.656 00:29:03.656 --- 10.0.0.2 ping statistics --- 00:29:03.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:03.656 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:29:03.656 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:03.656 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:03.656 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:29:03.656 00:29:03.656 --- 10.0.0.1 ping statistics --- 00:29:03.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:03.656 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:29:03.656 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:03.656 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # return 0 00:29:03.656 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:29:03.656 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:03.656 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:29:03.656 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:29:03.656 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:03.656 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:29:03.656 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:29:03.656 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:03.656 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:29:03.656 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:03.656 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:03.656 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@505 -- # nvmfpid=1459643 00:29:03.656 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:03.656 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@506 -- # waitforlisten 1459643 00:29:03.656 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1459643 ']' 00:29:03.656 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:03.656 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:03.656 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:03.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
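At this point nvmf_tcp_init has finished building the loopback test topology: the target-side port (cvl_0_0 in this run) is moved into the cvl_0_0_ns_spdk namespace and given 10.0.0.2/24, the initiator-side port (cvl_0_1) stays in the root namespace with 10.0.0.1/24, TCP port 4420 is opened in iptables, and both directions are verified with a single ping. Condensed into one place, the sequence traced above amounts roughly to the following (interface and namespace names are the ones from this run and will differ on other hosts):

  TGT_IF=cvl_0_0  INIT_IF=cvl_0_1  NS=cvl_0_0_ns_spdk
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"
  ip addr add 10.0.0.1/24 dev "$INIT_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  ip link set "$INIT_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  # Allow NVMe/TCP traffic (port 4420) in from the initiator interface,
  # then confirm reachability in both directions before starting the target.
  iptables -I INPUT 1 -i "$INIT_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec "$NS" ping -c 1 10.0.0.1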
00:29:03.656 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:03.656 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:03.656 [2024-11-02 14:44:55.666288] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:29:03.656 [2024-11-02 14:44:55.666365] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:03.915 [2024-11-02 14:44:55.734826] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:03.915 [2024-11-02 14:44:55.825724] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:03.915 [2024-11-02 14:44:55.825789] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:03.915 [2024-11-02 14:44:55.825817] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:03.915 [2024-11-02 14:44:55.825829] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:03.915 [2024-11-02 14:44:55.825839] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:03.915 [2024-11-02 14:44:55.825903] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:29:03.915 [2024-11-02 14:44:55.825963] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:29:03.915 [2024-11-02 14:44:55.826030] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:29:03.915 [2024-11-02 14:44:55.826032] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:03.915 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:03.915 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:29:03.915 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:29:03.915 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:03.915 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:04.174 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:04.174 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:04.174 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:04.174 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:04.174 [2024-11-02 14:44:55.987054] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:04.174 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:04.174 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:04.174 14:44:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:04.174 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:04.174 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:04.174 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:04.174 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:04.174 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:04.174 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:04.174 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:04.174 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:04.174 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:04.174 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:04.174 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:04.174 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:04.174 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:04.174 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:04.174 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:04.174 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:04.174 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:04.174 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:04.174 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:04.174 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:04.174 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:04.174 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:04.174 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:04.174 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:04.174 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:04.174 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:04.174 Malloc1 
00:29:04.174 [2024-11-02 14:44:56.066423] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:04.174 Malloc2 00:29:04.174 Malloc3 00:29:04.174 Malloc4 00:29:04.432 Malloc5 00:29:04.432 Malloc6 00:29:04.432 Malloc7 00:29:04.432 Malloc8 00:29:04.432 Malloc9 00:29:04.432 Malloc10 00:29:04.691 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:04.691 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:04.691 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:04.691 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:04.691 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=1459817 00:29:04.691 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 1459817 /var/tmp/bdevperf.sock 00:29:04.691 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1459817 ']' 00:29:04.691 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:04.691 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:04.691 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:04.691 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:04.691 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:04.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
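The Malloc1 through Malloc10 lines above are the target acknowledging the batched rpc_cmd that the cat calls at target/shutdown.sh:29 assemble: for each of the ten subsystems it creates a malloc bdev, a subsystem named nqn.2016-06.io.spdk:cnodeN, attaches the bdev as a namespace, and adds the 10.0.0.2:4420 TCP listener (hence the "Target Listening" notice). Together with the nvmf_create_transport call issued earlier at target/shutdown.sh:21, issuing the same configuration one call at a time with scripts/rpc.py would look roughly like the sketch below; the malloc size, block size, and serial-number scheme are placeholders, not necessarily the values the harness uses:

  RPC="./scripts/rpc.py"   # talks to the target on /var/tmp/spdk.sock
  $RPC nvmf_create_transport -t tcp -o -u 8192
  for i in $(seq 1 10); do
      $RPC bdev_malloc_create -b "Malloc$i" 64 512     # 64 MiB, 512 B blocks (assumed sizes)
      $RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
      $RPC nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
      $RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
  done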
00:29:04.691 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # config=() 00:29:04.691 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:04.691 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # local subsystem config 00:29:04.691 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:04.691 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:04.691 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:04.691 { 00:29:04.691 "params": { 00:29:04.691 "name": "Nvme$subsystem", 00:29:04.691 "trtype": "$TEST_TRANSPORT", 00:29:04.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.691 "adrfam": "ipv4", 00:29:04.691 "trsvcid": "$NVMF_PORT", 00:29:04.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:04.691 "hdgst": ${hdgst:-false}, 00:29:04.691 "ddgst": ${ddgst:-false} 00:29:04.691 }, 00:29:04.691 "method": "bdev_nvme_attach_controller" 00:29:04.691 } 00:29:04.691 EOF 00:29:04.691 )") 00:29:04.691 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:29:04.691 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:04.691 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:04.691 { 00:29:04.691 "params": { 00:29:04.691 "name": "Nvme$subsystem", 00:29:04.691 "trtype": "$TEST_TRANSPORT", 00:29:04.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.691 "adrfam": "ipv4", 00:29:04.691 "trsvcid": "$NVMF_PORT", 00:29:04.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:04.691 "hdgst": ${hdgst:-false}, 00:29:04.691 "ddgst": ${ddgst:-false} 00:29:04.691 }, 00:29:04.691 "method": "bdev_nvme_attach_controller" 00:29:04.691 } 00:29:04.691 EOF 00:29:04.691 )") 00:29:04.691 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:29:04.691 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:04.691 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:04.691 { 00:29:04.691 "params": { 00:29:04.691 "name": "Nvme$subsystem", 00:29:04.691 "trtype": "$TEST_TRANSPORT", 00:29:04.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.691 "adrfam": "ipv4", 00:29:04.691 "trsvcid": "$NVMF_PORT", 00:29:04.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:04.691 "hdgst": ${hdgst:-false}, 00:29:04.691 "ddgst": ${ddgst:-false} 00:29:04.691 }, 00:29:04.691 "method": "bdev_nvme_attach_controller" 00:29:04.691 } 00:29:04.691 EOF 00:29:04.691 )") 00:29:04.691 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:29:04.691 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:04.691 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- 
# config+=("$(cat <<-EOF 00:29:04.691 { 00:29:04.691 "params": { 00:29:04.691 "name": "Nvme$subsystem", 00:29:04.691 "trtype": "$TEST_TRANSPORT", 00:29:04.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.691 "adrfam": "ipv4", 00:29:04.691 "trsvcid": "$NVMF_PORT", 00:29:04.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:04.691 "hdgst": ${hdgst:-false}, 00:29:04.691 "ddgst": ${ddgst:-false} 00:29:04.691 }, 00:29:04.691 "method": "bdev_nvme_attach_controller" 00:29:04.691 } 00:29:04.691 EOF 00:29:04.691 )") 00:29:04.691 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:29:04.691 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:04.691 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:04.691 { 00:29:04.691 "params": { 00:29:04.691 "name": "Nvme$subsystem", 00:29:04.691 "trtype": "$TEST_TRANSPORT", 00:29:04.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.691 "adrfam": "ipv4", 00:29:04.691 "trsvcid": "$NVMF_PORT", 00:29:04.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:04.691 "hdgst": ${hdgst:-false}, 00:29:04.691 "ddgst": ${ddgst:-false} 00:29:04.691 }, 00:29:04.691 "method": "bdev_nvme_attach_controller" 00:29:04.691 } 00:29:04.691 EOF 00:29:04.691 )") 00:29:04.691 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:29:04.691 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:04.691 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:04.691 { 00:29:04.692 "params": { 00:29:04.692 "name": "Nvme$subsystem", 00:29:04.692 "trtype": "$TEST_TRANSPORT", 00:29:04.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.692 "adrfam": "ipv4", 00:29:04.692 "trsvcid": "$NVMF_PORT", 00:29:04.692 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:04.692 "hdgst": ${hdgst:-false}, 00:29:04.692 "ddgst": ${ddgst:-false} 00:29:04.692 }, 00:29:04.692 "method": "bdev_nvme_attach_controller" 00:29:04.692 } 00:29:04.692 EOF 00:29:04.692 )") 00:29:04.692 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:29:04.692 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:04.692 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:04.692 { 00:29:04.692 "params": { 00:29:04.692 "name": "Nvme$subsystem", 00:29:04.692 "trtype": "$TEST_TRANSPORT", 00:29:04.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.692 "adrfam": "ipv4", 00:29:04.692 "trsvcid": "$NVMF_PORT", 00:29:04.692 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:04.692 "hdgst": ${hdgst:-false}, 00:29:04.692 "ddgst": ${ddgst:-false} 00:29:04.692 }, 00:29:04.692 "method": "bdev_nvme_attach_controller" 00:29:04.692 } 00:29:04.692 EOF 00:29:04.692 )") 00:29:04.692 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:29:04.692 14:44:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:04.692 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:04.692 { 00:29:04.692 "params": { 00:29:04.692 "name": "Nvme$subsystem", 00:29:04.692 "trtype": "$TEST_TRANSPORT", 00:29:04.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.692 "adrfam": "ipv4", 00:29:04.692 "trsvcid": "$NVMF_PORT", 00:29:04.692 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:04.692 "hdgst": ${hdgst:-false}, 00:29:04.692 "ddgst": ${ddgst:-false} 00:29:04.692 }, 00:29:04.692 "method": "bdev_nvme_attach_controller" 00:29:04.692 } 00:29:04.692 EOF 00:29:04.692 )") 00:29:04.692 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:29:04.692 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:04.692 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:04.692 { 00:29:04.692 "params": { 00:29:04.692 "name": "Nvme$subsystem", 00:29:04.692 "trtype": "$TEST_TRANSPORT", 00:29:04.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.692 "adrfam": "ipv4", 00:29:04.692 "trsvcid": "$NVMF_PORT", 00:29:04.692 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:04.692 "hdgst": ${hdgst:-false}, 00:29:04.692 "ddgst": ${ddgst:-false} 00:29:04.692 }, 00:29:04.692 "method": "bdev_nvme_attach_controller" 00:29:04.692 } 00:29:04.692 EOF 00:29:04.692 )") 00:29:04.692 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:29:04.692 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:04.692 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:04.692 { 00:29:04.692 "params": { 00:29:04.692 "name": "Nvme$subsystem", 00:29:04.692 "trtype": "$TEST_TRANSPORT", 00:29:04.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.692 "adrfam": "ipv4", 00:29:04.692 "trsvcid": "$NVMF_PORT", 00:29:04.692 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:04.692 "hdgst": ${hdgst:-false}, 00:29:04.692 "ddgst": ${ddgst:-false} 00:29:04.692 }, 00:29:04.692 "method": "bdev_nvme_attach_controller" 00:29:04.692 } 00:29:04.692 EOF 00:29:04.692 )") 00:29:04.692 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:29:04.692 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # jq . 
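The ten heredoc blocks traced above are gen_nvmf_target_json assembling one bdev_nvme_attach_controller entry per subsystem; the comma-joined result is wrapped into an SPDK JSON config, pretty-printed through jq, and handed to bdevperf on /dev/fd/63. A simplified sketch that produces output of the same shape (the outer "subsystems"/"bdev" wrapper is assumed here, and the real helper may append further entries):

  # Emit a bdev-subsystem JSON config with one NVMe-oF controller per id,
  # mirroring the shape of the blocks traced above.
  gen_target_json_sketch() {
      local entries=() i
      for i in "$@"; do
          entries+=("{\"params\":{\"name\":\"Nvme$i\",\"trtype\":\"tcp\",\"traddr\":\"10.0.0.2\",\"adrfam\":\"ipv4\",\"trsvcid\":\"4420\",\"subnqn\":\"nqn.2016-06.io.spdk:cnode$i\",\"hostnqn\":\"nqn.2016-06.io.spdk:host$i\",\"hdgst\":false,\"ddgst\":false},\"method\":\"bdev_nvme_attach_controller\"}")
      done
      local IFS=,
      echo "{\"subsystems\":[{\"subsystem\":\"bdev\",\"config\":[${entries[*]}]}]}" | jq .
  }
  gen_target_json_sketch 1 2 3   # config for the first three controllers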
00:29:04.692 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@581 -- # IFS=, 00:29:04.692 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:29:04.692 "params": { 00:29:04.692 "name": "Nvme1", 00:29:04.692 "trtype": "tcp", 00:29:04.692 "traddr": "10.0.0.2", 00:29:04.692 "adrfam": "ipv4", 00:29:04.692 "trsvcid": "4420", 00:29:04.692 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:04.692 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:04.692 "hdgst": false, 00:29:04.692 "ddgst": false 00:29:04.692 }, 00:29:04.692 "method": "bdev_nvme_attach_controller" 00:29:04.692 },{ 00:29:04.692 "params": { 00:29:04.692 "name": "Nvme2", 00:29:04.692 "trtype": "tcp", 00:29:04.692 "traddr": "10.0.0.2", 00:29:04.692 "adrfam": "ipv4", 00:29:04.692 "trsvcid": "4420", 00:29:04.692 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:04.692 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:04.692 "hdgst": false, 00:29:04.692 "ddgst": false 00:29:04.692 }, 00:29:04.692 "method": "bdev_nvme_attach_controller" 00:29:04.692 },{ 00:29:04.692 "params": { 00:29:04.692 "name": "Nvme3", 00:29:04.692 "trtype": "tcp", 00:29:04.692 "traddr": "10.0.0.2", 00:29:04.692 "adrfam": "ipv4", 00:29:04.692 "trsvcid": "4420", 00:29:04.692 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:04.692 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:04.692 "hdgst": false, 00:29:04.692 "ddgst": false 00:29:04.692 }, 00:29:04.692 "method": "bdev_nvme_attach_controller" 00:29:04.692 },{ 00:29:04.692 "params": { 00:29:04.692 "name": "Nvme4", 00:29:04.692 "trtype": "tcp", 00:29:04.692 "traddr": "10.0.0.2", 00:29:04.692 "adrfam": "ipv4", 00:29:04.692 "trsvcid": "4420", 00:29:04.692 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:04.692 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:04.692 "hdgst": false, 00:29:04.692 "ddgst": false 00:29:04.692 }, 00:29:04.692 "method": "bdev_nvme_attach_controller" 00:29:04.692 },{ 00:29:04.692 "params": { 00:29:04.692 "name": "Nvme5", 00:29:04.692 "trtype": "tcp", 00:29:04.692 "traddr": "10.0.0.2", 00:29:04.692 "adrfam": "ipv4", 00:29:04.692 "trsvcid": "4420", 00:29:04.692 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:04.692 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:04.692 "hdgst": false, 00:29:04.692 "ddgst": false 00:29:04.692 }, 00:29:04.692 "method": "bdev_nvme_attach_controller" 00:29:04.692 },{ 00:29:04.692 "params": { 00:29:04.692 "name": "Nvme6", 00:29:04.692 "trtype": "tcp", 00:29:04.692 "traddr": "10.0.0.2", 00:29:04.692 "adrfam": "ipv4", 00:29:04.692 "trsvcid": "4420", 00:29:04.692 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:04.692 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:04.692 "hdgst": false, 00:29:04.692 "ddgst": false 00:29:04.692 }, 00:29:04.692 "method": "bdev_nvme_attach_controller" 00:29:04.692 },{ 00:29:04.692 "params": { 00:29:04.692 "name": "Nvme7", 00:29:04.692 "trtype": "tcp", 00:29:04.692 "traddr": "10.0.0.2", 00:29:04.692 "adrfam": "ipv4", 00:29:04.692 "trsvcid": "4420", 00:29:04.692 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:04.692 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:04.692 "hdgst": false, 00:29:04.692 "ddgst": false 00:29:04.692 }, 00:29:04.692 "method": "bdev_nvme_attach_controller" 00:29:04.692 },{ 00:29:04.692 "params": { 00:29:04.692 "name": "Nvme8", 00:29:04.692 "trtype": "tcp", 00:29:04.692 "traddr": "10.0.0.2", 00:29:04.692 "adrfam": "ipv4", 00:29:04.692 "trsvcid": "4420", 00:29:04.692 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:04.692 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:29:04.692 "hdgst": false, 00:29:04.692 "ddgst": false 00:29:04.692 }, 00:29:04.692 "method": "bdev_nvme_attach_controller" 00:29:04.692 },{ 00:29:04.692 "params": { 00:29:04.692 "name": "Nvme9", 00:29:04.692 "trtype": "tcp", 00:29:04.692 "traddr": "10.0.0.2", 00:29:04.692 "adrfam": "ipv4", 00:29:04.692 "trsvcid": "4420", 00:29:04.692 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:04.692 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:04.692 "hdgst": false, 00:29:04.692 "ddgst": false 00:29:04.692 }, 00:29:04.692 "method": "bdev_nvme_attach_controller" 00:29:04.692 },{ 00:29:04.692 "params": { 00:29:04.692 "name": "Nvme10", 00:29:04.692 "trtype": "tcp", 00:29:04.692 "traddr": "10.0.0.2", 00:29:04.692 "adrfam": "ipv4", 00:29:04.692 "trsvcid": "4420", 00:29:04.692 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:04.692 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:04.692 "hdgst": false, 00:29:04.692 "ddgst": false 00:29:04.692 }, 00:29:04.692 "method": "bdev_nvme_attach_controller" 00:29:04.692 }' 00:29:04.692 [2024-11-02 14:44:56.563195] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:29:04.692 [2024-11-02 14:44:56.563304] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1459817 ] 00:29:04.692 [2024-11-02 14:44:56.627518] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:04.692 [2024-11-02 14:44:56.715392] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:06.598 Running I/O for 10 seconds... 00:29:06.598 14:44:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:06.598 14:44:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:29:06.598 14:44:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:06.598 14:44:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.598 14:44:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:06.598 14:44:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.598 14:44:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:06.598 14:44:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:06.598 14:44:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:29:06.598 14:44:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:29:06.598 14:44:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:29:06.598 14:44:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:29:06.598 14:44:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:06.598 14:44:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:06.598 14:44:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:06.598 14:44:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.598 14:44:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:06.598 14:44:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.598 14:44:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:29:06.598 14:44:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:29:06.856 14:44:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:29:06.856 14:44:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:29:06.856 14:44:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:06.857 14:44:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:06.857 14:44:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:06.857 14:44:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.857 14:44:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:07.117 14:44:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.117 14:44:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:29:07.117 14:44:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:29:07.117 14:44:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:29:07.117 14:44:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:29:07.117 14:44:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:29:07.117 14:44:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 1459817 00:29:07.117 14:44:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 1459817 ']' 00:29:07.117 14:44:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 1459817 00:29:07.117 14:44:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:29:07.117 14:44:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:07.117 14:44:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1459817 00:29:07.117 14:44:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:07.117 14:44:58 
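The read_io_count values above (67 on the first poll, 131 after the 0.25 s sleep) come from waitforio in target/shutdown.sh, which keeps querying bdevperf's RPC socket until Nvme1n1 has completed at least 100 reads, so that the shutdown step under test is exercised while I/O is actually in flight. A stand-alone version of that polling loop, assuming the same socket path and bdev name:

  SOCK=/var/tmp/bdevperf.sock
  for attempt in $(seq 10); do
      reads=$(./scripts/rpc.py -s "$SOCK" bdev_get_iostat -b Nvme1n1 \
              | jq -r '.bdevs[0].num_read_ops')
      if [ "$reads" -ge 100 ]; then
          echo "I/O is flowing ($reads reads) after $attempt poll(s)"
          break
      fi
      sleep 0.25
  done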
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:07.117 14:44:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1459817' 00:29:07.117 killing process with pid 1459817 00:29:07.117 14:44:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 1459817 00:29:07.117 14:44:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 1459817 00:29:07.117 Received shutdown signal, test time was about 0.863547 seconds 00:29:07.117 00:29:07.117 Latency(us) 00:29:07.117 [2024-11-02T13:44:59.172Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:07.117 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:07.117 Verification LBA range: start 0x0 length 0x400 00:29:07.117 Nvme1n1 : 0.85 225.54 14.10 0.00 0.00 280114.57 21554.06 257872.02 00:29:07.117 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:07.117 Verification LBA range: start 0x0 length 0x400 00:29:07.117 Nvme2n1 : 0.86 224.23 14.01 0.00 0.00 275643.61 22913.33 279620.27 00:29:07.117 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:07.117 Verification LBA range: start 0x0 length 0x400 00:29:07.117 Nvme3n1 : 0.82 232.96 14.56 0.00 0.00 258883.89 14854.83 273406.48 00:29:07.117 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:07.117 Verification LBA range: start 0x0 length 0x400 00:29:07.117 Nvme4n1 : 0.85 226.93 14.18 0.00 0.00 259912.94 19612.25 273406.48 00:29:07.117 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:07.117 Verification LBA range: start 0x0 length 0x400 00:29:07.117 Nvme5n1 : 0.84 228.77 14.30 0.00 0.00 251412.48 18252.99 271853.04 00:29:07.117 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:07.117 Verification LBA range: start 0x0 length 0x400 00:29:07.117 Nvme6n1 : 0.86 222.56 13.91 0.00 0.00 252822.31 21554.06 270299.59 00:29:07.117 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:07.117 Verification LBA range: start 0x0 length 0x400 00:29:07.117 Nvme7n1 : 0.83 231.04 14.44 0.00 0.00 236406.96 17282.09 274959.93 00:29:07.117 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:07.117 Verification LBA range: start 0x0 length 0x400 00:29:07.117 Nvme8n1 : 0.82 162.38 10.15 0.00 0.00 323291.50 3835.07 292047.83 00:29:07.117 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:07.117 Verification LBA range: start 0x0 length 0x400 00:29:07.117 Nvme9n1 : 0.86 222.82 13.93 0.00 0.00 234636.58 20680.25 271853.04 00:29:07.117 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:07.117 Verification LBA range: start 0x0 length 0x400 00:29:07.117 Nvme10n1 : 0.84 165.10 10.32 0.00 0.00 298889.89 7621.59 316902.97 00:29:07.117 [2024-11-02T13:44:59.172Z] =================================================================================================================== 00:29:07.117 [2024-11-02T13:44:59.172Z] Total : 2142.33 133.90 0.00 0.00 264423.12 3835.07 316902.97 00:29:07.378 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:29:08.315 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@115 -- # kill -0 1459643 00:29:08.315 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:29:08.315 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:08.315 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:08.315 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:08.315 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:08.315 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # nvmfcleanup 00:29:08.315 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:29:08.315 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:08.315 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:29:08.315 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:08.315 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:08.315 rmmod nvme_tcp 00:29:08.315 rmmod nvme_fabrics 00:29:08.315 rmmod nvme_keyring 00:29:08.315 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:08.315 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:29:08.315 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:29:08.315 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@513 -- # '[' -n 1459643 ']' 00:29:08.315 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@514 -- # killprocess 1459643 00:29:08.315 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 1459643 ']' 00:29:08.315 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 1459643 00:29:08.315 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:29:08.315 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:08.315 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1459643 00:29:08.575 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:08.575 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:08.575 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1459643' 00:29:08.575 killing process with pid 1459643 00:29:08.575 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@969 -- # kill 1459643 00:29:08.575 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 1459643 00:29:08.833 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:29:08.833 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:29:08.833 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:29:08.833 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:29:08.833 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@787 -- # iptables-save 00:29:08.833 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:29:08.833 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@787 -- # iptables-restore 00:29:09.093 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:09.093 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:09.093 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:09.093 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:09.093 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:10.998 00:29:10.998 real 0m7.478s 00:29:10.998 user 0m22.208s 00:29:10.998 sys 0m1.514s 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:10.998 ************************************ 00:29:10.998 END TEST nvmf_shutdown_tc2 00:29:10.998 ************************************ 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@171 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:10.998 ************************************ 00:29:10.998 START TEST nvmf_shutdown_tc3 00:29:10.998 ************************************ 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:29:10.998 14:45:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@472 -- # prepare_net_devs 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@434 -- # local -g is_hw=no 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@436 -- # remove_spdk_ns 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:10.998 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:10.998 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:10.998 14:45:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:10.998 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:10.998 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:10.999 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:10.999 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:10.999 14:45:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:10.999 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:29:10.999 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # is_hw=yes 00:29:10.999 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:29:10.999 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:29:10.999 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:29:10.999 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:10.999 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:10.999 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:10.999 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:10.999 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:10.999 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:10.999 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:10.999 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:10.999 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:10.999 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:10.999 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:10.999 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:10.999 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:10.999 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:10.999 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:11.259 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:11.259 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:11.259 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:11.259 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:11.259 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:29:11.259 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:11.259 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:11.259 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:11.259 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:11.259 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.408 ms 00:29:11.259 00:29:11.259 --- 10.0.0.2 ping statistics --- 00:29:11.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:11.259 rtt min/avg/max/mdev = 0.408/0.408/0.408/0.000 ms 00:29:11.259 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:11.259 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:11.259 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:29:11.259 00:29:11.259 --- 10.0.0.1 ping statistics --- 00:29:11.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:11.259 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:29:11.259 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:11.259 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # return 0 00:29:11.259 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:29:11.259 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:11.259 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:29:11.259 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:29:11.259 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:11.259 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:29:11.259 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:29:11.259 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:11.259 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:29:11.259 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:11.259 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:11.259 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@505 -- # nvmfpid=1460840 00:29:11.259 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@506 -- # waitforlisten 1460840 00:29:11.259 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:11.259 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 1460840 ']' 00:29:11.259 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:11.259 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:11.259 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:11.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:11.259 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:11.259 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:11.518 [2024-11-02 14:45:03.329359] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:29:11.518 [2024-11-02 14:45:03.329448] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:11.518 [2024-11-02 14:45:03.395036] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:11.518 [2024-11-02 14:45:03.487826] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:11.518 [2024-11-02 14:45:03.487894] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:11.518 [2024-11-02 14:45:03.487922] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:11.518 [2024-11-02 14:45:03.487933] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:11.518 [2024-11-02 14:45:03.487942] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
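
The netns bring-up traced above (nvmf/common.sh@250-291) reduces to a short sequence of iproute2 and iptables commands. The following condensed sketch restates it outside the test harness: the interface names cvl_0_0/cvl_0_1, the namespace name and the 10.0.0.0/24 addresses are the ones shown in the log, and the SPDK_NVMF comment tag that the harness's ipts helper attaches to the iptables rule is omitted here.

# Condensed sketch of the topology set up by nvmf_tcp_init in the trace above.
# Run as root; assumes the two back-to-back test ports already carry the names from the log.
TARGET_IF=cvl_0_0        # moved into a namespace, will own the target IP 10.0.0.2
INITIATOR_IF=cvl_0_1     # stays in the default namespace, will own 10.0.0.1
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"

ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP (port 4420) in from the initiator-facing port, then verify both directions.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

With that in place, nvmf_tgt is started inside the namespace via the NVMF_TARGET_NS_CMD prefix (nvmf/common.sh@266, @293), and waitforlisten blocks until the /var/tmp/spdk.sock RPC socket is accepting connections, as the startup notices that follow show.
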
00:29:11.518 [2024-11-02 14:45:03.488029] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:29:11.518 [2024-11-02 14:45:03.488095] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:29:11.518 [2024-11-02 14:45:03.488160] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:29:11.518 [2024-11-02 14:45:03.488163] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:11.779 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:11.779 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:29:11.779 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:29:11.779 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:11.779 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:11.779 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:11.779 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:11.779 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.779 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:11.779 [2024-11-02 14:45:03.649353] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:11.779 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.779 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:11.779 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:11.779 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:11.779 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:11.779 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:11.779 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:11.779 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:11.779 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:11.779 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:11.779 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:11.779 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:11.779 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:29:11.779 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:11.779 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:11.779 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:11.779 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:11.779 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:11.779 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:11.779 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:11.779 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:11.779 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:11.779 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:11.779 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:11.779 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:11.779 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:11.779 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:11.779 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.779 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:11.779 Malloc1 00:29:11.779 [2024-11-02 14:45:03.732416] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:11.779 Malloc2 00:29:11.779 Malloc3 00:29:12.038 Malloc4 00:29:12.038 Malloc5 00:29:12.038 Malloc6 00:29:12.038 Malloc7 00:29:12.038 Malloc8 00:29:12.297 Malloc9 00:29:12.297 Malloc10 00:29:12.297 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.297 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:12.297 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:12.297 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:12.297 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=1461020 00:29:12.297 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 1461020 /var/tmp/bdevperf.sock 00:29:12.297 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 1461020 ']' 00:29:12.297 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:12.297 14:45:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:12.297 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:12.297 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:12.297 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # config=() 00:29:12.297 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:12.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:12.297 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:12.297 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # local subsystem config 00:29:12.297 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:12.297 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:12.297 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:12.297 { 00:29:12.297 "params": { 00:29:12.297 "name": "Nvme$subsystem", 00:29:12.297 "trtype": "$TEST_TRANSPORT", 00:29:12.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:12.297 "adrfam": "ipv4", 00:29:12.297 "trsvcid": "$NVMF_PORT", 00:29:12.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:12.297 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:12.297 "hdgst": ${hdgst:-false}, 00:29:12.297 "ddgst": ${ddgst:-false} 00:29:12.297 }, 00:29:12.297 "method": "bdev_nvme_attach_controller" 00:29:12.297 } 00:29:12.297 EOF 00:29:12.297 )") 00:29:12.297 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:29:12.297 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:12.297 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:12.297 { 00:29:12.297 "params": { 00:29:12.297 "name": "Nvme$subsystem", 00:29:12.297 "trtype": "$TEST_TRANSPORT", 00:29:12.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:12.297 "adrfam": "ipv4", 00:29:12.297 "trsvcid": "$NVMF_PORT", 00:29:12.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:12.297 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:12.297 "hdgst": ${hdgst:-false}, 00:29:12.297 "ddgst": ${ddgst:-false} 00:29:12.297 }, 00:29:12.297 "method": "bdev_nvme_attach_controller" 00:29:12.297 } 00:29:12.297 EOF 00:29:12.297 )") 00:29:12.297 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:29:12.297 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:12.297 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:12.297 { 00:29:12.297 "params": { 00:29:12.297 
"name": "Nvme$subsystem", 00:29:12.297 "trtype": "$TEST_TRANSPORT", 00:29:12.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:12.297 "adrfam": "ipv4", 00:29:12.297 "trsvcid": "$NVMF_PORT", 00:29:12.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:12.297 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:12.297 "hdgst": ${hdgst:-false}, 00:29:12.297 "ddgst": ${ddgst:-false} 00:29:12.297 }, 00:29:12.297 "method": "bdev_nvme_attach_controller" 00:29:12.297 } 00:29:12.297 EOF 00:29:12.297 )") 00:29:12.297 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:29:12.297 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:12.297 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:12.297 { 00:29:12.297 "params": { 00:29:12.297 "name": "Nvme$subsystem", 00:29:12.297 "trtype": "$TEST_TRANSPORT", 00:29:12.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:12.297 "adrfam": "ipv4", 00:29:12.297 "trsvcid": "$NVMF_PORT", 00:29:12.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:12.297 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:12.297 "hdgst": ${hdgst:-false}, 00:29:12.297 "ddgst": ${ddgst:-false} 00:29:12.297 }, 00:29:12.297 "method": "bdev_nvme_attach_controller" 00:29:12.297 } 00:29:12.297 EOF 00:29:12.297 )") 00:29:12.297 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:29:12.297 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:12.297 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:12.297 { 00:29:12.297 "params": { 00:29:12.297 "name": "Nvme$subsystem", 00:29:12.297 "trtype": "$TEST_TRANSPORT", 00:29:12.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:12.297 "adrfam": "ipv4", 00:29:12.297 "trsvcid": "$NVMF_PORT", 00:29:12.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:12.297 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:12.297 "hdgst": ${hdgst:-false}, 00:29:12.297 "ddgst": ${ddgst:-false} 00:29:12.297 }, 00:29:12.297 "method": "bdev_nvme_attach_controller" 00:29:12.297 } 00:29:12.297 EOF 00:29:12.297 )") 00:29:12.297 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:29:12.297 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:12.297 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:12.297 { 00:29:12.297 "params": { 00:29:12.297 "name": "Nvme$subsystem", 00:29:12.297 "trtype": "$TEST_TRANSPORT", 00:29:12.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:12.297 "adrfam": "ipv4", 00:29:12.297 "trsvcid": "$NVMF_PORT", 00:29:12.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:12.297 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:12.297 "hdgst": ${hdgst:-false}, 00:29:12.297 "ddgst": ${ddgst:-false} 00:29:12.297 }, 00:29:12.297 "method": "bdev_nvme_attach_controller" 00:29:12.297 } 00:29:12.297 EOF 00:29:12.297 )") 00:29:12.297 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:29:12.297 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in 
"${@:-1}" 00:29:12.297 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:12.297 { 00:29:12.297 "params": { 00:29:12.297 "name": "Nvme$subsystem", 00:29:12.297 "trtype": "$TEST_TRANSPORT", 00:29:12.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:12.297 "adrfam": "ipv4", 00:29:12.297 "trsvcid": "$NVMF_PORT", 00:29:12.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:12.297 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:12.297 "hdgst": ${hdgst:-false}, 00:29:12.297 "ddgst": ${ddgst:-false} 00:29:12.297 }, 00:29:12.297 "method": "bdev_nvme_attach_controller" 00:29:12.297 } 00:29:12.297 EOF 00:29:12.297 )") 00:29:12.297 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:29:12.297 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:12.297 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:12.297 { 00:29:12.297 "params": { 00:29:12.297 "name": "Nvme$subsystem", 00:29:12.297 "trtype": "$TEST_TRANSPORT", 00:29:12.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:12.297 "adrfam": "ipv4", 00:29:12.297 "trsvcid": "$NVMF_PORT", 00:29:12.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:12.298 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:12.298 "hdgst": ${hdgst:-false}, 00:29:12.298 "ddgst": ${ddgst:-false} 00:29:12.298 }, 00:29:12.298 "method": "bdev_nvme_attach_controller" 00:29:12.298 } 00:29:12.298 EOF 00:29:12.298 )") 00:29:12.298 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:29:12.298 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:12.298 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:12.298 { 00:29:12.298 "params": { 00:29:12.298 "name": "Nvme$subsystem", 00:29:12.298 "trtype": "$TEST_TRANSPORT", 00:29:12.298 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:12.298 "adrfam": "ipv4", 00:29:12.298 "trsvcid": "$NVMF_PORT", 00:29:12.298 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:12.298 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:12.298 "hdgst": ${hdgst:-false}, 00:29:12.298 "ddgst": ${ddgst:-false} 00:29:12.298 }, 00:29:12.298 "method": "bdev_nvme_attach_controller" 00:29:12.298 } 00:29:12.298 EOF 00:29:12.298 )") 00:29:12.298 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:29:12.298 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:12.298 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:12.298 { 00:29:12.298 "params": { 00:29:12.298 "name": "Nvme$subsystem", 00:29:12.298 "trtype": "$TEST_TRANSPORT", 00:29:12.298 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:12.298 "adrfam": "ipv4", 00:29:12.298 "trsvcid": "$NVMF_PORT", 00:29:12.298 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:12.298 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:12.298 "hdgst": ${hdgst:-false}, 00:29:12.298 "ddgst": ${ddgst:-false} 00:29:12.298 }, 00:29:12.298 "method": "bdev_nvme_attach_controller" 00:29:12.298 } 00:29:12.298 EOF 00:29:12.298 )") 00:29:12.298 14:45:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:29:12.298 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # jq . 00:29:12.298 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@581 -- # IFS=, 00:29:12.298 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:29:12.298 "params": { 00:29:12.298 "name": "Nvme1", 00:29:12.298 "trtype": "tcp", 00:29:12.298 "traddr": "10.0.0.2", 00:29:12.298 "adrfam": "ipv4", 00:29:12.298 "trsvcid": "4420", 00:29:12.298 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:12.298 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:12.298 "hdgst": false, 00:29:12.298 "ddgst": false 00:29:12.298 }, 00:29:12.298 "method": "bdev_nvme_attach_controller" 00:29:12.298 },{ 00:29:12.298 "params": { 00:29:12.298 "name": "Nvme2", 00:29:12.298 "trtype": "tcp", 00:29:12.298 "traddr": "10.0.0.2", 00:29:12.298 "adrfam": "ipv4", 00:29:12.298 "trsvcid": "4420", 00:29:12.298 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:12.298 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:12.298 "hdgst": false, 00:29:12.298 "ddgst": false 00:29:12.298 }, 00:29:12.298 "method": "bdev_nvme_attach_controller" 00:29:12.298 },{ 00:29:12.298 "params": { 00:29:12.298 "name": "Nvme3", 00:29:12.298 "trtype": "tcp", 00:29:12.298 "traddr": "10.0.0.2", 00:29:12.298 "adrfam": "ipv4", 00:29:12.298 "trsvcid": "4420", 00:29:12.298 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:12.298 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:12.298 "hdgst": false, 00:29:12.298 "ddgst": false 00:29:12.298 }, 00:29:12.298 "method": "bdev_nvme_attach_controller" 00:29:12.298 },{ 00:29:12.298 "params": { 00:29:12.298 "name": "Nvme4", 00:29:12.298 "trtype": "tcp", 00:29:12.298 "traddr": "10.0.0.2", 00:29:12.298 "adrfam": "ipv4", 00:29:12.298 "trsvcid": "4420", 00:29:12.298 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:12.298 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:12.298 "hdgst": false, 00:29:12.298 "ddgst": false 00:29:12.298 }, 00:29:12.298 "method": "bdev_nvme_attach_controller" 00:29:12.298 },{ 00:29:12.298 "params": { 00:29:12.298 "name": "Nvme5", 00:29:12.298 "trtype": "tcp", 00:29:12.298 "traddr": "10.0.0.2", 00:29:12.298 "adrfam": "ipv4", 00:29:12.298 "trsvcid": "4420", 00:29:12.298 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:12.298 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:12.298 "hdgst": false, 00:29:12.298 "ddgst": false 00:29:12.298 }, 00:29:12.298 "method": "bdev_nvme_attach_controller" 00:29:12.298 },{ 00:29:12.298 "params": { 00:29:12.298 "name": "Nvme6", 00:29:12.298 "trtype": "tcp", 00:29:12.298 "traddr": "10.0.0.2", 00:29:12.298 "adrfam": "ipv4", 00:29:12.298 "trsvcid": "4420", 00:29:12.298 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:12.298 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:12.298 "hdgst": false, 00:29:12.298 "ddgst": false 00:29:12.298 }, 00:29:12.298 "method": "bdev_nvme_attach_controller" 00:29:12.298 },{ 00:29:12.298 "params": { 00:29:12.298 "name": "Nvme7", 00:29:12.298 "trtype": "tcp", 00:29:12.298 "traddr": "10.0.0.2", 00:29:12.298 "adrfam": "ipv4", 00:29:12.298 "trsvcid": "4420", 00:29:12.298 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:12.298 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:12.298 "hdgst": false, 00:29:12.298 "ddgst": false 00:29:12.298 }, 00:29:12.298 "method": "bdev_nvme_attach_controller" 00:29:12.298 },{ 00:29:12.298 "params": { 00:29:12.298 "name": "Nvme8", 00:29:12.298 "trtype": "tcp", 
00:29:12.298 "traddr": "10.0.0.2", 00:29:12.298 "adrfam": "ipv4", 00:29:12.298 "trsvcid": "4420", 00:29:12.298 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:12.298 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:12.298 "hdgst": false, 00:29:12.298 "ddgst": false 00:29:12.298 }, 00:29:12.298 "method": "bdev_nvme_attach_controller" 00:29:12.298 },{ 00:29:12.298 "params": { 00:29:12.298 "name": "Nvme9", 00:29:12.298 "trtype": "tcp", 00:29:12.298 "traddr": "10.0.0.2", 00:29:12.298 "adrfam": "ipv4", 00:29:12.298 "trsvcid": "4420", 00:29:12.298 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:12.298 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:12.298 "hdgst": false, 00:29:12.298 "ddgst": false 00:29:12.298 }, 00:29:12.298 "method": "bdev_nvme_attach_controller" 00:29:12.298 },{ 00:29:12.298 "params": { 00:29:12.298 "name": "Nvme10", 00:29:12.298 "trtype": "tcp", 00:29:12.298 "traddr": "10.0.0.2", 00:29:12.298 "adrfam": "ipv4", 00:29:12.298 "trsvcid": "4420", 00:29:12.298 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:12.298 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:12.298 "hdgst": false, 00:29:12.298 "ddgst": false 00:29:12.298 }, 00:29:12.298 "method": "bdev_nvme_attach_controller" 00:29:12.298 }' 00:29:12.298 [2024-11-02 14:45:04.241356] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:29:12.298 [2024-11-02 14:45:04.241444] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1461020 ] 00:29:12.298 [2024-11-02 14:45:04.307768] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:12.556 [2024-11-02 14:45:04.395611] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:14.462 Running I/O for 10 seconds... 
00:29:14.462 14:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:14.462 14:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:29:14.462 14:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:14.462 14:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.462 14:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:14.462 14:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.462 14:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:14.462 14:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:14.462 14:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:14.462 14:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:29:14.462 14:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:29:14.462 14:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:29:14.462 14:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:29:14.462 14:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:14.462 14:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:14.462 14:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.462 14:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:14.462 14:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:14.462 14:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.462 14:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:29:14.462 14:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:29:14.462 14:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:29:14.721 14:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:29:14.721 14:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:14.721 14:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:14.721 14:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:14.721 14:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.721 14:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:14.721 14:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.721 14:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:29:14.721 14:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:29:14.721 14:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:29:14.995 14:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:29:14.995 14:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:14.995 14:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:14.995 14:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:14.995 14:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.995 14:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:14.995 14:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.995 14:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:29:14.995 14:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:29:14.995 14:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:29:14.995 14:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:29:14.995 14:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:29:14.995 14:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 1460840 00:29:14.995 14:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 1460840 ']' 00:29:14.995 14:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 1460840 00:29:14.995 14:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:29:14.995 14:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:14.995 14:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1460840 00:29:14.995 14:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:14.995 14:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:14.995 14:45:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1460840' 00:29:14.995 killing process with pid 1460840 00:29:14.995 14:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 1460840 00:29:14.995 14:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 1460840 00:29:14.995 [2024-11-02 14:45:06.942368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.995 [2024-11-02 14:45:06.942445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.995 [2024-11-02 14:45:06.942487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.995 [2024-11-02 14:45:06.942513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.995 [2024-11-02 14:45:06.942540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.995 [2024-11-02 14:45:06.942574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.995 [2024-11-02 14:45:06.942600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.995 [2024-11-02 14:45:06.942629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.995 [2024-11-02 14:45:06.942656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.995 [2024-11-02 14:45:06.942680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.995 [2024-11-02 14:45:06.942707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.995 [2024-11-02 14:45:06.942743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.995 [2024-11-02 14:45:06.942770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.995 [2024-11-02 14:45:06.942794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.995 [2024-11-02 14:45:06.942821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.995 [2024-11-02 14:45:06.942846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.995 [2024-11-02 14:45:06.942871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.995 [2024-11-02 14:45:06.942892] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.996 [2024-11-02 14:45:06.942918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.996 [2024-11-02 14:45:06.942941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.996 [2024-11-02 14:45:06.942967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.996 [2024-11-02 14:45:06.942989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.996 [2024-11-02 14:45:06.943015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.996 [2024-11-02 14:45:06.943037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.996 [2024-11-02 14:45:06.943062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.996 [2024-11-02 14:45:06.943085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.996 [2024-11-02 14:45:06.943113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.996 [2024-11-02 14:45:06.943137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.996 [2024-11-02 14:45:06.943164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.996 [2024-11-02 14:45:06.943202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.996 [2024-11-02 14:45:06.943227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.996 [2024-11-02 14:45:06.943274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.996 [2024-11-02 14:45:06.943311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.996 [2024-11-02 14:45:06.943336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.996 [2024-11-02 14:45:06.943363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.996 [2024-11-02 14:45:06.943388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.996 [2024-11-02 14:45:06.943421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.996 [2024-11-02 14:45:06.943446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.996 [2024-11-02 14:45:06.943474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.996 [2024-11-02 14:45:06.943498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.996 [2024-11-02 14:45:06.943526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.996 [2024-11-02 14:45:06.943562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.996 [2024-11-02 14:45:06.943603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.996 [2024-11-02 14:45:06.943631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.996 [2024-11-02 14:45:06.943658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.996 [2024-11-02 14:45:06.943682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.996 [2024-11-02 14:45:06.943709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.996 [2024-11-02 14:45:06.943732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.996 [2024-11-02 14:45:06.943758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.996 [2024-11-02 14:45:06.943782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.996 [2024-11-02 14:45:06.943807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.996 [2024-11-02 14:45:06.943832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.996 [2024-11-02 14:45:06.943857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.996 [2024-11-02 14:45:06.943881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.996 [2024-11-02 14:45:06.943906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.996 [2024-11-02 14:45:06.943929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.996 [2024-11-02 14:45:06.943955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.996 [2024-11-02 14:45:06.943979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.996 [2024-11-02 14:45:06.944005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.996 [2024-11-02 14:45:06.944029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.996 [2024-11-02 14:45:06.944055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.996 [2024-11-02 14:45:06.944084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.996 [2024-11-02 14:45:06.944111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.996 [2024-11-02 14:45:06.944135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.996 [2024-11-02 14:45:06.944161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.996 [2024-11-02 14:45:06.944183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.996 [2024-11-02 14:45:06.944211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.996 [2024-11-02 14:45:06.944198] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db390 is same with the state(6) to be set 00:29:14.996 [2024-11-02 14:45:06.944236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.996 [2024-11-02 14:45:06.944245] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db390 is same with the state(6) to be set 00:29:14.996 [2024-11-02 14:45:06.944288] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db390 is same with the state(6) to be set 00:29:14.996 [2024-11-02 14:45:06.944287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.996 [2024-11-02 14:45:06.944314] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db390 is same with the state(6) to be set 00:29:14.996 [2024-11-02 14:45:06.944313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.996 [2024-11-02 14:45:06.944328] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db390 is same with the state(6) to be set 00:29:14.996 [2024-11-02 14:45:06.944341] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db390 is same with the state(6) to be set 00:29:14.996 [2024-11-02 14:45:06.944342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.996 [2024-11-02 14:45:06.944353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db390 is same with the state(6) to be set 00:29:14.996 [2024-11-02 
14:45:06.944366] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db390 is same with the state(6) to be set 00:29:14.996
[... the same recv-state error for tqpair=0x11db390 repeats from 14:45:06.944378 through 14:45:06.945117, interleaved with the notices that follow ...]
[2024-11-02 14:45:06.944397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.996
[2024-11-02 14:45:06.944424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.996
[... WRITE commands sqid:1 cid:50 through cid:61 (nsid:1, lba:30976 through lba:32384, len:128) and their ABORTED - SQ DELETION (00/08) completions follow in the same pattern between 14:45:06.944459 and 14:45:06.945097 ...]
[2024-11-02 14:45:06.945125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.997
[2024-11-02 14:45:06.945149] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.997 [2024-11-02 14:45:06.945175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.997 [2024-11-02 14:45:06.945198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.997 [2024-11-02 14:45:06.945225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.997 [2024-11-02 14:45:06.945263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.997 [2024-11-02 14:45:06.945320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.997 [2024-11-02 14:45:06.945346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.997 [2024-11-02 14:45:06.945373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.997 [2024-11-02 14:45:06.945396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.997 [2024-11-02 14:45:06.945424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.997 [2024-11-02 14:45:06.945448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.997 [2024-11-02 14:45:06.945476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.997 [2024-11-02 14:45:06.945499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.997 [2024-11-02 14:45:06.945526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.997 [2024-11-02 14:45:06.945562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.998 [2024-11-02 14:45:06.945604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.998 [2024-11-02 14:45:06.945635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.998 [2024-11-02 14:45:06.945660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.998 [2024-11-02 14:45:06.945684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.998 [2024-11-02 14:45:06.945709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.998 [2024-11-02 14:45:06.945733] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.998 [2024-11-02 14:45:06.945758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.998 [2024-11-02 14:45:06.945784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.998 [2024-11-02 14:45:06.945809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.998 [2024-11-02 14:45:06.945834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.998 [2024-11-02 14:45:06.945859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.998 [2024-11-02 14:45:06.945884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.998 [2024-11-02 14:45:06.945908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.998 [2024-11-02 14:45:06.945932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.998 [2024-11-02 14:45:06.946055] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x19e32d0 was disconnected and freed. reset controller. 00:29:14.998 [2024-11-02 14:45:06.947276] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d8cb0 is same with the state(6) to be set 00:29:14.998 [2024-11-02 14:45:06.947313] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d8cb0 is same with the state(6) to be set 00:29:14.998 [2024-11-02 14:45:06.947327] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d8cb0 is same with the state(6) to be set 00:29:14.998 [2024-11-02 14:45:06.947339] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d8cb0 is same with the state(6) to be set 00:29:14.998 [2024-11-02 14:45:06.947351] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d8cb0 is same with the state(6) to be set 00:29:14.998 [2024-11-02 14:45:06.947364] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d8cb0 is same with the state(6) to be set 00:29:14.998 [2024-11-02 14:45:06.947377] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d8cb0 is same with the state(6) to be set 00:29:14.998 [2024-11-02 14:45:06.947389] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d8cb0 is same with the state(6) to be set 00:29:14.998 [2024-11-02 14:45:06.947401] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d8cb0 is same with the state(6) to be set 00:29:14.998 [2024-11-02 14:45:06.947412] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d8cb0 is same with the state(6) to be set 00:29:14.998 [2024-11-02 14:45:06.947424] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d8cb0 is same with the state(6) to be set 00:29:14.998 [2024-11-02 14:45:06.947437] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x11d8cb0 is same with the state(6) to be set 00:29:14.998 [2024-11-02 14:45:06.947449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d8cb0 is same with the state(6) to be set 00:29:14.998 [2024-11-02 14:45:06.947460] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d8cb0 is same with the state(6) to be set 00:29:14.998 [2024-11-02 14:45:06.947472] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d8cb0 is same with the state(6) to be set 00:29:14.998 [2024-11-02 14:45:06.947484] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d8cb0 is same with the state(6) to be set 00:29:14.998 [2024-11-02 14:45:06.947496] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d8cb0 is same with the state(6) to be set 00:29:14.998 [2024-11-02 14:45:06.947508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d8cb0 is same with the state(6) to be set 00:29:14.998 [2024-11-02 14:45:06.947519] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d8cb0 is same with the state(6) to be set 00:29:14.998 [2024-11-02 14:45:06.947531] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d8cb0 is same with the state(6) to be set 00:29:14.998 [2024-11-02 14:45:06.947553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d8cb0 is same with the state(6) to be set 00:29:14.998 [2024-11-02 14:45:06.947580] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d8cb0 is same with the state(6) to be set 00:29:14.998 [2024-11-02 14:45:06.947592] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d8cb0 is same with the state(6) to be set 00:29:14.998 [2024-11-02 14:45:06.947603] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d8cb0 is same with the state(6) to be set 00:29:14.998 [2024-11-02 14:45:06.947617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d8cb0 is same with the state(6) to be set 00:29:14.998 [2024-11-02 14:45:06.947629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d8cb0 is same with the state(6) to be set 00:29:14.998 [2024-11-02 14:45:06.947645] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d8cb0 is same with the state(6) to be set 00:29:14.998 [2024-11-02 14:45:06.947658] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d8cb0 is same with the state(6) to be set 00:29:14.998 [2024-11-02 14:45:06.947670] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d8cb0 is same with the state(6) to be set 00:29:14.998 [2024-11-02 14:45:06.947682] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d8cb0 is same with the state(6) to be set 00:29:14.998 [2024-11-02 14:45:06.947693] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d8cb0 is same with the state(6) to be set 00:29:14.998 [2024-11-02 14:45:06.947705] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d8cb0 is same with the state(6) to be set 00:29:14.998 [2024-11-02 14:45:06.947716] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d8cb0 is same with the state(6) to be set 00:29:14.998 [2024-11-02 
14:45:06.947728] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d8cb0 is same with the state(6) to be set 00:29:14.998 [2024-11-02 14:45:06.947740] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d8cb0 is same with the state(6) to be set 00:29:14.998 [2024-11-02 14:45:06.947752] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d8cb0 is same with the state(6) to be set 00:29:14.998 [2024-11-02 14:45:06.947764] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d8cb0 is same with the state(6) to be set 00:29:14.998 [2024-11-02 14:45:06.947776] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d8cb0 is same with the state(6) to be set 00:29:14.998 [2024-11-02 14:45:06.947787] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d8cb0 is same with the state(6) to be set 00:29:14.998 [2024-11-02 14:45:06.947798] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d8cb0 is same with the state(6) to be set 00:29:14.998 [2024-11-02 14:45:06.947810] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d8cb0 is same with the state(6) to be set 00:29:14.998 [2024-11-02 14:45:06.947822] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d8cb0 is same with the state(6) to be set 00:29:14.998 [2024-11-02 14:45:06.947833] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d8cb0 is same with the state(6) to be set 00:29:14.998 [2024-11-02 14:45:06.947845] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d8cb0 is same with the state(6) to be set 00:29:14.998 [2024-11-02 14:45:06.947856] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d8cb0 is same with the state(6) to be set 00:29:14.998 [2024-11-02 14:45:06.947867] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d8cb0 is same with the state(6) to be set 00:29:14.998 [2024-11-02 14:45:06.947879] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d8cb0 is same with the state(6) to be set 00:29:14.998 [2024-11-02 14:45:06.947890] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d8cb0 is same with the state(6) to be set 00:29:14.998 [2024-11-02 14:45:06.947902] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d8cb0 is same with the state(6) to be set 00:29:14.998 [2024-11-02 14:45:06.947913] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d8cb0 is same with the state(6) to be set 00:29:14.998 [2024-11-02 14:45:06.947924] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d8cb0 is same with the state(6) to be set 00:29:14.998 [2024-11-02 14:45:06.947935] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d8cb0 is same with the state(6) to be set 00:29:14.998 [2024-11-02 14:45:06.947947] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d8cb0 is same with the state(6) to be set 00:29:14.998 [2024-11-02 14:45:06.947962] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d8cb0 is same with the state(6) to be set 00:29:14.998 [2024-11-02 14:45:06.947974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d8cb0 is same 
with the state(6) to be set 00:29:14.998 [2024-11-02 14:45:06.947986] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d8cb0 is same with the state(6) to be set 00:29:14.998 [2024-11-02 14:45:06.947998] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d8cb0 is same with the state(6) to be set 00:29:14.998 [2024-11-02 14:45:06.948009] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d8cb0 is same with the state(6) to be set 00:29:14.998 [2024-11-02 14:45:06.948021] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d8cb0 is same with the state(6) to be set 00:29:14.998 [2024-11-02 14:45:06.948033] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d8cb0 is same with the state(6) to be set 00:29:14.998 [2024-11-02 14:45:06.949030] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:14.998 [2024-11-02 14:45:06.949127] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b5d40 (9): Bad file descriptor 00:29:14.998 [2024-11-02 14:45:06.949215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:14.998 [2024-11-02 14:45:06.949247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.998 [2024-11-02 14:45:06.949283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:14.998 [2024-11-02 14:45:06.949311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.998 [2024-11-02 14:45:06.949336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:14.998 [2024-11-02 14:45:06.949360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.998 [2024-11-02 14:45:06.949385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:14.999 [2024-11-02 14:45:06.949408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.999 [2024-11-02 14:45:06.949433] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x192d3f0 is same with the state(6) to be set 00:29:14.999 [2024-11-02 14:45:06.949531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:14.999 [2024-11-02 14:45:06.949568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.999 [2024-11-02 14:45:06.949595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:14.999 [2024-11-02 14:45:06.949618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.999 [2024-11-02 14:45:06.949643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 
cdw10:00000000 cdw11:00000000 00:29:14.999
[2024-11-02 14:45:06.949666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.999
[2024-11-02 14:45:06.949689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:14.999
[2024-11-02 14:45:06.949714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.999
[2024-11-02 14:45:06.949735] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ab530 is same with the state(6) to be set 00:29:14.999
[... the same recv-state error for tqpair=0x11d9180 repeats from 14:45:06.949852 through 14:45:06.950254, interleaved with the notices that follow ...]
[2024-11-02 14:45:06.949831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:14.999
[2024-11-02 14:45:06.949861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.999
[2024-11-02 14:45:06.949890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:14.999
[2024-11-02 14:45:06.949913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.999
[2024-11-02 14:45:06.949940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:14.999
[2024-11-02 14:45:06.949964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.999
[2024-11-02 14:45:06.949993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:14.999
[2024-11-02 14:45:06.950017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.999
[2024-11-02 14:45:06.950043] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b58c0 is same with the state(6) to be set 00:29:14.999
[2024-11-02 14:45:06.950276] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d9180 is same with the state(6) to be set 00:29:14.999 [2024-11-02 14:45:06.950405] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d9180 is same with the state(6) to be set 00:29:14.999 [2024-11-02 14:45:06.950423] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d9180 is same with the state(6) to be set 00:29:14.999 [2024-11-02 14:45:06.950436] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d9180 is same with the state(6) to be set 00:29:14.999 [2024-11-02 14:45:06.950448] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d9180 is same with the state(6) to be set 00:29:14.999 [2024-11-02 14:45:06.950461] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d9180 is same with the state(6) to be set 00:29:14.999 [2024-11-02 14:45:06.950473] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d9180 is same with the state(6) to be set 00:29:14.999 [2024-11-02 14:45:06.950485] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d9180 is same with the state(6) to be set 00:29:14.999 [2024-11-02 14:45:06.950496] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d9180 is same with the state(6) to be set 00:29:14.999 [2024-11-02 14:45:06.950508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d9180 is same with the state(6) to be set 00:29:14.999 [2024-11-02 14:45:06.950520] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d9180 is same with the state(6) to be set 00:29:14.999 [2024-11-02 14:45:06.950531] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d9180 is same with the state(6) to be set 00:29:14.999 [2024-11-02 14:45:06.950544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d9180 is same with the state(6) to be set 00:29:14.999 [2024-11-02 14:45:06.950563] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d9180 is same with the state(6) to be set 00:29:14.999 [2024-11-02 14:45:06.950575] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d9180 is same with the state(6) to be set 00:29:14.999 [2024-11-02 14:45:06.950591] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d9180 is same with the state(6) to be set 00:29:14.999 [2024-11-02 14:45:06.950603] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d9180 is same with the state(6) to be set 00:29:14.999 [2024-11-02 14:45:06.950615] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d9180 is same with the state(6) to be set 00:29:14.999 [2024-11-02 14:45:06.950627] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d9180 is same with the state(6) to be set 00:29:14.999 [2024-11-02 14:45:06.950639] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d9180 is same with the state(6) to be set 00:29:14.999 [2024-11-02 14:45:06.950650] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d9180 is same with the state(6) to be set 00:29:14.999 [2024-11-02 14:45:06.950662] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d9180 is same with the 
state(6) to be set 00:29:14.999
[... the same recv-state error for tqpair=0x11d9180 repeats from 14:45:06.950673 through 14:45:06.950803 ...]
[2024-11-02 14:45:06.952069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.000
[2024-11-02 14:45:06.952110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b5d40 with addr=10.0.0.2, port=4420 00:29:15.000
[2024-11-02 14:45:06.952138] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b5d40 is same with the state(6) to be set 00:29:15.000
[2024-11-02 14:45:06.952701] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d9670 is same with the state(6) to be set 00:29:15.000
[2024-11-02 14:45:06.952899] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:15.000
[2024-11-02 14:45:06.952938] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d9670 is same with the state(6) to be set 00:29:15.000
[2024-11-02 14:45:06.952949] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b5d40 (9): Bad file descriptor 00:29:15.000
[2024-11-02 14:45:06.952967] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d9670 is same with the state(6) to be set 00:29:15.000
[2024-11-02 14:45:06.952992] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d9670 is same with the state(6) to be set 00:29:15.000
[2024-11-02 14:45:06.953049] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d9670 is same with the state(6) to be set 00:29:15.000
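An aside on two values in the messages just above, not part of the test output itself: errno 111 from posix_sock_create is ECONNREFUSED on Linux, i.e. nothing was accepting TCP connections on the target address at that moment, which is consistent with the controller reset the test is driving; and the (00/08) printed with the aborted completions reads as the status code type / status code pair, 0x0/0x08, Command Aborted due to SQ Deletion in the NVMe base specification. The sketch below is illustrative only and makes assumptions beyond the log: it reuses the traddr and trsvcid reported by nvme_tcp_qpair_connect_sock (10.0.0.2, port 4420) and shows how such a refused connection surfaces in a plain TCP client.

# Illustrative sketch only, not part of the SPDK test. The address and port are
# taken from the log above; everything else is an assumption for demonstration.
import errno
import socket

TRADDR, TRSVCID = "10.0.0.2", 4420   # traddr/trsvcid seen in the connect error

# errno 111 on Linux is ECONNREFUSED: no listener accepted the TCP connection.
print("ECONNREFUSED =", errno.ECONNREFUSED)

try:
    # A refused connect raises ConnectionRefusedError carrying the same errno
    # that posix_sock_create reported in the log.
    with socket.create_connection((TRADDR, TRSVCID), timeout=2):
        print("listener on %s:%d is accepting connections" % (TRADDR, TRSVCID))
except ConnectionRefusedError as e:
    print("connection refused, errno", e.errno)
except OSError as e:
    print("socket error:", e)

A refused probe like this would line up with the reconnect attempts and recv-state errors that continue in the log below.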
[2024-11-02 14:45:06.953080] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d9670 is same with the state(6) to be set 00:29:15.000 [2024-11-02 14:45:06.953098] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d9670 is same with the state(6) to be set 00:29:15.000 [2024-11-02 14:45:06.953110] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d9670 is same with the state(6) to be set 00:29:15.000 [2024-11-02 14:45:06.953123] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d9670 is same with the state(6) to be set 00:29:15.000 [2024-11-02 14:45:06.953135] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d9670 is same with the state(6) to be set 00:29:15.000 [2024-11-02 14:45:06.953147] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d9670 is same with the state(6) to be set 00:29:15.000 [2024-11-02 14:45:06.953158] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d9670 is same with the state(6) to be set 00:29:15.000 [2024-11-02 14:45:06.953170] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d9670 is same with the state(6) to be set 00:29:15.000 [2024-11-02 14:45:06.953183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d9670 is same with the state(6) to be set 00:29:15.000 [2024-11-02 14:45:06.953195] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d9670 is same with the state(6) to be set 00:29:15.000 [2024-11-02 14:45:06.953206] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d9670 is same with the state(6) to be set 00:29:15.000 [2024-11-02 14:45:06.953218] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d9670 is same with the state(6) to be set 00:29:15.000 [2024-11-02 14:45:06.953230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d9670 is same with the state(6) to be set 00:29:15.000 [2024-11-02 14:45:06.953242] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d9670 is same with the state(6) to be set 00:29:15.000 [2024-11-02 14:45:06.953254] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d9670 is same with the state(6) to be set 00:29:15.000 [2024-11-02 14:45:06.953277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d9670 is same with the state(6) to be set 00:29:15.000 [2024-11-02 14:45:06.953289] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d9670 is same with the state(6) to be set 00:29:15.000 [2024-11-02 14:45:06.953305] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d9670 is same with the state(6) to be set 00:29:15.000 [2024-11-02 14:45:06.953317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d9670 is same with the state(6) to be set 00:29:15.000 [2024-11-02 14:45:06.953329] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d9670 is same with the state(6) to be set 00:29:15.000 [2024-11-02 14:45:06.953415] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d9670 is same with the state(6) to be set 00:29:15.000 [2024-11-02 14:45:06.953431] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x11d9670 is same with the state(6) to be set 00:29:15.000
[... the same recv-state error for tqpair=0x11d9670 repeats from 14:45:06.953444 through 14:45:06.954132, interleaved with the notices that follow ...]
[2024-11-02 14:45:06.953603] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.000
[2024-11-02 14:45:06.953634] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.000
[2024-11-02 14:45:06.953666] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.000
[2024-11-02 14:45:06.953740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.000
[2024-11-02 14:45:06.953774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.000
[... WRITE commands sqid:1 cid:59 through cid:62 (nsid:1, lba:32128 through lba:32512, len:128) and their ABORTED - SQ DELETION (00/08) completions follow in the same pattern between 14:45:06.953810 and 14:45:06.954020 ...]
[2024-11-02 14:45:06.954064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.001
[2024-11-02 14:45:06.954089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.001
[2024-11-02 14:45:06.954120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.001
[2024-11-02 14:45:06.954144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.001
[... READ commands sqid:1 cid:1 through cid:4 (nsid:1, lba:24704 through lba:25088, len:128) and their ABORTED - SQ DELETION (00/08) completions follow in the same pattern between 14:45:06.954171 and 14:45:06.954389 ...]
[2024-11-02 14:45:06.954416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.001
[2024-11-02 14:45:06.954441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:29:15.001 [2024-11-02 14:45:06.954468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.001 [2024-11-02 14:45:06.954492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.001 [2024-11-02 14:45:06.954517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.001 [2024-11-02 14:45:06.954542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.001 [2024-11-02 14:45:06.954568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.001 [2024-11-02 14:45:06.954594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.001 [2024-11-02 14:45:06.954636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.001 [2024-11-02 14:45:06.954661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.001 [2024-11-02 14:45:06.954687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.001 [2024-11-02 14:45:06.954713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.001 [2024-11-02 14:45:06.954739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.001 [2024-11-02 14:45:06.954769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.001 [2024-11-02 14:45:06.954798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.001 [2024-11-02 14:45:06.954822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.001 [2024-11-02 14:45:06.954849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.001 [2024-11-02 14:45:06.954873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.001 [2024-11-02 14:45:06.954900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.001 [2024-11-02 14:45:06.954925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.001 [2024-11-02 14:45:06.954952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.001 [2024-11-02 14:45:06.954982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:15.001
[2024-11-02 14:45:06.955009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.001
[2024-11-02 14:45:06.955033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.001
[2024-11-02 14:45:06.955060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.001
[2024-11-02 14:45:06.955065] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d9b40 is same with the state(6) to be set 00:29:15.001 (message repeated at successive timestamps through 14:45:06.955958)
[2024-11-02 14:45:06.955084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.001
[2024-11-02 14:45:06.955111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.001
[2024-11-02 14:45:06.955136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.001
[2024-11-02 14:45:06.955165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.001
[2024-11-02 14:45:06.955191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.001
[2024-11-02 14:45:06.955219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.001
[2024-11-02 14:45:06.955268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.001
[2024-11-02 14:45:06.955314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.001
[2024-11-02 14:45:06.955342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.001
[2024-11-02 14:45:06.955372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.001
[2024-11-02 14:45:06.955398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.002
[2024-11-02 14:45:06.955428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.002
[2024-11-02 14:45:06.955455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.002
[2024-11-02 14:45:06.955483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.002
[2024-11-02 14:45:06.955510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.002
[2024-11-02 14:45:06.955539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.002
[2024-11-02 14:45:06.955592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.002
[2024-11-02 14:45:06.955623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.002
[2024-11-02 14:45:06.955649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.002
[2024-11-02 14:45:06.955676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.002
[2024-11-02 14:45:06.955712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.002
[2024-11-02 14:45:06.955741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.002
[2024-11-02 14:45:06.955767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.002
[2024-11-02 14:45:06.955796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.002
[2024-11-02 14:45:06.955828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.002
[2024-11-02 14:45:06.955859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.002
[2024-11-02 14:45:06.955884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.002
[2024-11-02 14:45:06.955912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.002
[2024-11-02 14:45:06.955938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.002
[2024-11-02 14:45:06.955967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.002
[2024-11-02 14:45:06.955992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.002
[2024-11-02 14:45:06.956019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.002
[2024-11-02 14:45:06.956043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.002
[2024-11-02 14:45:06.956085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.002
[2024-11-02 14:45:06.956110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.002
[2024-11-02 14:45:06.956137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.002
[2024-11-02 14:45:06.956162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.002
[2024-11-02 14:45:06.956197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.002
[2024-11-02 14:45:06.956228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0
dnr:0 00:29:15.002 [2024-11-02 14:45:06.956262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.002 [2024-11-02 14:45:06.956290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.002 [2024-11-02 14:45:06.956320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.002 [2024-11-02 14:45:06.956345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.002 [2024-11-02 14:45:06.956372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.002 [2024-11-02 14:45:06.956397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.002 [2024-11-02 14:45:06.956424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.002 [2024-11-02 14:45:06.956448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.002 [2024-11-02 14:45:06.956475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.002 [2024-11-02 14:45:06.956499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.002 [2024-11-02 14:45:06.956527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.002 [2024-11-02 14:45:06.956561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.003 [2024-11-02 14:45:06.956589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.003 [2024-11-02 14:45:06.956633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.003 [2024-11-02 14:45:06.956661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.003 [2024-11-02 14:45:06.956685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.003 [2024-11-02 14:45:06.956711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.003 [2024-11-02 14:45:06.956734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.003 [2024-11-02 14:45:06.956761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.003 [2024-11-02 14:45:06.956784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.003 
[2024-11-02 14:45:06.956811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.003
[2024-11-02 14:45:06.956836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.003
[2024-11-02 14:45:06.956863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.003
[2024-11-02 14:45:06.956885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.003
[2024-11-02 14:45:06.956918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.003
[2024-11-02 14:45:06.956943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.003
[2024-11-02 14:45:06.956969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.003
[2024-11-02 14:45:06.956992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.003
[2024-11-02 14:45:06.957017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.003
[2024-11-02 14:45:06.957040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.003
[2024-11-02 14:45:06.957074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.003
[2024-11-02 14:45:06.957098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.003
[2024-11-02 14:45:06.957124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.003
[2024-11-02 14:45:06.957148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.003
[2024-11-02 14:45:06.957174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.003
[2024-11-02 14:45:06.957198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.003
[2024-11-02 14:45:06.957220] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11da030 is same with the state(6) to be set 00:29:15.003 (message repeated at successive timestamps through 14:45:06.958125)
[2024-11-02 14:45:06.957224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.003
[2024-11-02 14:45:06.957276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.003
[2024-11-02 14:45:06.957314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.003
[2024-11-02 14:45:06.957342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.003
[2024-11-02 14:45:06.957372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.003
[2024-11-02 14:45:06.957406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.003
[2024-11-02 14:45:06.957433] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e4470 is same with the state(6) to be set 00:29:15.003
[2024-11-02 14:45:06.957525] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x19e4470 was disconnected and freed. reset controller. 00:29:15.003
[2024-11-02 14:45:06.958123] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.004
[2024-11-02 14:45:06.959579] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11da500 is same with the state(6) to be set 00:29:15.004 (message repeated at successive timestamps through 14:45:06.960349)
[2024-11-02 14:45:06.960141] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:29:15.004
[2024-11-02 14:45:06.960187] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ab530 (9): Bad file descriptor 00:29:15.004
[2024-11-02 14:45:06.960244] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x192d3f0 (9): Bad file descriptor 00:29:15.004
[2024-11-02 14:45:06.960364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.005
[2024-11-02 14:45:06.960396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.005
[2024-11-02 14:45:06.960421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.005
[2024-11-02 14:45:06.960445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.005
[2024-11-02 14:45:06.960470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.005
[2024-11-02 14:45:06.960494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED -
SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.005 [2024-11-02 14:45:06.960518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.005 [2024-11-02 14:45:06.960540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.005 [2024-11-02 14:45:06.960567] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b34b0 is same with the state(6) to be set 00:29:15.005 [2024-11-02 14:45:06.960645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.005 [2024-11-02 14:45:06.960675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.005 [2024-11-02 14:45:06.960701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.005 [2024-11-02 14:45:06.960726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.005 [2024-11-02 14:45:06.960750] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.005 [2024-11-02 14:45:06.960773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.005 [2024-11-02 14:45:06.960798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.005 [2024-11-02 14:45:06.960821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.005 [2024-11-02 14:45:06.960845] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a9b60 is same with the state(6) to be set 00:29:15.005 [2024-11-02 14:45:06.960886] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b58c0 (9): Bad file descriptor 00:29:15.005 [2024-11-02 14:45:06.960954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.005 [2024-11-02 14:45:06.960982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.005 [2024-11-02 14:45:06.961030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.005 [2024-11-02 14:45:06.961053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.005 [2024-11-02 14:45:06.961076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.005 [2024-11-02 14:45:06.961098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.005 [2024-11-02 14:45:06.961121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.005 [2024-11-02 
14:45:06.961143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.005 [2024-11-02 14:45:06.961165] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18dfb90 is same with the state(6) to be set 00:29:15.005 [2024-11-02 14:45:06.961230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.005 [2024-11-02 14:45:06.961284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.005 [2024-11-02 14:45:06.961311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.005 [2024-11-02 14:45:06.961335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.005 [2024-11-02 14:45:06.961358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.005 [2024-11-02 14:45:06.961383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.005 [2024-11-02 14:45:06.961405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.005 [2024-11-02 14:45:06.961429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.005 [2024-11-02 14:45:06.961450] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c1610 is same with the state(6) to be set 00:29:15.005 [2024-11-02 14:45:06.961576] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:15.005 [2024-11-02 14:45:06.963340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.005 [2024-11-02 14:45:06.963375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.005 [2024-11-02 14:45:06.963408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.005 [2024-11-02 14:45:06.963434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.005 [2024-11-02 14:45:06.963459] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e5f30 is same with the state(6) to be set 00:29:15.005 [2024-11-02 14:45:06.963548] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x19e5f30 was disconnected and freed. reset controller. 
00:29:15.005 [2024-11-02 14:45:06.964248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.005 [2024-11-02 14:45:06.964307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ab530 with addr=10.0.0.2, port=4420 00:29:15.005 [2024-11-02 14:45:06.964334] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ab530 is same with the state(6) to be set 00:29:15.005 [2024-11-02 14:45:06.965432] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:15.005 [2024-11-02 14:45:06.965790] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.005 [2024-11-02 14:45:06.965828] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:29:15.005 [2024-11-02 14:45:06.965864] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a9b60 (9): Bad file descriptor 00:29:15.005 [2024-11-02 14:45:06.965911] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ab530 (9): Bad file descriptor 00:29:15.005 [2024-11-02 14:45:06.966064] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:15.005 [2024-11-02 14:45:06.966513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.005 [2024-11-02 14:45:06.966550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b5d40 with addr=10.0.0.2, port=4420 00:29:15.005 [2024-11-02 14:45:06.966576] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b5d40 is same with the state(6) to be set 00:29:15.005 [2024-11-02 14:45:06.966621] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:29:15.005 [2024-11-02 14:45:06.966647] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:29:15.005 [2024-11-02 14:45:06.966669] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:29:15.005 [2024-11-02 14:45:06.967352] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.005 [2024-11-02 14:45:06.967523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.005 [2024-11-02 14:45:06.967562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a9b60 with addr=10.0.0.2, port=4420 00:29:15.005 [2024-11-02 14:45:06.967587] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a9b60 is same with the state(6) to be set 00:29:15.005 [2024-11-02 14:45:06.967617] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b5d40 (9): Bad file descriptor 00:29:15.005 [2024-11-02 14:45:06.967958] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a9b60 (9): Bad file descriptor 00:29:15.005 [2024-11-02 14:45:06.967992] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.005 [2024-11-02 14:45:06.968015] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.005 [2024-11-02 14:45:06.968036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:29:15.005 [2024-11-02 14:45:06.968354] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.005 [2024-11-02 14:45:06.968385] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:29:15.005 [2024-11-02 14:45:06.968407] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:29:15.005 [2024-11-02 14:45:06.968430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:29:15.005 [2024-11-02 14:45:06.970478] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.005 [2024-11-02 14:45:06.970560] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b34b0 (9): Bad file descriptor 00:29:15.005 [2024-11-02 14:45:06.970618] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18dfb90 (9): Bad file descriptor 00:29:15.005 [2024-11-02 14:45:06.970663] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c1610 (9): Bad file descriptor 00:29:15.005 [2024-11-02 14:45:06.970908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.005 [2024-11-02 14:45:06.970946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.005 [2024-11-02 14:45:06.970980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.005 [2024-11-02 14:45:06.971007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.005 [2024-11-02 14:45:06.971036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.005 [2024-11-02 14:45:06.971061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.005 [2024-11-02 14:45:06.971089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.005 [2024-11-02 14:45:06.971127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.006 [2024-11-02 14:45:06.971154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.006 [2024-11-02 14:45:06.971176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.006 [2024-11-02 14:45:06.971204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.006 [2024-11-02 14:45:06.971227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.006 [2024-11-02 14:45:06.971280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.006 [2024-11-02 14:45:06.971308] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.006 [2024-11-02 14:45:06.971336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.006 [2024-11-02 14:45:06.971360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.006 [2024-11-02 14:45:06.971389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.006 [2024-11-02 14:45:06.971413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.006 [2024-11-02 14:45:06.971441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.006 [2024-11-02 14:45:06.971465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.006 [2024-11-02 14:45:06.971494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.006 [2024-11-02 14:45:06.971517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.006 [2024-11-02 14:45:06.971544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.006 [2024-11-02 14:45:06.971581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.006 [2024-11-02 14:45:06.971607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.006 [2024-11-02 14:45:06.971628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.006 [2024-11-02 14:45:06.971660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.006 [2024-11-02 14:45:06.971682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.006 [2024-11-02 14:45:06.971709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.006 [2024-11-02 14:45:06.971731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.006 [2024-11-02 14:45:06.971756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.006 [2024-11-02 14:45:06.971778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.006 [2024-11-02 14:45:06.971803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.006 [2024-11-02 14:45:06.971826] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.006 [2024-11-02 14:45:06.971851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.006 [2024-11-02 14:45:06.971873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.006 [2024-11-02 14:45:06.971897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.006 [2024-11-02 14:45:06.971921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.006 [2024-11-02 14:45:06.971946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.006 [2024-11-02 14:45:06.971970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.006 [2024-11-02 14:45:06.971994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.006 [2024-11-02 14:45:06.972017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.006 [2024-11-02 14:45:06.972041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.006 [2024-11-02 14:45:06.972063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.006 [2024-11-02 14:45:06.972088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.006 [2024-11-02 14:45:06.972111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.006 [2024-11-02 14:45:06.972136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.006 [2024-11-02 14:45:06.972159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.006 [2024-11-02 14:45:06.972185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.006 [2024-11-02 14:45:06.972206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.006 [2024-11-02 14:45:06.972232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.006 [2024-11-02 14:45:06.972282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.006 [2024-11-02 14:45:06.972324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.006 [2024-11-02 14:45:06.972347] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.006 [2024-11-02 14:45:06.972374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.006 [2024-11-02 14:45:06.972397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.006 [2024-11-02 14:45:06.972425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.006 [2024-11-02 14:45:06.972449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.006 [2024-11-02 14:45:06.972476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.006 [2024-11-02 14:45:06.972500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.006 [2024-11-02 14:45:06.972528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.006 [2024-11-02 14:45:06.972566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.006 [2024-11-02 14:45:06.972593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.006 [2024-11-02 14:45:06.972630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.006 [2024-11-02 14:45:06.972655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.006 [2024-11-02 14:45:06.972677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.006 [2024-11-02 14:45:06.972702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.006 [2024-11-02 14:45:06.972725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.006 [2024-11-02 14:45:06.972748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.006 [2024-11-02 14:45:06.972781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.006 [2024-11-02 14:45:06.972804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.006 [2024-11-02 14:45:06.972834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.006 [2024-11-02 14:45:06.972858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.006 [2024-11-02 14:45:06.972881] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.006 [2024-11-02 14:45:06.972905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.006 [2024-11-02 14:45:06.972928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.006 [2024-11-02 14:45:06.972958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.006 [2024-11-02 14:45:06.972983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.006 [2024-11-02 14:45:06.973006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.006 [2024-11-02 14:45:06.973030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.006 [2024-11-02 14:45:06.973054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.006 [2024-11-02 14:45:06.973079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.006 [2024-11-02 14:45:06.973104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.006 [2024-11-02 14:45:06.973128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.006 [2024-11-02 14:45:06.973157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.006 [2024-11-02 14:45:06.973180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.006 [2024-11-02 14:45:06.973204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.007 [2024-11-02 14:45:06.973226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.007 [2024-11-02 14:45:06.973378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.007 [2024-11-02 14:45:06.973406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.007 [2024-11-02 14:45:06.973434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.007 [2024-11-02 14:45:06.973458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.007 [2024-11-02 14:45:06.973484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.007 [2024-11-02 14:45:06.973508] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.007 [2024-11-02 14:45:06.973533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.007 [2024-11-02 14:45:06.973573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.007 [2024-11-02 14:45:06.973597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.007 [2024-11-02 14:45:06.973635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.007 [2024-11-02 14:45:06.973661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.007 [2024-11-02 14:45:06.973685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.007 [2024-11-02 14:45:06.973709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.007 [2024-11-02 14:45:06.973738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.007 [2024-11-02 14:45:06.973764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.007 [2024-11-02 14:45:06.973788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.007 [2024-11-02 14:45:06.973812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.007 [2024-11-02 14:45:06.973835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.007 [2024-11-02 14:45:06.973859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.007 [2024-11-02 14:45:06.973883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.007 [2024-11-02 14:45:06.973909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.007 [2024-11-02 14:45:06.973932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.007 [2024-11-02 14:45:06.973958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.007 [2024-11-02 14:45:06.973979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.007 [2024-11-02 14:45:06.974006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.007 [2024-11-02 14:45:06.974028] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.007 [2024-11-02 14:45:06.974054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.007 [2024-11-02 14:45:06.974076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.007 [2024-11-02 14:45:06.974101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.007 [2024-11-02 14:45:06.974123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.007 [2024-11-02 14:45:06.974150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.007 [2024-11-02 14:45:06.974171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.007 [2024-11-02 14:45:06.974197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.007 [2024-11-02 14:45:06.974219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.007 [2024-11-02 14:45:06.974267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.007 [2024-11-02 14:45:06.974293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.007 [2024-11-02 14:45:06.974335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.007 [2024-11-02 14:45:06.974359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.007 [2024-11-02 14:45:06.974396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.007 [2024-11-02 14:45:06.974422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.007 [2024-11-02 14:45:06.974450] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e5940 is same with the state(6) to be set 00:29:15.007 [2024-11-02 14:45:06.976353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.007 [2024-11-02 14:45:06.976388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.007 [2024-11-02 14:45:06.976424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.007 [2024-11-02 14:45:06.976452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.007 [2024-11-02 14:45:06.976481] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.007 [2024-11-02 14:45:06.976506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.007 [2024-11-02 14:45:06.976534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.007 [2024-11-02 14:45:06.976573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.007 [2024-11-02 14:45:06.976599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.007 [2024-11-02 14:45:06.976636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.007 [2024-11-02 14:45:06.976663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.007 [2024-11-02 14:45:06.976686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.007 [2024-11-02 14:45:06.976713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.007 [2024-11-02 14:45:06.976736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.007 [2024-11-02 14:45:06.976762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.007 [2024-11-02 14:45:06.976785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.007 [2024-11-02 14:45:06.976812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.007 [2024-11-02 14:45:06.976834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.007 [2024-11-02 14:45:06.976861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.007 [2024-11-02 14:45:06.976884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.007 [2024-11-02 14:45:06.976911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.007 [2024-11-02 14:45:06.976934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.007 [2024-11-02 14:45:06.976961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.007 [2024-11-02 14:45:06.976989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.007 [2024-11-02 14:45:06.977016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 
nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.007 [2024-11-02 14:45:06.977039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.007 [2024-11-02 14:45:06.977065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.007 [2024-11-02 14:45:06.977088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.007 [2024-11-02 14:45:06.977115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.007 [2024-11-02 14:45:06.977136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.007 [2024-11-02 14:45:06.977163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.007 [2024-11-02 14:45:06.977185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.007 [2024-11-02 14:45:06.977211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.008 [2024-11-02 14:45:06.977247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.008 [2024-11-02 14:45:06.977286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.008 [2024-11-02 14:45:06.977311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.008 [2024-11-02 14:45:06.977340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.008 [2024-11-02 14:45:06.977364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.008 [2024-11-02 14:45:06.977392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.008 [2024-11-02 14:45:06.977416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.008 [2024-11-02 14:45:06.977444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.008 [2024-11-02 14:45:06.977469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.008 [2024-11-02 14:45:06.977497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.008 [2024-11-02 14:45:06.977521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.008 [2024-11-02 14:45:06.977562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.008 [2024-11-02 14:45:06.977585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.008 [2024-11-02 14:45:06.977612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.008 [2024-11-02 14:45:06.977634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.008 [2024-11-02 14:45:06.977666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.008 [2024-11-02 14:45:06.977689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.008 [2024-11-02 14:45:06.977715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.008 [2024-11-02 14:45:06.977738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.008 [2024-11-02 14:45:06.977765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.008 [2024-11-02 14:45:06.977787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.008 [2024-11-02 14:45:06.977812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.008 [2024-11-02 14:45:06.977835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.008 [2024-11-02 14:45:06.977862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.008 [2024-11-02 14:45:06.977886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.008 [2024-11-02 14:45:06.977911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.008 [2024-11-02 14:45:06.977934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.008 [2024-11-02 14:45:06.977959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.008 [2024-11-02 14:45:06.977983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.008 [2024-11-02 14:45:06.978009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.008 [2024-11-02 14:45:06.978033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.008 [2024-11-02 14:45:06.978058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:15.008 [2024-11-02 14:45:06.978081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.008 [2024-11-02 14:45:06.978105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.008 [2024-11-02 14:45:06.978130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.008 [2024-11-02 14:45:06.978154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.008 [2024-11-02 14:45:06.978178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.008 [2024-11-02 14:45:06.978709] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11da500 is same with the state(6) to be set 00:29:15.008 [2024-11-02 14:45:06.978739] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11da500 is same with the state(6) to be set 00:29:15.008 [2024-11-02 14:45:06.978754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11da500 is same with the state(6) to be set 00:29:15.008 [2024-11-02 14:45:06.978773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11da500 is same with the state(6) to be set 00:29:15.008 [2024-11-02 14:45:06.978786] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11da500 is same with the state(6) to be set 00:29:15.008 [2024-11-02 14:45:06.979867] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11da9d0 is same with the state(6) to be set 00:29:15.008 [2024-11-02 14:45:06.979897] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11da9d0 is same with the state(6) to be set 00:29:15.008 [2024-11-02 14:45:06.979911] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11da9d0 is same with the state(6) to be set 00:29:15.008 [2024-11-02 14:45:06.979923] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11da9d0 is same with the state(6) to be set 00:29:15.008 [2024-11-02 14:45:06.979935] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11da9d0 is same with the state(6) to be set 00:29:15.008 [2024-11-02 14:45:06.979947] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11da9d0 is same with the state(6) to be set 00:29:15.008 [2024-11-02 14:45:06.979959] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11da9d0 is same with the state(6) to be set 00:29:15.008 [2024-11-02 14:45:06.979972] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11da9d0 is same with the state(6) to be set 00:29:15.008 [2024-11-02 14:45:06.979984] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11da9d0 is same with the state(6) to be set 00:29:15.008 [2024-11-02 14:45:06.979996] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11da9d0 is same with the state(6) to be set 00:29:15.008 [2024-11-02 14:45:06.980007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11da9d0 is same with the state(6) to be set 00:29:15.008 [2024-11-02 
14:45:06.980019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11da9d0 is same with the state(6) to be set 00:29:15.008 [2024-11-02 14:45:06.980031] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11da9d0 is same with the state(6) to be set 00:29:15.008 [2024-11-02 14:45:06.980042] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11da9d0 is same with the state(6) to be set 00:29:15.008 [2024-11-02 14:45:06.980054] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11da9d0 is same with the state(6) to be set 00:29:15.008 [2024-11-02 14:45:06.980065] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11da9d0 is same with the state(6) to be set 00:29:15.008 [2024-11-02 14:45:06.980077] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11da9d0 is same with the state(6) to be set 00:29:15.008 [2024-11-02 14:45:06.980089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11da9d0 is same with the state(6) to be set 00:29:15.008 [2024-11-02 14:45:06.980102] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11da9d0 is same with the state(6) to be set 00:29:15.008 [2024-11-02 14:45:06.980114] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11da9d0 is same with the state(6) to be set 00:29:15.008 [2024-11-02 14:45:06.980126] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11da9d0 is same with the state(6) to be set 00:29:15.008 [2024-11-02 14:45:06.980138] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11da9d0 is same with the state(6) to be set 00:29:15.008 [2024-11-02 14:45:06.980150] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11da9d0 is same with the state(6) to be set 00:29:15.008 [2024-11-02 14:45:06.980162] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11da9d0 is same with the state(6) to be set 00:29:15.008 [2024-11-02 14:45:06.980173] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11da9d0 is same with the state(6) to be set 00:29:15.008 [2024-11-02 14:45:06.980191] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11da9d0 is same with the state(6) to be set 00:29:15.008 [2024-11-02 14:45:06.980203] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11da9d0 is same with the state(6) to be set 00:29:15.008 [2024-11-02 14:45:06.980231] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11da9d0 is same with the state(6) to be set 00:29:15.008 [2024-11-02 14:45:06.980242] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11da9d0 is same with the state(6) to be set 00:29:15.008 [2024-11-02 14:45:06.980253] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11da9d0 is same with the state(6) to be set 00:29:15.008 [2024-11-02 14:45:06.980273] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11da9d0 is same with the state(6) to be set 00:29:15.008 [2024-11-02 14:45:06.980301] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11da9d0 is same with the state(6) to be set 00:29:15.008 [2024-11-02 14:45:06.980313] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11da9d0 is same 
with the state(6) to be set 00:29:15.008 [2024-11-02 14:45:06.980325] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11da9d0 is same with the state(6) to be set 00:29:15.008 [2024-11-02 14:45:06.980337] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11da9d0 is same with the state(6) to be set 00:29:15.008 [2024-11-02 14:45:06.980350] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11da9d0 is same with the state(6) to be set 00:29:15.008 [2024-11-02 14:45:06.980362] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11da9d0 is same with the state(6) to be set 00:29:15.008 [2024-11-02 14:45:06.980374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11da9d0 is same with the state(6) to be set 00:29:15.008 [2024-11-02 14:45:06.980386] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11da9d0 is same with the state(6) to be set 00:29:15.008 [2024-11-02 14:45:06.980397] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11da9d0 is same with the state(6) to be set 00:29:15.008 [2024-11-02 14:45:06.980409] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11da9d0 is same with the state(6) to be set 00:29:15.009 [2024-11-02 14:45:06.980421] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11da9d0 is same with the state(6) to be set 00:29:15.009 [2024-11-02 14:45:06.980433] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11da9d0 is same with the state(6) to be set 00:29:15.009 [2024-11-02 14:45:06.980444] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11da9d0 is same with the state(6) to be set 00:29:15.009 [2024-11-02 14:45:06.980456] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11da9d0 is same with the state(6) to be set 00:29:15.009 [2024-11-02 14:45:06.980468] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11da9d0 is same with the state(6) to be set 00:29:15.009 [2024-11-02 14:45:06.980480] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11da9d0 is same with the state(6) to be set 00:29:15.009 [2024-11-02 14:45:06.980491] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11da9d0 is same with the state(6) to be set 00:29:15.009 [2024-11-02 14:45:06.980503] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11da9d0 is same with the state(6) to be set 00:29:15.009 [2024-11-02 14:45:06.980514] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11da9d0 is same with the state(6) to be set 00:29:15.009 [2024-11-02 14:45:06.980526] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11da9d0 is same with the state(6) to be set 00:29:15.009 [2024-11-02 14:45:06.980538] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11da9d0 is same with the state(6) to be set 00:29:15.009 [2024-11-02 14:45:06.980553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11da9d0 is same with the state(6) to be set 00:29:15.009 [2024-11-02 14:45:06.980566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11da9d0 is same with the state(6) to be set 00:29:15.009 [2024-11-02 14:45:06.980592] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11da9d0 is same with the state(6) to be set 00:29:15.009 [2024-11-02 14:45:06.980603] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11da9d0 is same with the state(6) to be set 00:29:15.009 [2024-11-02 14:45:06.980614] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11da9d0 is same with the state(6) to be set 00:29:15.009 [2024-11-02 14:45:06.980625] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11da9d0 is same with the state(6) to be set 00:29:15.009 [2024-11-02 14:45:06.980637] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11da9d0 is same with the state(6) to be set 00:29:15.009 [2024-11-02 14:45:06.980648] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11da9d0 is same with the state(6) to be set 00:29:15.009 [2024-11-02 14:45:06.980659] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11da9d0 is same with the state(6) to be set 00:29:15.009 [2024-11-02 14:45:06.980670] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11da9d0 is same with the state(6) to be set 00:29:15.009 [2024-11-02 14:45:06.980682] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11da9d0 is same with the state(6) to be set 00:29:15.009 [2024-11-02 14:45:06.987234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.009 [2024-11-02 14:45:06.987287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.009 [2024-11-02 14:45:06.987322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.009 [2024-11-02 14:45:06.987348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.009 [2024-11-02 14:45:06.987377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.009 [2024-11-02 14:45:06.987402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.009 [2024-11-02 14:45:06.987430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.009 [2024-11-02 14:45:06.987454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.009 [2024-11-02 14:45:06.987482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.009 [2024-11-02 14:45:06.987507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.009 [2024-11-02 14:45:06.987534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.009 [2024-11-02 14:45:06.987567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:15.009 [2024-11-02 14:45:06.987594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.009 [2024-11-02 14:45:06.987620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.009 [2024-11-02 14:45:06.987646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.009 [2024-11-02 14:45:06.987678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.009 [2024-11-02 14:45:06.987707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.009 [2024-11-02 14:45:06.987733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.009 [2024-11-02 14:45:06.987760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.009 [2024-11-02 14:45:06.987787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.009 [2024-11-02 14:45:06.987814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.009 [2024-11-02 14:45:06.987840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.009 [2024-11-02 14:45:06.987866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.009 [2024-11-02 14:45:06.987891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.009 [2024-11-02 14:45:06.987918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.009 [2024-11-02 14:45:06.987943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.009 [2024-11-02 14:45:06.987969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.009 [2024-11-02 14:45:06.987994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.009 [2024-11-02 14:45:06.988020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.009 [2024-11-02 14:45:06.988045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.009 [2024-11-02 14:45:06.988073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.009 [2024-11-02 14:45:06.988097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.009 [2024-11-02 
14:45:06.988125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.009 [2024-11-02 14:45:06.988148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.009 [2024-11-02 14:45:06.988176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.009 [2024-11-02 14:45:06.988199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.009 [2024-11-02 14:45:06.988227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.009 [2024-11-02 14:45:06.988252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.009 [2024-11-02 14:45:06.988306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.009 [2024-11-02 14:45:06.988331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.009 [2024-11-02 14:45:06.988366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.009 [2024-11-02 14:45:06.988391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.009 [2024-11-02 14:45:06.988419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.009 [2024-11-02 14:45:06.988442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.009 [2024-11-02 14:45:06.988471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.009 [2024-11-02 14:45:06.988494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.009 [2024-11-02 14:45:06.988523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.009 [2024-11-02 14:45:06.988547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.010 [2024-11-02 14:45:06.988582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.010 [2024-11-02 14:45:06.988605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.010 [2024-11-02 14:45:06.988632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.010 [2024-11-02 14:45:06.988656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.010 [2024-11-02 14:45:06.988681] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.010 [2024-11-02 14:45:06.988706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.010 [2024-11-02 14:45:06.988733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.010 [2024-11-02 14:45:06.988759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.010 [2024-11-02 14:45:06.988786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.010 [2024-11-02 14:45:06.988810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.010 [2024-11-02 14:45:06.988835] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19db3b0 is same with the state(6) to be set 00:29:15.010 [2024-11-02 14:45:06.990911] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:29:15.010 [2024-11-02 14:45:06.990961] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:29:15.010 [2024-11-02 14:45:06.991173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.010 [2024-11-02 14:45:06.991205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.010 [2024-11-02 14:45:06.991231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.010 [2024-11-02 14:45:06.991266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.010 [2024-11-02 14:45:06.991293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.010 [2024-11-02 14:45:06.991325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.010 [2024-11-02 14:45:06.991350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.010 [2024-11-02 14:45:06.991375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.010 [2024-11-02 14:45:06.991397] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190f4e0 is same with the state(6) to be set 00:29:15.010 [2024-11-02 14:45:06.991481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.010 [2024-11-02 14:45:06.991510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.010 [2024-11-02 14:45:06.991538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:29:15.010 [2024-11-02 14:45:06.991562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.010 [2024-11-02 14:45:06.991591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.010 [2024-11-02 14:45:06.991615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.010 [2024-11-02 14:45:06.991639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.010 [2024-11-02 14:45:06.991664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.010 [2024-11-02 14:45:06.991686] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190f8a0 is same with the state(6) to be set 00:29:15.010 [2024-11-02 14:45:06.991871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.010 [2024-11-02 14:45:06.991902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.010 [2024-11-02 14:45:06.991946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.010 [2024-11-02 14:45:06.991974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.010 [2024-11-02 14:45:06.992011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.010 [2024-11-02 14:45:06.992038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.010 [2024-11-02 14:45:06.992074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.010 [2024-11-02 14:45:06.992100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.010 [2024-11-02 14:45:06.992136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.010 [2024-11-02 14:45:06.992163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.010 [2024-11-02 14:45:06.992198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.010 [2024-11-02 14:45:06.992225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.010 [2024-11-02 14:45:06.992274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.010 [2024-11-02 14:45:06.992313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.010 [2024-11-02 14:45:06.992350] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.010 [2024-11-02 14:45:06.992377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.010 [2024-11-02 14:45:06.992411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.010 [2024-11-02 14:45:06.992439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.010 [2024-11-02 14:45:06.992474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.010 [2024-11-02 14:45:06.992502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.010 [2024-11-02 14:45:06.992536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.010 [2024-11-02 14:45:06.992570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.010 [2024-11-02 14:45:06.992606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.010 [2024-11-02 14:45:06.992632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.010 [2024-11-02 14:45:06.992668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.010 [2024-11-02 14:45:06.992696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.010 [2024-11-02 14:45:06.992733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.010 [2024-11-02 14:45:06.992760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.010 [2024-11-02 14:45:06.992797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.010 [2024-11-02 14:45:06.992825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.010 [2024-11-02 14:45:06.992860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.010 [2024-11-02 14:45:06.992888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.010 [2024-11-02 14:45:06.992923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.010 [2024-11-02 14:45:06.992950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.010 [2024-11-02 14:45:06.992985] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.010 [2024-11-02 14:45:06.993011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.010 [2024-11-02 14:45:06.993047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.010 [2024-11-02 14:45:06.993079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.010 [2024-11-02 14:45:06.993117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.010 [2024-11-02 14:45:06.993143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.010 [2024-11-02 14:45:06.993179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.010 [2024-11-02 14:45:06.993206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.010 [2024-11-02 14:45:06.993242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.010 [2024-11-02 14:45:06.993280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.010 [2024-11-02 14:45:06.993324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.010 [2024-11-02 14:45:06.993353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.010 [2024-11-02 14:45:06.993387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.010 [2024-11-02 14:45:06.993414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.010 [2024-11-02 14:45:06.993450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.010 [2024-11-02 14:45:06.993476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.011 [2024-11-02 14:45:06.993513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.011 [2024-11-02 14:45:06.993540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.011 [2024-11-02 14:45:06.993577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.011 [2024-11-02 14:45:06.993607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.011 [2024-11-02 14:45:06.993648] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.011 [2024-11-02 14:45:06.993676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.011 [2024-11-02 14:45:06.993711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.011 [2024-11-02 14:45:06.993740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.011 [2024-11-02 14:45:06.993775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.011 [2024-11-02 14:45:06.993802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.011 [2024-11-02 14:45:06.993838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.011 [2024-11-02 14:45:06.993865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.011 [2024-11-02 14:45:06.993908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.011 [2024-11-02 14:45:06.993935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.011 [2024-11-02 14:45:06.993973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.011 [2024-11-02 14:45:06.994000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.011 [2024-11-02 14:45:06.994036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.011 [2024-11-02 14:45:06.994063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.011 [2024-11-02 14:45:06.994097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.011 [2024-11-02 14:45:06.994124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.011 [2024-11-02 14:45:06.994160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.011 [2024-11-02 14:45:06.994188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.011 [2024-11-02 14:45:06.994223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.011 [2024-11-02 14:45:06.994249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.011 [2024-11-02 14:45:06.994295] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.011 [2024-11-02 14:45:06.994324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.011 [2024-11-02 14:45:06.994360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.011 [2024-11-02 14:45:06.994387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.011 [2024-11-02 14:45:06.994423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.011 [2024-11-02 14:45:06.994450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.011 [2024-11-02 14:45:06.994487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.011 [2024-11-02 14:45:06.994514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.011 [2024-11-02 14:45:06.994549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.011 [2024-11-02 14:45:06.994583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.011 [2024-11-02 14:45:06.994618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.011 [2024-11-02 14:45:06.994646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.011 [2024-11-02 14:45:06.994685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.011 [2024-11-02 14:45:06.994718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.011 [2024-11-02 14:45:06.994754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.011 [2024-11-02 14:45:06.994781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.011 [2024-11-02 14:45:06.994817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.011 [2024-11-02 14:45:06.994844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.011 [2024-11-02 14:45:06.994882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.011 [2024-11-02 14:45:06.994909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.011 [2024-11-02 14:45:06.994943] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.011 [2024-11-02 14:45:06.994971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.011 [2024-11-02 14:45:06.995006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.011 [2024-11-02 14:45:06.995034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.011 [2024-11-02 14:45:06.995069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.011 [2024-11-02 14:45:06.995097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.011 [2024-11-02 14:45:06.995131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.011 [2024-11-02 14:45:06.995159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.011 [2024-11-02 14:45:06.995193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.011 [2024-11-02 14:45:06.995220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.011 [2024-11-02 14:45:06.995254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.011 [2024-11-02 14:45:06.995290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.011 [2024-11-02 14:45:06.995333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.011 [2024-11-02 14:45:06.995361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.011 [2024-11-02 14:45:06.995396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.011 [2024-11-02 14:45:06.995422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.011 [2024-11-02 14:45:06.995457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.011 [2024-11-02 14:45:06.995484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.011 [2024-11-02 14:45:06.995525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.011 [2024-11-02 14:45:06.995552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.011 [2024-11-02 14:45:06.995594] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.011 [2024-11-02 14:45:06.995621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.011 [2024-11-02 14:45:06.995657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.011 [2024-11-02 14:45:06.995684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.011 [2024-11-02 14:45:06.996671] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ba440 is same with the state(6) to be set 00:29:15.011 [2024-11-02 14:45:06.996789] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18ba440 was disconnected and freed. reset controller. 00:29:15.011 [2024-11-02 14:45:06.996815] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:15.011 [2024-11-02 14:45:06.997468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.011 [2024-11-02 14:45:06.997506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b58c0 with addr=10.0.0.2, port=4420 00:29:15.011 [2024-11-02 14:45:06.997534] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b58c0 is same with the state(6) to be set 00:29:15.011 [2024-11-02 14:45:06.997688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.011 [2024-11-02 14:45:06.997722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x192d3f0 with addr=10.0.0.2, port=4420 00:29:15.011 [2024-11-02 14:45:06.997748] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x192d3f0 is same with the state(6) to be set 00:29:15.011 [2024-11-02 14:45:06.998196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.011 [2024-11-02 14:45:06.998227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.011 [2024-11-02 14:45:06.998278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.012 [2024-11-02 14:45:06.998311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.012 [2024-11-02 14:45:06.998339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.012 [2024-11-02 14:45:06.998366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.012 [2024-11-02 14:45:06.998393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.012 [2024-11-02 14:45:06.998420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.012 [2024-11-02 14:45:06.998447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.012 [2024-11-02 14:45:06.998473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.012 [2024-11-02 14:45:06.998500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.012 [2024-11-02 14:45:06.998526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.012 [2024-11-02 14:45:06.998563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.012 [2024-11-02 14:45:06.998594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.012 [2024-11-02 14:45:06.998620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.012 [2024-11-02 14:45:06.998646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.012 [2024-11-02 14:45:06.998672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.012 [2024-11-02 14:45:06.998699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.012 [2024-11-02 14:45:06.998726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.012 [2024-11-02 14:45:06.998752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.012 [2024-11-02 14:45:06.998778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.012 [2024-11-02 14:45:06.998804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.012 [2024-11-02 14:45:06.998831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.012 [2024-11-02 14:45:06.998858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.012 [2024-11-02 14:45:06.998885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.012 [2024-11-02 14:45:06.998911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.012 [2024-11-02 14:45:06.998937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.012 [2024-11-02 14:45:06.998963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.012 [2024-11-02 14:45:06.998990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.012 [2024-11-02 14:45:06.999017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.012 [2024-11-02 14:45:06.999044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.012 [2024-11-02 14:45:06.999070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.012 [2024-11-02 14:45:06.999096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.012 [2024-11-02 14:45:06.999123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.012 [2024-11-02 14:45:06.999149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.012 [2024-11-02 14:45:06.999176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.012 [2024-11-02 14:45:06.999202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.012 [2024-11-02 14:45:06.999233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.012 [2024-11-02 14:45:06.999270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.012 [2024-11-02 14:45:06.999309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.012 [2024-11-02 14:45:06.999336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.012 [2024-11-02 14:45:06.999362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.012 [2024-11-02 14:45:06.999388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.012 [2024-11-02 14:45:06.999414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.012 [2024-11-02 14:45:06.999441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.012 [2024-11-02 14:45:06.999467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.012 [2024-11-02 14:45:06.999494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.012 [2024-11-02 14:45:06.999521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.012 [2024-11-02 14:45:06.999547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:15.012 [2024-11-02 14:45:06.999574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.012 [2024-11-02 14:45:06.999600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.012 [2024-11-02 14:45:06.999625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.012 [2024-11-02 14:45:06.999652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.012 [2024-11-02 14:45:06.999677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.012 [2024-11-02 14:45:06.999704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.012 [2024-11-02 14:45:06.999730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.012 [2024-11-02 14:45:06.999757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.012 [2024-11-02 14:45:06.999781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.012 [2024-11-02 14:45:06.999810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.012 [2024-11-02 14:45:06.999835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.012 [2024-11-02 14:45:06.999864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.012 [2024-11-02 14:45:06.999888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.012 [2024-11-02 14:45:06.999922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.012 [2024-11-02 14:45:06.999947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.012 [2024-11-02 14:45:06.999975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.012 [2024-11-02 14:45:06.999999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.012 [2024-11-02 14:45:07.000026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.012 [2024-11-02 14:45:07.000051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.012 [2024-11-02 14:45:07.000079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:15.012 [2024-11-02 14:45:07.000103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.012 [2024-11-02 14:45:07.000132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.012 [2024-11-02 14:45:07.000156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.012 [2024-11-02 14:45:07.000185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.012 [2024-11-02 14:45:07.000209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.012 [2024-11-02 14:45:07.000237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.012 [2024-11-02 14:45:07.000270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.012 [2024-11-02 14:45:07.000311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.012 [2024-11-02 14:45:07.000337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.012 [2024-11-02 14:45:07.000364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.012 [2024-11-02 14:45:07.000388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.012 [2024-11-02 14:45:07.000415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.013 [2024-11-02 14:45:07.000440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.013 [2024-11-02 14:45:07.000466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.013 [2024-11-02 14:45:07.000491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.013 [2024-11-02 14:45:07.000520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.013 [2024-11-02 14:45:07.000545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.013 [2024-11-02 14:45:07.000583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.013 [2024-11-02 14:45:07.000613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.013 [2024-11-02 14:45:07.000642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.013 [2024-11-02 
14:45:07.000667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.013 [2024-11-02 14:45:07.000696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.013 [2024-11-02 14:45:07.000721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.013 [2024-11-02 14:45:07.000748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.013 [2024-11-02 14:45:07.000772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.013 [2024-11-02 14:45:07.000801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.013 [2024-11-02 14:45:07.000824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.013 [2024-11-02 14:45:07.000853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.013 [2024-11-02 14:45:07.000876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.013 [2024-11-02 14:45:07.000905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.013 [2024-11-02 14:45:07.000928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.013 [2024-11-02 14:45:07.000956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.013 [2024-11-02 14:45:07.000981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.013 [2024-11-02 14:45:07.001010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.013 [2024-11-02 14:45:07.001034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.013 [2024-11-02 14:45:07.001061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.013 [2024-11-02 14:45:07.001086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.013 [2024-11-02 14:45:07.001114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.013 [2024-11-02 14:45:07.001139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.013 [2024-11-02 14:45:07.001166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.013 [2024-11-02 14:45:07.001192] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.013 [2024-11-02 14:45:07.001219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.013 [2024-11-02 14:45:07.001245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.013 [2024-11-02 14:45:07.001285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.013 [2024-11-02 14:45:07.001311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.013 [2024-11-02 14:45:07.001337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.013 [2024-11-02 14:45:07.001362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.013 [2024-11-02 14:45:07.001389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.013 [2024-11-02 14:45:07.001416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.013 [2024-11-02 14:45:07.001442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.013 [2024-11-02 14:45:07.001468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.013 [2024-11-02 14:45:07.001494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.013 [2024-11-02 14:45:07.001529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.013 [2024-11-02 14:45:07.001563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.013 [2024-11-02 14:45:07.001588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.013 [2024-11-02 14:45:07.001623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.013 [2024-11-02 14:45:07.001649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.013 [2024-11-02 14:45:07.001676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.013 [2024-11-02 14:45:07.001703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.013 [2024-11-02 14:45:07.001727] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b79e0 is same with the state(6) to be set 00:29:15.013 [2024-11-02 14:45:07.003241] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.013 [2024-11-02 14:45:07.003281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.013 [2024-11-02 14:45:07.003317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.013 [2024-11-02 14:45:07.003343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.013 [2024-11-02 14:45:07.003372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.013 [2024-11-02 14:45:07.003396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.013 [2024-11-02 14:45:07.003424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.013 [2024-11-02 14:45:07.003449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.013 [2024-11-02 14:45:07.003484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.013 [2024-11-02 14:45:07.003508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.013 [2024-11-02 14:45:07.003536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.013 [2024-11-02 14:45:07.003561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.013 [2024-11-02 14:45:07.003594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.013 [2024-11-02 14:45:07.003619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.013 [2024-11-02 14:45:07.003645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.013 [2024-11-02 14:45:07.003670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.013 [2024-11-02 14:45:07.003697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.013 [2024-11-02 14:45:07.003721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.013 [2024-11-02 14:45:07.003748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.013 [2024-11-02 14:45:07.003774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.013 [2024-11-02 14:45:07.003800] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.013 [2024-11-02 14:45:07.003825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.014 [2024-11-02 14:45:07.003852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.014 [2024-11-02 14:45:07.003878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.014 [2024-11-02 14:45:07.003905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.014 [2024-11-02 14:45:07.003930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.014 [2024-11-02 14:45:07.003955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.014 [2024-11-02 14:45:07.003980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.014 [2024-11-02 14:45:07.004006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.014 [2024-11-02 14:45:07.004031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.014 [2024-11-02 14:45:07.004057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.014 [2024-11-02 14:45:07.004082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.014 [2024-11-02 14:45:07.004108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.014 [2024-11-02 14:45:07.004139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.014 [2024-11-02 14:45:07.004166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.014 [2024-11-02 14:45:07.004191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.014 [2024-11-02 14:45:07.004218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.014 [2024-11-02 14:45:07.004242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.014 [2024-11-02 14:45:07.004279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.014 [2024-11-02 14:45:07.004313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.014 [2024-11-02 14:45:07.004342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.014 [2024-11-02 14:45:07.004365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.014 [2024-11-02 14:45:07.004393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.014 [2024-11-02 14:45:07.004416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.014 [2024-11-02 14:45:07.004445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.014 [2024-11-02 14:45:07.004469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.014 [2024-11-02 14:45:07.004497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.014 [2024-11-02 14:45:07.004520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.014 [2024-11-02 14:45:07.004548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.014 [2024-11-02 14:45:07.004581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.014 [2024-11-02 14:45:07.004609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.014 [2024-11-02 14:45:07.004632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.014 [2024-11-02 14:45:07.004660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.014 [2024-11-02 14:45:07.004683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.014 [2024-11-02 14:45:07.004712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.014 [2024-11-02 14:45:07.004737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.014 [2024-11-02 14:45:07.004765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.014 [2024-11-02 14:45:07.004790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.014 [2024-11-02 14:45:07.004823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.014 [2024-11-02 14:45:07.004848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.014 [2024-11-02 14:45:07.004875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.014 [2024-11-02 14:45:07.004900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.014 [2024-11-02 14:45:07.004928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.014 [2024-11-02 14:45:07.004954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.014 [2024-11-02 14:45:07.004980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.014 [2024-11-02 14:45:07.005006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.014 [2024-11-02 14:45:07.005033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.014 [2024-11-02 14:45:07.005059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.014 [2024-11-02 14:45:07.005086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.014 [2024-11-02 14:45:07.005111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.014 [2024-11-02 14:45:07.005137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.014 [2024-11-02 14:45:07.005163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.014 [2024-11-02 14:45:07.005190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.014 [2024-11-02 14:45:07.005215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.014 [2024-11-02 14:45:07.005241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.014 [2024-11-02 14:45:07.005277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.014 [2024-11-02 14:45:07.005306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.014 [2024-11-02 14:45:07.005330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.014 [2024-11-02 14:45:07.005358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.014 [2024-11-02 14:45:07.005384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.014 [2024-11-02 14:45:07.005411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:15.014 [2024-11-02 14:45:07.005435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.014 [2024-11-02 14:45:07.005463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.014 [2024-11-02 14:45:07.005498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.014 [2024-11-02 14:45:07.005525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.014 [2024-11-02 14:45:07.005549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.014 [2024-11-02 14:45:07.005576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.014 [2024-11-02 14:45:07.005600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.014 [2024-11-02 14:45:07.005629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.014 [2024-11-02 14:45:07.005652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.014 [2024-11-02 14:45:07.005681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.014 [2024-11-02 14:45:07.005704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.014 [2024-11-02 14:45:07.005733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.014 [2024-11-02 14:45:07.005757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.014 [2024-11-02 14:45:07.005785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.014 [2024-11-02 14:45:07.005809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.014 [2024-11-02 14:45:07.005838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.014 [2024-11-02 14:45:07.005862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.015 [2024-11-02 14:45:07.005890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.015 [2024-11-02 14:45:07.005914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.015 [2024-11-02 14:45:07.005942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:15.015 [2024-11-02 14:45:07.005966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.015 [2024-11-02 14:45:07.005994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.015 [2024-11-02 14:45:07.006018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.015 [2024-11-02 14:45:07.006044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.015 [2024-11-02 14:45:07.006068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.015 [2024-11-02 14:45:07.006095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.015 [2024-11-02 14:45:07.006119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.015 [2024-11-02 14:45:07.006151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.015 [2024-11-02 14:45:07.006175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.015 [2024-11-02 14:45:07.006202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.015 [2024-11-02 14:45:07.006227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.015 [2024-11-02 14:45:07.006263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.015 [2024-11-02 14:45:07.006291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.015 [2024-11-02 14:45:07.006317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.015 [2024-11-02 14:45:07.006343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.015 [2024-11-02 14:45:07.006369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.015 [2024-11-02 14:45:07.006396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.015 [2024-11-02 14:45:07.006422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.015 [2024-11-02 14:45:07.006448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.015 [2024-11-02 14:45:07.006474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.015 [2024-11-02 
14:45:07.006500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.015 [2024-11-02 14:45:07.006525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.015 [2024-11-02 14:45:07.006551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.015 [2024-11-02 14:45:07.006578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.015 [2024-11-02 14:45:07.006603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.015 [2024-11-02 14:45:07.006630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.015 [2024-11-02 14:45:07.006655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.015 [2024-11-02 14:45:07.006681] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b8f10 is same with the state(6) to be set 00:29:15.015 [2024-11-02 14:45:07.008728] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:29:15.015 [2024-11-02 14:45:07.008774] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.015 [2024-11-02 14:45:07.008808] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:29:15.015 [2024-11-02 14:45:07.008840] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:29:15.015 [2024-11-02 14:45:07.008871] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:29:15.015 [2024-11-02 14:45:07.008994] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b58c0 (9): Bad file descriptor 00:29:15.015 [2024-11-02 14:45:07.009034] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x192d3f0 (9): Bad file descriptor 00:29:15.015 [2024-11-02 14:45:07.009082] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x190f4e0 (9): Bad file descriptor 00:29:15.015 [2024-11-02 14:45:07.009136] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x190f8a0 (9): Bad file descriptor 00:29:15.015 [2024-11-02 14:45:07.009188] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:15.015 [2024-11-02 14:45:07.009229] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:15.015 [2024-11-02 14:45:07.009273] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:29:15.015 [2024-11-02 14:45:07.009796] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:29:15.015 [2024-11-02 14:45:07.010046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.015 [2024-11-02 14:45:07.010084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ab530 with addr=10.0.0.2, port=4420 00:29:15.015 [2024-11-02 14:45:07.010113] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ab530 is same with the state(6) to be set 00:29:15.015 [2024-11-02 14:45:07.010271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.015 [2024-11-02 14:45:07.010306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b5d40 with addr=10.0.0.2, port=4420 00:29:15.015 [2024-11-02 14:45:07.010331] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b5d40 is same with the state(6) to be set 00:29:15.015 [2024-11-02 14:45:07.010499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.015 [2024-11-02 14:45:07.010535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a9b60 with addr=10.0.0.2, port=4420 00:29:15.015 [2024-11-02 14:45:07.010560] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a9b60 is same with the state(6) to be set 00:29:15.015 [2024-11-02 14:45:07.010700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.015 [2024-11-02 14:45:07.010735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18dfb90 with addr=10.0.0.2, port=4420 00:29:15.015 [2024-11-02 14:45:07.010761] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18dfb90 is same with the state(6) to be set 00:29:15.015 [2024-11-02 14:45:07.010897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.015 [2024-11-02 14:45:07.010932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b34b0 with addr=10.0.0.2, port=4420 00:29:15.015 [2024-11-02 14:45:07.010958] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b34b0 is same with the state(6) to be set 00:29:15.015 [2024-11-02 14:45:07.010986] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:29:15.015 [2024-11-02 14:45:07.011009] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:29:15.015 [2024-11-02 14:45:07.011035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:29:15.015 [2024-11-02 14:45:07.011067] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:29:15.015 [2024-11-02 14:45:07.011092] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:29:15.015 [2024-11-02 14:45:07.011114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:29:15.015 [2024-11-02 14:45:07.011903] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.015 [2024-11-02 14:45:07.011940] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.015 [2024-11-02 14:45:07.012082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.015 [2024-11-02 14:45:07.012118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13c1610 with addr=10.0.0.2, port=4420 00:29:15.015 [2024-11-02 14:45:07.012143] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c1610 is same with the state(6) to be set 00:29:15.015 [2024-11-02 14:45:07.012175] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ab530 (9): Bad file descriptor 00:29:15.015 [2024-11-02 14:45:07.012208] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b5d40 (9): Bad file descriptor 00:29:15.015 [2024-11-02 14:45:07.012240] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a9b60 (9): Bad file descriptor 00:29:15.015 [2024-11-02 14:45:07.012283] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18dfb90 (9): Bad file descriptor 00:29:15.015 [2024-11-02 14:45:07.012314] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b34b0 (9): Bad file descriptor 00:29:15.015 [2024-11-02 14:45:07.012452] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:15.015 [2024-11-02 14:45:07.012504] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c1610 (9): Bad file descriptor 00:29:15.015 [2024-11-02 14:45:07.012536] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:29:15.015 [2024-11-02 14:45:07.012559] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:29:15.015 [2024-11-02 14:45:07.012581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:29:15.015 [2024-11-02 14:45:07.012609] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.015 [2024-11-02 14:45:07.012635] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.015 [2024-11-02 14:45:07.012656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.015 [2024-11-02 14:45:07.012686] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:29:15.015 [2024-11-02 14:45:07.012709] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:29:15.015 [2024-11-02 14:45:07.012731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:29:15.015 [2024-11-02 14:45:07.012760] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:29:15.016 [2024-11-02 14:45:07.012785] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:29:15.016 [2024-11-02 14:45:07.012807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 
00:29:15.016 [2024-11-02 14:45:07.012836] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:29:15.016 [2024-11-02 14:45:07.012860] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:29:15.016 [2024-11-02 14:45:07.012882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:29:15.016 [2024-11-02 14:45:07.012975] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.016 [2024-11-02 14:45:07.013004] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.016 [2024-11-02 14:45:07.013025] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.016 [2024-11-02 14:45:07.013044] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.016 [2024-11-02 14:45:07.013072] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.016 [2024-11-02 14:45:07.013093] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:29:15.016 [2024-11-02 14:45:07.013114] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:29:15.016 [2024-11-02 14:45:07.013136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:29:15.016 [2024-11-02 14:45:07.013207] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.016 [2024-11-02 14:45:07.018939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.016 [2024-11-02 14:45:07.018987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.016 [2024-11-02 14:45:07.019036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.016 [2024-11-02 14:45:07.019064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.016 [2024-11-02 14:45:07.019093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.016 [2024-11-02 14:45:07.019119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.016 [2024-11-02 14:45:07.019147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.016 [2024-11-02 14:45:07.019171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.016 [2024-11-02 14:45:07.019200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.016 [2024-11-02 14:45:07.019225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.016 [2024-11-02 14:45:07.019252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.016 [2024-11-02 14:45:07.019286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.016 [2024-11-02 14:45:07.019315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.016 [2024-11-02 14:45:07.019340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.016 [2024-11-02 14:45:07.019369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.016 [2024-11-02 14:45:07.019394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.016 [2024-11-02 14:45:07.019420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.016 [2024-11-02 14:45:07.019446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.016 [2024-11-02 14:45:07.019472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.016 [2024-11-02 14:45:07.019500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.016 [2024-11-02 14:45:07.019527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.016 [2024-11-02 14:45:07.019552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.016 [2024-11-02 14:45:07.019590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.016 [2024-11-02 14:45:07.019617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.016 [2024-11-02 14:45:07.019644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.016 [2024-11-02 14:45:07.019670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.016 [2024-11-02 14:45:07.019697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.016 [2024-11-02 14:45:07.019724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.016 [2024-11-02 14:45:07.019751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.016 [2024-11-02 14:45:07.019776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.016 [2024-11-02 14:45:07.019803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.016 [2024-11-02 14:45:07.019828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.016 [2024-11-02 14:45:07.019855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.016 [2024-11-02 14:45:07.019879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.016 [2024-11-02 14:45:07.019907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.016 [2024-11-02 14:45:07.019933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.016 [2024-11-02 14:45:07.019960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.016 [2024-11-02 14:45:07.019984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.016 [2024-11-02 14:45:07.020011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.016 [2024-11-02 14:45:07.020035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.016 [2024-11-02 14:45:07.020062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.016 [2024-11-02 14:45:07.020087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.016 [2024-11-02 14:45:07.020115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.016 [2024-11-02 14:45:07.020139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.016 [2024-11-02 14:45:07.020168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.016 [2024-11-02 14:45:07.020192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.016 [2024-11-02 14:45:07.020221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.016 [2024-11-02 14:45:07.020251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.016 [2024-11-02 14:45:07.020289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.016 [2024-11-02 14:45:07.020316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.016 [2024-11-02 14:45:07.020346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:15.016 [2024-11-02 14:45:07.020369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.016 [2024-11-02 14:45:07.020397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.016 [2024-11-02 14:45:07.020421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.016 [2024-11-02 14:45:07.020450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.016 [2024-11-02 14:45:07.020473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.016 [2024-11-02 14:45:07.020501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.016 [2024-11-02 14:45:07.020526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.016 [2024-11-02 14:45:07.020553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.016 [2024-11-02 14:45:07.020577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.016 [2024-11-02 14:45:07.020603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.016 [2024-11-02 14:45:07.020629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.016 [2024-11-02 14:45:07.020655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.016 [2024-11-02 14:45:07.020681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.016 [2024-11-02 14:45:07.020707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.016 [2024-11-02 14:45:07.020732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.016 [2024-11-02 14:45:07.020759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.016 [2024-11-02 14:45:07.020784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.017 [2024-11-02 14:45:07.020810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.017 [2024-11-02 14:45:07.020835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.017 [2024-11-02 14:45:07.020861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:15.017 [2024-11-02 14:45:07.020886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.017 [2024-11-02 14:45:07.020919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.017 [2024-11-02 14:45:07.020944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.017 [2024-11-02 14:45:07.020970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.017 [2024-11-02 14:45:07.020994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.017 [2024-11-02 14:45:07.021023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.017 [2024-11-02 14:45:07.021047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.017 [2024-11-02 14:45:07.021076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.017 [2024-11-02 14:45:07.021099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.017 [2024-11-02 14:45:07.021127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.017 [2024-11-02 14:45:07.021151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.017 [2024-11-02 14:45:07.021180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.017 [2024-11-02 14:45:07.021204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.017 [2024-11-02 14:45:07.021232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.017 [2024-11-02 14:45:07.021269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.017 [2024-11-02 14:45:07.021300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.017 [2024-11-02 14:45:07.021325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.017 [2024-11-02 14:45:07.021353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.017 [2024-11-02 14:45:07.021378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.017 [2024-11-02 14:45:07.021406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.017 [2024-11-02 
14:45:07.021429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.017 [2024-11-02 14:45:07.021456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.017 [2024-11-02 14:45:07.021481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.017 [2024-11-02 14:45:07.021508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.017 [2024-11-02 14:45:07.021532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.017 [2024-11-02 14:45:07.021559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.017 [2024-11-02 14:45:07.021589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.017 [2024-11-02 14:45:07.021617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.017 [2024-11-02 14:45:07.021642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.017 [2024-11-02 14:45:07.021668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.017 [2024-11-02 14:45:07.021694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.017 [2024-11-02 14:45:07.021721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.017 [2024-11-02 14:45:07.021747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.017 [2024-11-02 14:45:07.021773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.017 [2024-11-02 14:45:07.021799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.017 [2024-11-02 14:45:07.021826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.017 [2024-11-02 14:45:07.021851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.017 [2024-11-02 14:45:07.021877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.017 [2024-11-02 14:45:07.021904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.017 [2024-11-02 14:45:07.021930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.017 [2024-11-02 14:45:07.021955] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.017 [2024-11-02 14:45:07.021982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.017 [2024-11-02 14:45:07.022007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.017 [2024-11-02 14:45:07.022036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.017 [2024-11-02 14:45:07.022061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.017 [2024-11-02 14:45:07.022090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.017 [2024-11-02 14:45:07.022113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.017 [2024-11-02 14:45:07.022142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.017 [2024-11-02 14:45:07.022166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.017 [2024-11-02 14:45:07.022193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.017 [2024-11-02 14:45:07.022217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.017 [2024-11-02 14:45:07.022250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.017 [2024-11-02 14:45:07.022285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.017 [2024-11-02 14:45:07.022312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.017 [2024-11-02 14:45:07.022336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.017 [2024-11-02 14:45:07.022364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.017 [2024-11-02 14:45:07.022387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.017 [2024-11-02 14:45:07.022413] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bb970 is same with the state(6) to be set 00:29:15.017 [2024-11-02 14:45:07.024445] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:29:15.017 [2024-11-02 14:45:07.024490] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:29:15.017 [2024-11-02 14:45:07.024523] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 
00:29:15.279 task offset: 26240 on job bdev=Nvme1n1 fails
00:29:15.279 
00:29:15.279 Latency(us)
00:29:15.279 [2024-11-02T13:45:07.334Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:15.279 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:15.279 Job: Nvme1n1 ended in about 0.89 seconds with error
00:29:15.279 Verification LBA range: start 0x0 length 0x400
00:29:15.279 Nvme1n1 : 0.89 215.79 13.49 71.93 0.00 219884.71 5631.24 254765.13
00:29:15.279 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:15.279 Job: Nvme2n1 ended in about 0.90 seconds with error
00:29:15.279 Verification LBA range: start 0x0 length 0x400
00:29:15.279 Nvme2n1 : 0.90 213.10 13.32 71.03 0.00 218039.09 12281.93 256318.58
00:29:15.279 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:15.279 Job: Nvme3n1 ended in about 0.92 seconds with error
00:29:15.279 Verification LBA range: start 0x0 length 0x400
00:29:15.279 Nvme3n1 : 0.92 209.33 13.08 69.78 0.00 217541.40 23301.69 264085.81
00:29:15.279 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:15.279 Job: Nvme4n1 ended in about 0.91 seconds with error
00:29:15.279 Verification LBA range: start 0x0 length 0x400
00:29:15.279 Nvme4n1 : 0.91 209.53 13.10 2.21 0.00 279709.77 22330.79 271853.04
00:29:15.279 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:15.279 Job: Nvme5n1 ended in about 0.94 seconds with error
00:29:15.279 Verification LBA range: start 0x0 length 0x400
00:29:15.279 Nvme5n1 : 0.94 135.53 8.47 67.77 0.00 286828.22 57865.86 206608.31
00:29:15.279 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:15.279 Job: Nvme6n1 ended in about 0.95 seconds with error
00:29:15.279 Verification LBA range: start 0x0 length 0x400
00:29:15.279 Nvme6n1 : 0.95 134.83 8.43 67.41 0.00 282340.06 22427.88 259425.47
00:29:15.279 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:15.279 Job: Nvme7n1 ended in about 0.94 seconds with error
00:29:15.279 Verification LBA range: start 0x0 length 0x400
00:29:15.279 Nvme7n1 : 0.94 141.79 8.86 62.90 0.00 271214.62 19126.80 242337.56
00:29:15.279 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:15.279 Job: Nvme8n1 ended in about 0.97 seconds with error
00:29:15.279 Verification LBA range: start 0x0 length 0x400
00:29:15.279 Nvme8n1 : 0.97 132.63 8.29 66.31 0.00 275637.54 42525.58 257872.02
00:29:15.279 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:15.279 Verification LBA range: start 0x0 length 0x400
00:29:15.280 Nvme9n1 : 0.91 210.55 13.16 0.00 0.00 251976.31 24369.68 245444.46
00:29:15.280 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:15.280 Job: Nvme10n1 ended in about 0.93 seconds with error
00:29:15.280 Verification LBA range: start 0x0 length 0x400
00:29:15.280 Nvme10n1 : 0.93 137.40 8.59 68.70 0.00 252529.52 23787.14 271853.04
00:29:15.280 [2024-11-02T13:45:07.335Z] ===================================================================================================================
00:29:15.280 [2024-11-02T13:45:07.335Z] Total : 1740.46 108.78 548.03 0.00 252199.06 5631.24 271853.04
00:29:15.280 [2024-11-02 14:45:07.051720] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:29:15.280 [2024-11-02 14:45:07.051824] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8]
resetting controller 00:29:15.280 [2024-11-02 14:45:07.052505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.280 [2024-11-02 14:45:07.052545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x190f4e0 with addr=10.0.0.2, port=4420 00:29:15.280 [2024-11-02 14:45:07.052581] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190f4e0 is same with the state(6) to be set 00:29:15.280 [2024-11-02 14:45:07.052733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.280 [2024-11-02 14:45:07.052769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x192d3f0 with addr=10.0.0.2, port=4420 00:29:15.280 [2024-11-02 14:45:07.052796] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x192d3f0 is same with the state(6) to be set 00:29:15.280 [2024-11-02 14:45:07.052986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.280 [2024-11-02 14:45:07.053017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b58c0 with addr=10.0.0.2, port=4420 00:29:15.280 [2024-11-02 14:45:07.053045] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b58c0 is same with the state(6) to be set 00:29:15.280 [2024-11-02 14:45:07.053190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.280 [2024-11-02 14:45:07.053221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x190f8a0 with addr=10.0.0.2, port=4420 00:29:15.280 [2024-11-02 14:45:07.053249] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190f8a0 is same with the state(6) to be set 00:29:15.280 [2024-11-02 14:45:07.053313] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:15.280 [2024-11-02 14:45:07.053348] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:15.280 [2024-11-02 14:45:07.053381] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:15.280 [2024-11-02 14:45:07.053412] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:15.280 [2024-11-02 14:45:07.053444] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:15.280 [2024-11-02 14:45:07.053473] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:29:15.280 [2024-11-02 14:45:07.053834] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:29:15.280 [2024-11-02 14:45:07.053865] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:29:15.280 [2024-11-02 14:45:07.053896] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:29:15.280 [2024-11-02 14:45:07.053925] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.280 [2024-11-02 14:45:07.053953] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:29:15.280 [2024-11-02 14:45:07.053994] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:29:15.280 [2024-11-02 14:45:07.054121] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x190f4e0 (9): Bad file descriptor 00:29:15.280 [2024-11-02 14:45:07.054163] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x192d3f0 (9): Bad file descriptor 00:29:15.280 [2024-11-02 14:45:07.054197] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b58c0 (9): Bad file descriptor 00:29:15.280 [2024-11-02 14:45:07.054229] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x190f8a0 (9): Bad file descriptor 00:29:15.280 [2024-11-02 14:45:07.054445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.280 [2024-11-02 14:45:07.054478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b34b0 with addr=10.0.0.2, port=4420 00:29:15.280 [2024-11-02 14:45:07.054505] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b34b0 is same with the state(6) to be set 00:29:15.280 [2024-11-02 14:45:07.054647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.280 [2024-11-02 14:45:07.054677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18dfb90 with addr=10.0.0.2, port=4420 00:29:15.280 [2024-11-02 14:45:07.054705] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18dfb90 is same with the state(6) to be set 00:29:15.280 [2024-11-02 14:45:07.054845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.280 [2024-11-02 14:45:07.054876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a9b60 with addr=10.0.0.2, port=4420 00:29:15.280 [2024-11-02 14:45:07.054902] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a9b60 is same with the state(6) to be set 00:29:15.280 [2024-11-02 14:45:07.055065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.280 [2024-11-02 14:45:07.055095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b5d40 with addr=10.0.0.2, port=4420 00:29:15.280 [2024-11-02 14:45:07.055123] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b5d40 is same with the state(6) to be set 00:29:15.280 [2024-11-02 14:45:07.055322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.280 [2024-11-02 14:45:07.055354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ab530 
with addr=10.0.0.2, port=4420 00:29:15.280 [2024-11-02 14:45:07.055381] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ab530 is same with the state(6) to be set 00:29:15.280 [2024-11-02 14:45:07.055512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.280 [2024-11-02 14:45:07.055543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13c1610 with addr=10.0.0.2, port=4420 00:29:15.280 [2024-11-02 14:45:07.055570] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c1610 is same with the state(6) to be set 00:29:15.280 [2024-11-02 14:45:07.055596] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:29:15.280 [2024-11-02 14:45:07.055619] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:29:15.280 [2024-11-02 14:45:07.055646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:29:15.280 [2024-11-02 14:45:07.055677] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:29:15.280 [2024-11-02 14:45:07.055702] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:29:15.280 [2024-11-02 14:45:07.055723] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:29:15.280 [2024-11-02 14:45:07.055758] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:29:15.280 [2024-11-02 14:45:07.055784] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:29:15.280 [2024-11-02 14:45:07.055807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:29:15.280 [2024-11-02 14:45:07.055834] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:29:15.280 [2024-11-02 14:45:07.055858] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:29:15.280 [2024-11-02 14:45:07.055879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:29:15.280 [2024-11-02 14:45:07.055953] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.280 [2024-11-02 14:45:07.055981] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.280 [2024-11-02 14:45:07.056002] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.280 [2024-11-02 14:45:07.056024] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.280 [2024-11-02 14:45:07.056050] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b34b0 (9): Bad file descriptor 00:29:15.280 [2024-11-02 14:45:07.056083] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18dfb90 (9): Bad file descriptor 00:29:15.280 [2024-11-02 14:45:07.056115] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a9b60 (9): Bad file descriptor 00:29:15.280 [2024-11-02 14:45:07.056147] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b5d40 (9): Bad file descriptor 00:29:15.280 [2024-11-02 14:45:07.056178] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ab530 (9): Bad file descriptor 00:29:15.280 [2024-11-02 14:45:07.056209] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c1610 (9): Bad file descriptor 00:29:15.280 [2024-11-02 14:45:07.056281] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:29:15.280 [2024-11-02 14:45:07.056310] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:29:15.280 [2024-11-02 14:45:07.056333] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:29:15.280 [2024-11-02 14:45:07.056360] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:29:15.280 [2024-11-02 14:45:07.056385] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:29:15.280 [2024-11-02 14:45:07.056406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:29:15.280 [2024-11-02 14:45:07.056433] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:29:15.280 [2024-11-02 14:45:07.056457] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:29:15.280 [2024-11-02 14:45:07.056479] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:29:15.280 [2024-11-02 14:45:07.056506] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.280 [2024-11-02 14:45:07.056530] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.280 [2024-11-02 14:45:07.056552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.280 [2024-11-02 14:45:07.056579] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:29:15.280 [2024-11-02 14:45:07.056602] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:29:15.280 [2024-11-02 14:45:07.056630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:29:15.280 [2024-11-02 14:45:07.056657] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:29:15.281 [2024-11-02 14:45:07.056680] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:29:15.281 [2024-11-02 14:45:07.056703] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:29:15.281 [2024-11-02 14:45:07.056777] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.281 [2024-11-02 14:45:07.056805] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.281 [2024-11-02 14:45:07.056826] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.281 [2024-11-02 14:45:07.056847] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.281 [2024-11-02 14:45:07.056868] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.281 [2024-11-02 14:45:07.056887] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.569 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # nvmfpid= 00:29:15.569 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # sleep 1 00:29:16.526 14:45:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@143 -- # kill -9 1461020 00:29:16.526 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 143: kill: (1461020) - No such process 00:29:16.526 14:45:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@143 -- # true 00:29:16.526 14:45:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@145 -- # stoptarget 00:29:16.526 14:45:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:16.526 14:45:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:16.526 14:45:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:16.526 14:45:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:16.526 14:45:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # nvmfcleanup 00:29:16.526 14:45:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:29:16.526 14:45:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:16.526 14:45:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:29:16.526 14:45:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:16.526 14:45:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:16.526 rmmod nvme_tcp 00:29:16.526 rmmod nvme_fabrics 00:29:16.526 rmmod nvme_keyring 00:29:16.784 14:45:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:16.784 14:45:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:29:16.784 14:45:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:29:16.784 14:45:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:29:16.784 14:45:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:29:16.784 14:45:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:29:16.784 14:45:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:29:16.784 14:45:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:29:16.784 14:45:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@787 -- # iptables-save 00:29:16.784 14:45:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:29:16.784 14:45:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@787 -- # iptables-restore 00:29:16.784 14:45:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:16.784 14:45:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:16.784 14:45:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:16.784 14:45:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:16.784 14:45:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:18.687 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:18.687 00:29:18.687 real 0m7.660s 00:29:18.687 user 0m18.388s 00:29:18.687 sys 0m1.566s 00:29:18.687 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:18.687 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:18.687 ************************************ 00:29:18.687 END TEST nvmf_shutdown_tc3 00:29:18.687 ************************************ 00:29:18.687 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@173 -- # [[ e810 == \e\8\1\0 ]] 00:29:18.687 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@173 -- # [[ tcp == \r\d\m\a ]] 00:29:18.687 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@174 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:29:18.687 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:18.687 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:18.687 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:18.687 ************************************ 00:29:18.687 START TEST nvmf_shutdown_tc4 00:29:18.687 ************************************ 00:29:18.687 14:45:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc4 00:29:18.687 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # starttarget 00:29:18.687 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:18.687 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:29:18.687 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:18.687 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@472 -- # prepare_net_devs 00:29:18.687 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@434 -- # local -g is_hw=no 00:29:18.687 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@436 -- # remove_spdk_ns 00:29:18.687 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:18.687 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:18.687 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:18.687 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:29:18.687 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:29:18.687 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@321 -- # local -ga x722 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:18.688 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@375 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:18.688 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:18.688 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:18.688 14:45:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:18.688 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # is_hw=yes 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:18.688 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:18.947 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:18.947 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:18.947 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:18.947 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:18.947 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:18.947 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:18.947 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:18.947 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:18.947 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:18.947 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:29:18.947 00:29:18.947 --- 10.0.0.2 ping statistics --- 00:29:18.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:18.947 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:29:18.947 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:18.947 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:18.947 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:29:18.947 00:29:18.947 --- 10.0.0.1 ping statistics --- 00:29:18.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:18.947 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:29:18.947 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:18.947 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # return 0 00:29:18.947 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:29:18.947 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:18.947 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:29:18.947 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:29:18.947 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:18.947 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:29:18.947 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:29:18.947 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:18.947 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:29:18.947 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:18.947 14:45:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:18.947 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@505 -- # nvmfpid=1462325 00:29:18.947 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@506 -- # waitforlisten 1462325 00:29:18.947 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@831 -- # '[' -z 1462325 ']' 00:29:18.947 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:18.947 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:18.947 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:18.947 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:18.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:18.948 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:18.948 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:18.948 [2024-11-02 14:45:10.940196] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:29:18.948 [2024-11-02 14:45:10.940292] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:19.206 [2024-11-02 14:45:11.011039] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:19.206 [2024-11-02 14:45:11.102423] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:19.206 [2024-11-02 14:45:11.102489] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:19.206 [2024-11-02 14:45:11.102526] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:19.206 [2024-11-02 14:45:11.102538] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:19.206 [2024-11-02 14:45:11.102547] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
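At this point nvmfappstart has launched nvmf_tgt inside the cvl_0_0_ns_spdk namespace with -i 0 -e 0xFFFF -m 0x1E and waitforlisten is polling /var/tmp/spdk.sock. The core mask 0x1E is binary 11110, i.e. cores 1 through 4, which matches the four reactor notices that follow. A minimal sketch of that start-and-wait step (assumes nvmf_tgt is on PATH; the test actually runs the in-tree binary through the ip netns exec wrapper shown above):

    nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &      # 0x1E = 0b11110 -> reactors on cores 1,2,3,4
    nvmfpid=$!
    # crude stand-in for waitforlisten: poll until the RPC unix socket exists
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done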
00:29:19.206 [2024-11-02 14:45:11.102636] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:29:19.206 [2024-11-02 14:45:11.102699] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:29:19.206 [2024-11-02 14:45:11.102722] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:29:19.206 [2024-11-02 14:45:11.102725] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:19.206 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:19.206 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # return 0 00:29:19.206 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:29:19.206 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:19.206 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:19.466 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:19.466 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:19.466 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.466 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:19.466 [2024-11-02 14:45:11.275847] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:19.466 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.466 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:19.466 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:19.466 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:19.466 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:19.466 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:19.466 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:19.466 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:19.466 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:19.466 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:19.466 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:19.466 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:19.466 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:29:19.466 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:19.466 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:19.466 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:19.466 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:19.466 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:19.466 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:19.466 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:19.466 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:19.466 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:19.466 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:19.466 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:19.466 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:19.466 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:19.466 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:19.466 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.466 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:19.466 Malloc1 00:29:19.466 [2024-11-02 14:45:11.365122] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:19.466 Malloc2 00:29:19.466 Malloc3 00:29:19.466 Malloc4 00:29:19.725 Malloc5 00:29:19.725 Malloc6 00:29:19.725 Malloc7 00:29:19.725 Malloc8 00:29:19.725 Malloc9 00:29:19.982 Malloc10 00:29:19.982 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.982 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:19.982 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:19.982 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:19.982 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@154 -- # perfpid=1462502 00:29:19.982 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@153 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:29:19.982 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # sleep 5 00:29:19.982 [2024-11-02 14:45:11.878434] 
subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:29:25.262 14:45:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@157 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:25.262 14:45:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@160 -- # killprocess 1462325 00:29:25.262 14:45:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 1462325 ']' 00:29:25.262 14:45:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 1462325 00:29:25.262 14:45:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # uname 00:29:25.262 14:45:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:25.262 14:45:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1462325 00:29:25.262 14:45:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:25.262 14:45:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:25.262 14:45:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1462325' 00:29:25.262 killing process with pid 1462325 00:29:25.262 14:45:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@969 -- # kill 1462325 00:29:25.262 14:45:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@974 -- # wait 1462325 00:29:25.262 [2024-11-02 14:45:16.883301] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7900 is same with the state(6) to be set 00:29:25.262 [2024-11-02 14:45:16.883404] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7900 is same with the state(6) to be set 00:29:25.262 [2024-11-02 14:45:16.883421] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7900 is same with the state(6) to be set 00:29:25.262 [2024-11-02 14:45:16.883434] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7900 is same with the state(6) to be set 00:29:25.262 [2024-11-02 14:45:16.883446] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7900 is same with the state(6) to be set 00:29:25.262 [2024-11-02 14:45:16.883538] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7900 is same with the state(6) to be set 00:29:25.262 [2024-11-02 14:45:16.884823] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe41a50 is same with the state(6) to be set 00:29:25.262 [2024-11-02 14:45:16.884866] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe41a50 is same with the state(6) to be set 00:29:25.262 [2024-11-02 14:45:16.884884] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe41a50 is same with the state(6) to be set 00:29:25.262 
[2024-11-02 14:45:16.884898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe41a50 is same with the state(6) to be set 00:29:25.262 [2024-11-02 14:45:16.884911] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe41a50 is same with the state(6) to be set 00:29:25.262 [2024-11-02 14:45:16.884924] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe41a50 is same with the state(6) to be set 00:29:25.262 [2024-11-02 14:45:16.884938] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe41a50 is same with the state(6) to be set 00:29:25.262 [2024-11-02 14:45:16.884950] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe41a50 is same with the state(6) to be set 00:29:25.262 [2024-11-02 14:45:16.884963] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe41a50 is same with the state(6) to be set 00:29:25.262 [2024-11-02 14:45:16.884975] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe41a50 is same with the state(6) to be set 00:29:25.262 [2024-11-02 14:45:16.884987] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe41a50 is same with the state(6) to be set 00:29:25.262 [2024-11-02 14:45:16.885000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe41a50 is same with the state(6) to be set 00:29:25.262 Write completed with error (sct=0, sc=8) 00:29:25.262 Write completed with error (sct=0, sc=8) 00:29:25.262 Write completed with error (sct=0, sc=8) 00:29:25.262 starting I/O failed: -6 00:29:25.262 Write completed with error (sct=0, sc=8) 00:29:25.262 Write completed with error (sct=0, sc=8) 00:29:25.262 Write completed with error (sct=0, sc=8) 00:29:25.262 Write completed with error (sct=0, sc=8) 00:29:25.262 starting I/O failed: -6 00:29:25.262 Write completed with error (sct=0, sc=8) 00:29:25.262 Write completed with error (sct=0, sc=8) 00:29:25.262 Write completed with error (sct=0, sc=8) 00:29:25.262 Write completed with error (sct=0, sc=8) 00:29:25.262 starting I/O failed: -6 00:29:25.262 Write completed with error (sct=0, sc=8) 00:29:25.262 Write completed with error (sct=0, sc=8) 00:29:25.262 Write completed with error (sct=0, sc=8) 00:29:25.262 Write completed with error (sct=0, sc=8) 00:29:25.262 starting I/O failed: -6 00:29:25.262 Write completed with error (sct=0, sc=8) 00:29:25.262 Write completed with error (sct=0, sc=8) 00:29:25.262 Write completed with error (sct=0, sc=8) 00:29:25.262 Write completed with error (sct=0, sc=8) 00:29:25.263 starting I/O failed: -6 00:29:25.263 Write completed with error (sct=0, sc=8) 00:29:25.263 Write completed with error (sct=0, sc=8) 00:29:25.263 Write completed with error (sct=0, sc=8) 00:29:25.263 Write completed with error (sct=0, sc=8) 00:29:25.263 starting I/O failed: -6 00:29:25.263 Write completed with error (sct=0, sc=8) 00:29:25.263 Write completed with error (sct=0, sc=8) 00:29:25.263 Write completed with error (sct=0, sc=8) 00:29:25.263 Write completed with error (sct=0, sc=8) 00:29:25.263 starting I/O failed: -6 00:29:25.263 Write completed with error (sct=0, sc=8) 00:29:25.263 Write completed with error (sct=0, sc=8) 00:29:25.263 Write completed with error (sct=0, sc=8) 00:29:25.263 Write completed with error (sct=0, sc=8) 00:29:25.263 starting I/O failed: -6 00:29:25.263 Write completed with error (sct=0, sc=8) 00:29:25.263 Write completed with error (sct=0, sc=8) 00:29:25.263 Write 
completed with error (sct=0, sc=8) 00:29:25.263 Write completed with error (sct=0, sc=8) 00:29:25.263 starting I/O failed: -6 00:29:25.263 Write completed with error (sct=0, sc=8) 00:29:25.263 Write completed with error (sct=0, sc=8) 00:29:25.263 [2024-11-02 14:45:16.892888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.263 starting I/O failed: -6 00:29:25.263 starting I/O failed: -6 00:29:25.263 starting I/O failed: -6 00:29:25.263 starting I/O failed: -6 00:29:25.263 [2024-11-02 14:45:16.894151] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe461f0 is same with the state(6) to be set 00:29:25.263 [2024-11-02 14:45:16.894195] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe461f0 is same with the state(6) to be set 00:29:25.263 [2024-11-02 14:45:16.894210] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe461f0 is same with the state(6) to be set 00:29:25.263 [2024-11-02 14:45:16.894224] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe461f0 is same with the state(6) to be set 00:29:25.263 [2024-11-02 14:45:16.894237] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe461f0 is same with the state(6) to be set 00:29:25.263 starting I/O failed: -6 00:29:25.263 starting I/O failed: -6 00:29:25.263 starting I/O failed: -6 00:29:25.263 Write completed with error (sct=0, sc=8) 00:29:25.263 Write completed with error (sct=0, sc=8) 00:29:25.263 starting I/O failed: -6 00:29:25.263 Write completed with error (sct=0, sc=8) 00:29:25.263 starting I/O failed: -6 00:29:25.263 Write completed with error (sct=0, sc=8) 00:29:25.263 Write completed with error (sct=0, sc=8) 00:29:25.263 Write completed with error (sct=0, sc=8) 00:29:25.263 starting I/O failed: -6 00:29:25.263 Write completed with error (sct=0, sc=8) 00:29:25.263 starting I/O failed: -6 00:29:25.263 Write completed with error (sct=0, sc=8) 00:29:25.263 Write completed with error (sct=0, sc=8) 00:29:25.263 Write completed with error (sct=0, sc=8) 00:29:25.263 starting I/O failed: -6 00:29:25.263 Write completed with error (sct=0, sc=8) 00:29:25.263 starting I/O failed: -6 00:29:25.263 Write completed with error (sct=0, sc=8) 00:29:25.263 Write completed with error (sct=0, sc=8) 00:29:25.263 Write completed with error (sct=0, sc=8) 00:29:25.263 starting I/O failed: -6 00:29:25.263 Write completed with error (sct=0, sc=8) 00:29:25.263 starting I/O failed: -6 00:29:25.263 Write completed with error (sct=0, sc=8) 00:29:25.263 Write completed with error (sct=0, sc=8) 00:29:25.263 Write completed with error (sct=0, sc=8) 00:29:25.263 starting I/O failed: -6 00:29:25.263 Write completed with error (sct=0, sc=8) 00:29:25.263 starting I/O failed: -6 00:29:25.263 Write completed with error (sct=0, sc=8) 00:29:25.263 Write completed with error (sct=0, sc=8) 00:29:25.263 Write completed with error (sct=0, sc=8) 00:29:25.263 starting I/O failed: -6 00:29:25.263 Write completed with error (sct=0, sc=8) 00:29:25.263 starting I/O failed: -6 00:29:25.263 Write completed with error (sct=0, sc=8) 00:29:25.263 Write completed with error (sct=0, sc=8) 00:29:25.263 Write completed with error (sct=0, sc=8) 00:29:25.263 starting I/O failed: -6 00:29:25.263 Write completed with error (sct=0, sc=8) 00:29:25.263 starting I/O failed: -6 00:29:25.263 Write completed with error (sct=0, sc=8) 00:29:25.263 Write completed with error 
(sct=0, sc=8) 00:29:25.263 Write completed with error (sct=0, sc=8) 00:29:25.263 starting I/O failed: -6 00:29:25.263 Write completed with error (sct=0, sc=8) 00:29:25.263 starting I/O failed: -6 00:29:25.263 Write completed with error (sct=0, sc=8) 00:29:25.263 Write completed with error (sct=0, sc=8) 00:29:25.263 Write completed with error (sct=0, sc=8) 00:29:25.263 starting I/O failed: -6 00:29:25.263 Write completed with error (sct=0, sc=8) 00:29:25.263 starting I/O failed: -6 00:29:25.263 Write completed with error (sct=0, sc=8) 00:29:25.263 Write completed with error (sct=0, sc=8) 00:29:25.263 Write completed with error (sct=0, sc=8) 00:29:25.263 starting I/O failed: -6 00:29:25.263 Write completed with error (sct=0, sc=8) 00:29:25.263 starting I/O failed: -6 00:29:25.263 Write completed with error (sct=0, sc=8) 00:29:25.263 Write completed with error (sct=0, sc=8) 00:29:25.263 [2024-11-02 14:45:16.895538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.263 NVMe io qpair process completion error 00:29:25.263 [2024-11-02 14:45:16.895850] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe44020 is same with the state(6) to be set 00:29:25.263 [2024-11-02 14:45:16.895900] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe44020 is same with the state(6) to be set 00:29:25.263 [2024-11-02 14:45:16.895924] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe44020 is same with the state(6) to be set 00:29:25.263 [2024-11-02 14:45:16.895942] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe44020 is same with the state(6) to be set 00:29:25.263 [2024-11-02 14:45:16.895956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe44020 is same with the state(6) to be set 00:29:25.263 [2024-11-02 14:45:16.895968] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe44020 is same with the state(6) to be set 00:29:25.263 [2024-11-02 14:45:16.895980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe44020 is same with the state(6) to be set 00:29:25.263 [2024-11-02 14:45:16.896500] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe444f0 is same with the state(6) to be set 00:29:25.263 [2024-11-02 14:45:16.896534] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe444f0 is same with the state(6) to be set 00:29:25.263 [2024-11-02 14:45:16.896550] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe444f0 is same with the state(6) to be set 00:29:25.263 [2024-11-02 14:45:16.896562] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe444f0 is same with the state(6) to be set 00:29:25.263 [2024-11-02 14:45:16.896575] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe444f0 is same with the state(6) to be set 00:29:25.263 [2024-11-02 14:45:16.896587] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe444f0 is same with the state(6) to be set 00:29:25.263 [2024-11-02 14:45:16.896599] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe444f0 is same with the state(6) to be set 00:29:25.263 [2024-11-02 14:45:16.896611] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe444f0 is same with the state(6) to be set 
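The flood of aborted writes above and below is the scenario tc4 is meant to produce: spdk_nvme_perf is left running against 10.0.0.2:4420 while the target (pid 1462325) is signalled to shut down, so every in-flight command completes with an abort status (sct=0, sc=8 corresponds to the generic "command aborted due to SQ deletion" code) and the qpairs report CQ transport error -6, i.e. ENXIO, "No such device or address". Condensed, the sequence the test drives looks roughly like this (flags copied from the perf invocation earlier in the trace; assumes spdk_nvme_perf from build/bin is reachable on PATH):

    spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 \
        -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 &
    perfpid=$!
    sleep 5
    kill "$nvmfpid"      # shutting the target down mid-run is what aborts the outstanding writes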
00:29:25.263 [2024-11-02 14:45:16.897364] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe449c0 is same with the state(6) to be set 00:29:25.263 [2024-11-02 14:45:16.897406] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe449c0 is same with the state(6) to be set 00:29:25.263 [2024-11-02 14:45:16.897426] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe449c0 is same with the state(6) to be set 00:29:25.263 [2024-11-02 14:45:16.897440] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe449c0 is same with the state(6) to be set 00:29:25.263 [2024-11-02 14:45:16.897452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe449c0 is same with the state(6) to be set 00:29:25.263 [2024-11-02 14:45:16.897468] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe449c0 is same with the state(6) to be set 00:29:25.263 [2024-11-02 14:45:16.898175] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe43b50 is same with the state(6) to be set 00:29:25.263 [2024-11-02 14:45:16.898206] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe43b50 is same with the state(6) to be set 00:29:25.263 [2024-11-02 14:45:16.898221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe43b50 is same with the state(6) to be set 00:29:25.263 [2024-11-02 14:45:16.898244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe43b50 is same with the state(6) to be set 00:29:25.263 [2024-11-02 14:45:16.898269] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe43b50 is same with the state(6) to be set 00:29:25.263 [2024-11-02 14:45:16.898285] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe43b50 is same with the state(6) to be set 00:29:25.263 [2024-11-02 14:45:16.899639] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe45380 is same with the state(6) to be set 00:29:25.263 [2024-11-02 14:45:16.899673] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe45380 is same with the state(6) to be set 00:29:25.263 [2024-11-02 14:45:16.899688] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe45380 is same with the state(6) to be set 00:29:25.263 [2024-11-02 14:45:16.899701] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe45380 is same with the state(6) to be set 00:29:25.263 [2024-11-02 14:45:16.899737] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe45380 is same with the state(6) to be set 00:29:25.263 [2024-11-02 14:45:16.901158] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe45850 is same with the state(6) to be set 00:29:25.263 [2024-11-02 14:45:16.901189] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe45850 is same with the state(6) to be set 00:29:25.263 [2024-11-02 14:45:16.901204] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe45850 is same with the state(6) to be set 00:29:25.263 [2024-11-02 14:45:16.901217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe45850 is same with the state(6) to be set 00:29:25.263 [2024-11-02 14:45:16.901229] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe45850 is 
same with the state(6) to be set
00:29:25.263 [2024-11-02 14:45:16.901241] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe45850 is same with the state(6) to be set
00:29:25.263 [2024-11-02 14:45:16.901254] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe45850 is same with the state(6) to be set
00:29:25.263 [2024-11-02 14:45:16.901284] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe45850 is same with the state(6) to be set
00:29:25.264 [2024-11-02 14:45:16.902067] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe45d20 is same with the state(6) to be set [message repeated 8 times between 14:45:16.902067 and 14:45:16.902188]
00:29:25.264 Write completed with error (sct=0, sc=8) [message repeated for each outstanding write]
00:29:25.264 starting I/O failed: -6 [message repeated for each failed submission]
00:29:25.264 [2024-11-02 14:45:16.909037] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b69a0 is same with the state(6) to be set [message repeated 8 times between 14:45:16.909037 and 14:45:16.909196]
00:29:25.264 [2024-11-02 14:45:16.909798] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6e70 is same with the state(6) to be set [message repeated 7 times between 14:45:16.909798 and 14:45:16.909901]
00:29:25.264 [2024-11-02 14:45:16.910152] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6000 is same with the state(6) to be set [message repeated 6 times between 14:45:16.910152 and 14:45:16.910235]
00:29:25.264 Write completed with error (sct=0, sc=8) [message repeated for each remaining queued write, with "starting I/O failed: -6" for each failed submission]
00:29:25.265 [2024-11-02 14:45:16.914178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:25.265 NVMe io qpair process completion error
00:29:25.265 Write completed with error (sct=0, sc=8) [message repeated, with "starting I/O failed: -6" for each failed submission]
00:29:25.265 [2024-11-02 14:45:16.915438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.266 Write completed with error (sct=0, sc=8) [message repeated, with "starting I/O failed: -6" for each failed submission]
00:29:25.266 [2024-11-02 14:45:16.916555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:25.266 Write completed with error (sct=0, sc=8) [message repeated, with "starting I/O failed: -6" for each failed submission]
00:29:25.266 [2024-11-02 14:45:16.917784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:25.266 Write completed with error (sct=0, sc=8) [message repeated, with "starting I/O failed: -6" for each failed submission]
00:29:25.267 [2024-11-02 14:45:16.919536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.267 NVMe io qpair process completion error
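(Note on the messages above, not part of the captured test output: a CQ transport error of -6 is -ENXIO, "No such device or address", which spdk_nvme_qpair_process_completions() returns once the qpair's controller has been disconnected or failed; sct=0/sc=8 is the generic NVMe status "Command Aborted due to SQ Deletion", so every write still queued at teardown completes in error. The C sketch below is only a minimal illustration of that submit/poll pattern using the public SPDK NVMe API; struct io_ctx, the helper names, and the omitted controller-attach setup are assumptions, not the code the autotest runs.)

/*
 * Minimal sketch, illustration only -- not the SPDK autotest code.  It shows the
 * submit/poll pattern behind the three messages seen above.  struct io_ctx and the
 * helper names are hypothetical; controller probe/attach and buffer allocation are
 * omitted for brevity.
 */
#include <stdio.h>
#include "spdk/nvme.h"

struct io_ctx {
	int outstanding;	/* writes submitted but not yet completed */
};

/* Completion callback: logs errored completions, e.g. (sct=0, sc=8). */
static void
write_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
	struct io_ctx *ctx = arg;

	ctx->outstanding--;
	if (spdk_nvme_cpl_is_error(cpl)) {
		printf("Write completed with error (sct=%d, sc=%d)\n",
		       cpl->status.sct, cpl->status.sc);
	}
}

/* Submission path: once the qpair has failed, new writes are rejected, e.g. with -ENXIO. */
static int
submit_one_write(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
		 void *buf, uint64_t lba, uint32_t lba_count, struct io_ctx *ctx)
{
	int rc = spdk_nvme_ns_cmd_write(ns, qpair, buf, lba, lba_count,
					write_done, ctx, 0 /* io_flags */);
	if (rc != 0) {
		printf("starting I/O failed: %d\n", rc);
		return rc;
	}
	ctx->outstanding++;
	return 0;
}

/* Polling path: a negative return signals a transport-level error such as -6 (-ENXIO). */
static void
poll_until_drained(struct spdk_nvme_qpair *qpair, struct io_ctx *ctx)
{
	while (ctx->outstanding > 0) {
		int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0 /* no limit */);
		if (rc < 0) {
			printf("CQ transport error %d on qpair\n", (int)rc);
			break;
		}
	}
}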
00:29:25.267 Write completed with error (sct=0, sc=8) [message repeated, with "starting I/O failed: -6" for each failed submission]
00:29:25.267 [2024-11-02 14:45:16.920699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.267 Write completed with error (sct=0, sc=8) [message repeated, with "starting I/O failed: -6" for each failed submission]
00:29:25.267 [2024-11-02 14:45:16.921713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:25.268 Write completed with error (sct=0, sc=8) [message repeated, with "starting I/O failed: -6" for each failed submission]
00:29:25.268 [2024-11-02 14:45:16.922959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:25.268 Write completed with error (sct=0, sc=8) [message repeated, with "starting I/O failed: -6" for each failed submission]
00:29:25.268 [2024-11-02 14:45:16.925120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.268 NVMe io qpair process completion error
00:29:25.268 Write completed with error (sct=0, sc=8) [message repeated, with "starting I/O failed: -6" for each failed submission]
00:29:25.269 [2024-11-02 14:45:16.927092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.269 Write completed with error (sct=0, sc=8) [message repeated, with "starting I/O failed: -6" for each failed submission]
00:29:25.269 [2024-11-02 14:45:16.928334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:25.270 Write completed with error (sct=0, sc=8) [message repeated, with "starting I/O failed: -6" for each failed submission]
00:29:25.270 [2024-11-02 14:45:16.930473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:25.270 NVMe io qpair process completion error
00:29:25.270 Write completed with error (sct=0, sc=8) [message repeated, with "starting I/O failed: -6" for each failed submission]
00:29:25.270 [2024-11-02 14:45:16.931599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.270 Write completed with error (sct=0, sc=8) [message repeated, with "starting I/O failed: -6" for each failed submission]
00:29:25.270 Write completed
with error (sct=0, sc=8) 00:29:25.270 starting I/O failed: -6 00:29:25.270 Write completed with error (sct=0, sc=8) 00:29:25.270 Write completed with error (sct=0, sc=8) 00:29:25.270 Write completed with error (sct=0, sc=8) 00:29:25.270 starting I/O failed: -6 00:29:25.270 Write completed with error (sct=0, sc=8) 00:29:25.270 starting I/O failed: -6 00:29:25.270 Write completed with error (sct=0, sc=8) 00:29:25.270 Write completed with error (sct=0, sc=8) 00:29:25.270 [2024-11-02 14:45:16.932643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:25.270 starting I/O failed: -6 00:29:25.270 Write completed with error (sct=0, sc=8) 00:29:25.270 starting I/O failed: -6 00:29:25.270 Write completed with error (sct=0, sc=8) 00:29:25.270 starting I/O failed: -6 00:29:25.270 Write completed with error (sct=0, sc=8) 00:29:25.270 Write completed with error (sct=0, sc=8) 00:29:25.270 starting I/O failed: -6 00:29:25.270 Write completed with error (sct=0, sc=8) 00:29:25.270 starting I/O failed: -6 00:29:25.270 Write completed with error (sct=0, sc=8) 00:29:25.270 starting I/O failed: -6 00:29:25.270 Write completed with error (sct=0, sc=8) 00:29:25.270 Write completed with error (sct=0, sc=8) 00:29:25.270 starting I/O failed: -6 00:29:25.270 Write completed with error (sct=0, sc=8) 00:29:25.270 starting I/O failed: -6 00:29:25.270 Write completed with error (sct=0, sc=8) 00:29:25.270 starting I/O failed: -6 00:29:25.270 Write completed with error (sct=0, sc=8) 00:29:25.270 Write completed with error (sct=0, sc=8) 00:29:25.270 starting I/O failed: -6 00:29:25.270 Write completed with error (sct=0, sc=8) 00:29:25.270 starting I/O failed: -6 00:29:25.270 Write completed with error (sct=0, sc=8) 00:29:25.270 starting I/O failed: -6 00:29:25.270 Write completed with error (sct=0, sc=8) 00:29:25.270 Write completed with error (sct=0, sc=8) 00:29:25.270 starting I/O failed: -6 00:29:25.270 Write completed with error (sct=0, sc=8) 00:29:25.270 starting I/O failed: -6 00:29:25.270 Write completed with error (sct=0, sc=8) 00:29:25.270 starting I/O failed: -6 00:29:25.270 Write completed with error (sct=0, sc=8) 00:29:25.270 Write completed with error (sct=0, sc=8) 00:29:25.270 starting I/O failed: -6 00:29:25.270 Write completed with error (sct=0, sc=8) 00:29:25.270 starting I/O failed: -6 00:29:25.270 Write completed with error (sct=0, sc=8) 00:29:25.270 starting I/O failed: -6 00:29:25.270 Write completed with error (sct=0, sc=8) 00:29:25.270 Write completed with error (sct=0, sc=8) 00:29:25.270 starting I/O failed: -6 00:29:25.270 Write completed with error (sct=0, sc=8) 00:29:25.270 starting I/O failed: -6 00:29:25.270 Write completed with error (sct=0, sc=8) 00:29:25.270 starting I/O failed: -6 00:29:25.270 Write completed with error (sct=0, sc=8) 00:29:25.270 Write completed with error (sct=0, sc=8) 00:29:25.270 starting I/O failed: -6 00:29:25.270 Write completed with error (sct=0, sc=8) 00:29:25.270 starting I/O failed: -6 00:29:25.270 Write completed with error (sct=0, sc=8) 00:29:25.270 starting I/O failed: -6 00:29:25.270 Write completed with error (sct=0, sc=8) 00:29:25.270 Write completed with error (sct=0, sc=8) 00:29:25.270 starting I/O failed: -6 00:29:25.270 Write completed with error (sct=0, sc=8) 00:29:25.270 starting I/O failed: -6 00:29:25.270 Write completed with error (sct=0, sc=8) 00:29:25.270 starting I/O failed: -6 00:29:25.270 Write completed with error (sct=0, sc=8) 00:29:25.270 Write completed with 
error (sct=0, sc=8) 00:29:25.270 starting I/O failed: -6 00:29:25.270 Write completed with error (sct=0, sc=8) 00:29:25.271 starting I/O failed: -6 00:29:25.271 Write completed with error (sct=0, sc=8) 00:29:25.271 starting I/O failed: -6 00:29:25.271 Write completed with error (sct=0, sc=8) 00:29:25.271 Write completed with error (sct=0, sc=8) 00:29:25.271 starting I/O failed: -6 00:29:25.271 Write completed with error (sct=0, sc=8) 00:29:25.271 starting I/O failed: -6 00:29:25.271 Write completed with error (sct=0, sc=8) 00:29:25.271 starting I/O failed: -6 00:29:25.271 Write completed with error (sct=0, sc=8) 00:29:25.271 Write completed with error (sct=0, sc=8) 00:29:25.271 starting I/O failed: -6 00:29:25.271 Write completed with error (sct=0, sc=8) 00:29:25.271 starting I/O failed: -6 00:29:25.271 Write completed with error (sct=0, sc=8) 00:29:25.271 starting I/O failed: -6 00:29:25.271 Write completed with error (sct=0, sc=8) 00:29:25.271 Write completed with error (sct=0, sc=8) 00:29:25.271 starting I/O failed: -6 00:29:25.271 Write completed with error (sct=0, sc=8) 00:29:25.271 starting I/O failed: -6 00:29:25.271 Write completed with error (sct=0, sc=8) 00:29:25.271 starting I/O failed: -6 00:29:25.271 [2024-11-02 14:45:16.933887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:25.271 Write completed with error (sct=0, sc=8) 00:29:25.271 starting I/O failed: -6 00:29:25.271 Write completed with error (sct=0, sc=8) 00:29:25.271 starting I/O failed: -6 00:29:25.271 Write completed with error (sct=0, sc=8) 00:29:25.271 starting I/O failed: -6 00:29:25.271 Write completed with error (sct=0, sc=8) 00:29:25.271 starting I/O failed: -6 00:29:25.271 Write completed with error (sct=0, sc=8) 00:29:25.271 starting I/O failed: -6 00:29:25.271 Write completed with error (sct=0, sc=8) 00:29:25.271 starting I/O failed: -6 00:29:25.271 Write completed with error (sct=0, sc=8) 00:29:25.271 starting I/O failed: -6 00:29:25.271 Write completed with error (sct=0, sc=8) 00:29:25.271 starting I/O failed: -6 00:29:25.271 Write completed with error (sct=0, sc=8) 00:29:25.271 starting I/O failed: -6 00:29:25.271 Write completed with error (sct=0, sc=8) 00:29:25.271 starting I/O failed: -6 00:29:25.271 Write completed with error (sct=0, sc=8) 00:29:25.271 starting I/O failed: -6 00:29:25.271 Write completed with error (sct=0, sc=8) 00:29:25.271 starting I/O failed: -6 00:29:25.271 Write completed with error (sct=0, sc=8) 00:29:25.271 starting I/O failed: -6 00:29:25.271 Write completed with error (sct=0, sc=8) 00:29:25.271 starting I/O failed: -6 00:29:25.271 Write completed with error (sct=0, sc=8) 00:29:25.271 starting I/O failed: -6 00:29:25.271 Write completed with error (sct=0, sc=8) 00:29:25.271 starting I/O failed: -6 00:29:25.271 Write completed with error (sct=0, sc=8) 00:29:25.271 starting I/O failed: -6 00:29:25.271 Write completed with error (sct=0, sc=8) 00:29:25.271 starting I/O failed: -6 00:29:25.271 Write completed with error (sct=0, sc=8) 00:29:25.271 starting I/O failed: -6 00:29:25.271 Write completed with error (sct=0, sc=8) 00:29:25.271 starting I/O failed: -6 00:29:25.271 Write completed with error (sct=0, sc=8) 00:29:25.271 starting I/O failed: -6 00:29:25.271 Write completed with error (sct=0, sc=8) 00:29:25.271 starting I/O failed: -6 00:29:25.271 Write completed with error (sct=0, sc=8) 00:29:25.271 starting I/O failed: -6 00:29:25.271 Write completed with error (sct=0, sc=8) 00:29:25.271 
starting I/O failed: -6 00:29:25.271 Write completed with error (sct=0, sc=8) 00:29:25.271 starting I/O failed: -6 00:29:25.271 Write completed with error (sct=0, sc=8) 00:29:25.271 starting I/O failed: -6 00:29:25.271 Write completed with error (sct=0, sc=8) 00:29:25.271 starting I/O failed: -6 00:29:25.271 Write completed with error (sct=0, sc=8) 00:29:25.271 starting I/O failed: -6 00:29:25.271 Write completed with error (sct=0, sc=8) 00:29:25.271 starting I/O failed: -6 00:29:25.271 Write completed with error (sct=0, sc=8) 00:29:25.271 starting I/O failed: -6 00:29:25.271 Write completed with error (sct=0, sc=8) 00:29:25.271 starting I/O failed: -6 00:29:25.271 Write completed with error (sct=0, sc=8) 00:29:25.271 starting I/O failed: -6 00:29:25.271 Write completed with error (sct=0, sc=8) 00:29:25.271 starting I/O failed: -6 00:29:25.271 Write completed with error (sct=0, sc=8) 00:29:25.271 starting I/O failed: -6 00:29:25.271 Write completed with error (sct=0, sc=8) 00:29:25.271 starting I/O failed: -6 00:29:25.271 Write completed with error (sct=0, sc=8) 00:29:25.271 starting I/O failed: -6 00:29:25.271 Write completed with error (sct=0, sc=8) 00:29:25.271 starting I/O failed: -6 00:29:25.271 Write completed with error (sct=0, sc=8) 00:29:25.271 starting I/O failed: -6 00:29:25.271 Write completed with error (sct=0, sc=8) 00:29:25.271 starting I/O failed: -6 00:29:25.271 Write completed with error (sct=0, sc=8) 00:29:25.271 starting I/O failed: -6 00:29:25.271 Write completed with error (sct=0, sc=8) 00:29:25.271 starting I/O failed: -6 00:29:25.271 Write completed with error (sct=0, sc=8) 00:29:25.271 starting I/O failed: -6 00:29:25.271 Write completed with error (sct=0, sc=8) 00:29:25.271 starting I/O failed: -6 00:29:25.271 Write completed with error (sct=0, sc=8) 00:29:25.271 starting I/O failed: -6 00:29:25.271 Write completed with error (sct=0, sc=8) 00:29:25.271 starting I/O failed: -6 00:29:25.271 Write completed with error (sct=0, sc=8) 00:29:25.271 starting I/O failed: -6 00:29:25.271 Write completed with error (sct=0, sc=8) 00:29:25.271 starting I/O failed: -6 00:29:25.271 Write completed with error (sct=0, sc=8) 00:29:25.271 starting I/O failed: -6 00:29:25.271 Write completed with error (sct=0, sc=8) 00:29:25.271 starting I/O failed: -6 00:29:25.271 Write completed with error (sct=0, sc=8) 00:29:25.271 starting I/O failed: -6 00:29:25.271 Write completed with error (sct=0, sc=8) 00:29:25.271 starting I/O failed: -6 00:29:25.271 Write completed with error (sct=0, sc=8) 00:29:25.271 starting I/O failed: -6 00:29:25.271 Write completed with error (sct=0, sc=8) 00:29:25.271 starting I/O failed: -6 00:29:25.271 Write completed with error (sct=0, sc=8) 00:29:25.271 starting I/O failed: -6 00:29:25.271 Write completed with error (sct=0, sc=8) 00:29:25.271 starting I/O failed: -6 00:29:25.271 Write completed with error (sct=0, sc=8) 00:29:25.271 starting I/O failed: -6 00:29:25.271 Write completed with error (sct=0, sc=8) 00:29:25.271 starting I/O failed: -6 00:29:25.271 Write completed with error (sct=0, sc=8) 00:29:25.271 starting I/O failed: -6 00:29:25.271 Write completed with error (sct=0, sc=8) 00:29:25.271 starting I/O failed: -6 00:29:25.271 Write completed with error (sct=0, sc=8) 00:29:25.271 starting I/O failed: -6 00:29:25.271 [2024-11-02 14:45:16.937191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.271 NVMe io qpair process completion error 00:29:25.271 Write completed 
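The burst above is the initiator side of the nvmf TCP test losing its target mid-workload: every outstanding write completes with an error status, new submissions fail with -6 (-ENXIO, "No such device or address"), and SPDK's completion poller logs the CQ transport error once per qpair. The following is a minimal illustrative sketch, not the autotest's actual code, of the kind of submit/poll loop that produces this pattern; the helper names (write_done, drive_io), the buffer, and the fixed LBA are assumptions for the example, while the spdk_nvme_* calls are the public SPDK NVMe driver API.

#include <errno.h>
#include <stdio.h>
#include "spdk/nvme.h"

/* Completion callback: this is where a test app would print
 * "Write completed with error (sct=0, sc=8)" for each aborted write. */
static void
write_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	int *outstanding = cb_arg;

	(*outstanding)--;
	if (spdk_nvme_cpl_is_error(cpl)) {
		printf("Write completed with error (sct=%d, sc=%d)\n",
		       cpl->status.sct, cpl->status.sc);
	}
}

/* Hypothetical driver loop: submit one write, then poll for its completion.
 * Once the TCP connection to the target is gone, the submit path returns
 * -ENXIO ("starting I/O failed: -6") and the poll returns a negative value
 * after SPDK logs "CQ transport error -6 ... on qpair id N". */
static void
drive_io(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair, void *buf)
{
	int outstanding = 0;
	int rc;

	rc = spdk_nvme_ns_cmd_write(ns, qpair, buf, 0 /* lba */, 1 /* blocks */,
				    write_done, &outstanding, 0 /* io_flags */);
	if (rc != 0) {
		printf("starting I/O failed: %d\n", rc);   /* -6 == -ENXIO */
		return;
	}
	outstanding++;

	while (outstanding > 0) {
		if (spdk_nvme_qpair_process_completions(qpair, 0) < 0) {
			break;   /* transport dropped; outstanding I/O is aborted */
		}
	}
}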
00:29:25.271 Write completed with error (sct=0, sc=8) (entry repeats while the remaining completions on the qpair are drained)
00:29:25.272 Write completed with error (sct=0, sc=8) 00:29:25.272 starting I/O failed: -6 (entries repeat as above)
00:29:25.272 [2024-11-02 14:45:16.941832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.272 Write completed with error (sct=0, sc=8) 00:29:25.272 starting I/O failed: -6 (entries repeat as above)
00:29:25.272 [2024-11-02 14:45:16.942880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:25.272 Write completed with error (sct=0, sc=8) 00:29:25.272 starting I/O failed: -6 (entries repeat as above)
00:29:25.273 [2024-11-02 14:45:16.944112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:25.273 Write completed with error (sct=0, sc=8) 00:29:25.273 starting I/O failed: -6 (entries repeat as above)
00:29:25.273 [2024-11-02 14:45:16.947067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.273 NVMe io qpair process completion error
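For reading the statuses in this section (an interpretation added here, not part of the console output): -6 is -ENXIO ("No such device or address"), and the completion status (sct=0, sc=8) decodes, per the NVMe spec as carried in SPDK's headers, to status code type 0 (generic) with status code 0x08, "Command Aborted due to SQ Deletion", i.e. writes that were still queued when the target-side queues were torn down. A small illustrative check, assuming the same spdk/nvme.h environment as the sketch above:

#include <stdbool.h>
#include "spdk/nvme.h"

/* Illustrative only: decode the (sct=0, sc=8) completions seen in this log. */
static bool
write_aborted_by_sq_deletion(const struct spdk_nvme_cpl *cpl)
{
	return cpl->status.sct == SPDK_NVME_SCT_GENERIC &&         /* sct = 0   */
	       cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION; /* sc  = 0x8 */
}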
00:29:25.273 Write completed with error (sct=0, sc=8) 00:29:25.273 starting I/O failed: -6 (entries repeat as above)
00:29:25.273 [2024-11-02 14:45:16.948363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.274 Write completed with error (sct=0, sc=8) 00:29:25.274 starting I/O failed: -6 (entries repeat as above)
00:29:25.274 [2024-11-02 14:45:16.949451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.274 Write completed with error (sct=0, sc=8) 00:29:25.274 starting I/O failed: -6 (entries repeat as above)
00:29:25.274 [2024-11-02 14:45:16.950695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:25.274 Write completed with error (sct=0, sc=8) 00:29:25.274 starting I/O failed: -6 (entries repeat as above)
00:29:25.275 [2024-11-02 14:45:16.952774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:25.275 NVMe io qpair process completion error
00:29:25.275 Write completed with error (sct=0, sc=8) 00:29:25.275 starting I/O failed: -6 (entries repeat as above)
00:29:25.275 [2024-11-02 14:45:16.954018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.275 Write completed with error (sct=0, sc=8) 00:29:25.275 starting I/O failed: -6 (entries repeat as above)
00:29:25.275 [2024-11-02 14:45:16.955137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:25.276 Write completed with error (sct=0, sc=8) 00:29:25.276 starting I/O failed: -6 (entries repeat as above)
00:29:25.276 [2024-11-02 14:45:16.956300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:25.276 Write completed with error (sct=0, sc=8) 00:29:25.276 starting I/O failed: -6 (entries repeat as above)
00:29:25.276 [2024-11-02 14:45:16.958464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.276 NVMe io qpair process completion error
00:29:25.276 Write completed with error (sct=0, sc=8) 00:29:25.276 starting I/O failed: -6 (entries repeat as above)
00:29:25.276 Write completed with error (sct=0, sc=8)
00:29:25.276 Write completed with error (sct=0, sc=8) 00:29:25.276 Write completed with error (sct=0, sc=8) 00:29:25.276 Write completed with error (sct=0, sc=8) 00:29:25.276 starting I/O failed: -6 00:29:25.276 Write completed with error (sct=0, sc=8) 00:29:25.276 Write completed with error (sct=0, sc=8) 00:29:25.276 Write completed with error (sct=0, sc=8) 00:29:25.276 Write completed with error (sct=0, sc=8) 00:29:25.276 starting I/O failed: -6 00:29:25.276 Write completed with error (sct=0, sc=8) 00:29:25.276 Write completed with error (sct=0, sc=8) 00:29:25.276 Write completed with error (sct=0, sc=8) 00:29:25.276 Write completed with error (sct=0, sc=8) 00:29:25.276 starting I/O failed: -6 00:29:25.276 Write completed with error (sct=0, sc=8) 00:29:25.276 Write completed with error (sct=0, sc=8) 00:29:25.276 Write completed with error (sct=0, sc=8) 00:29:25.276 Write completed with error (sct=0, sc=8) 00:29:25.276 starting I/O failed: -6 00:29:25.276 Write completed with error (sct=0, sc=8) 00:29:25.276 starting I/O failed: -6 00:29:25.276 Write completed with error (sct=0, sc=8) 00:29:25.276 Write completed with error (sct=0, sc=8) 00:29:25.276 Write completed with error (sct=0, sc=8) 00:29:25.276 starting I/O failed: -6 00:29:25.276 Write completed with error (sct=0, sc=8) 00:29:25.276 starting I/O failed: -6 00:29:25.276 Write completed with error (sct=0, sc=8) 00:29:25.276 Write completed with error (sct=0, sc=8) 00:29:25.276 Write completed with error (sct=0, sc=8) 00:29:25.276 starting I/O failed: -6 00:29:25.276 Write completed with error (sct=0, sc=8) 00:29:25.276 starting I/O failed: -6 00:29:25.276 Write completed with error (sct=0, sc=8) 00:29:25.276 Write completed with error (sct=0, sc=8) 00:29:25.276 Write completed with error (sct=0, sc=8) 00:29:25.276 starting I/O failed: -6 00:29:25.276 Write completed with error (sct=0, sc=8) 00:29:25.276 starting I/O failed: -6 00:29:25.276 Write completed with error (sct=0, sc=8) 00:29:25.276 Write completed with error (sct=0, sc=8) 00:29:25.276 Write completed with error (sct=0, sc=8) 00:29:25.276 starting I/O failed: -6 00:29:25.276 Write completed with error (sct=0, sc=8) 00:29:25.276 starting I/O failed: -6 00:29:25.276 Write completed with error (sct=0, sc=8) 00:29:25.276 Write completed with error (sct=0, sc=8) 00:29:25.276 Write completed with error (sct=0, sc=8) 00:29:25.276 starting I/O failed: -6 00:29:25.276 Write completed with error (sct=0, sc=8) 00:29:25.276 starting I/O failed: -6 00:29:25.276 Write completed with error (sct=0, sc=8) 00:29:25.276 Write completed with error (sct=0, sc=8) 00:29:25.276 Write completed with error (sct=0, sc=8) 00:29:25.276 starting I/O failed: -6 00:29:25.276 Write completed with error (sct=0, sc=8) 00:29:25.276 starting I/O failed: -6 00:29:25.276 Write completed with error (sct=0, sc=8) 00:29:25.276 Write completed with error (sct=0, sc=8) 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 starting I/O failed: -6 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 starting I/O failed: -6 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 starting I/O failed: -6 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 starting I/O failed: -6 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 starting 
I/O failed: -6 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 starting I/O failed: -6 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 starting I/O failed: -6 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 starting I/O failed: -6 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 [2024-11-02 14:45:16.960631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.277 starting I/O failed: -6 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 starting I/O failed: -6 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 starting I/O failed: -6 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 starting I/O failed: -6 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 starting I/O failed: -6 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 starting I/O failed: -6 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 starting I/O failed: -6 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 starting I/O failed: -6 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 starting I/O failed: -6 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 starting I/O failed: -6 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 starting I/O failed: -6 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 starting I/O failed: -6 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 starting I/O failed: -6 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 starting I/O failed: -6 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 starting I/O failed: -6 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 starting I/O failed: -6 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 starting I/O failed: -6 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 starting I/O failed: -6 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 starting I/O failed: -6 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 starting I/O failed: -6 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 starting I/O failed: -6 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 starting I/O failed: -6 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 starting I/O failed: -6 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 starting I/O failed: -6 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 starting I/O failed: -6 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 starting I/O failed: -6 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 starting I/O 
failed: -6 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 starting I/O failed: -6 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 starting I/O failed: -6 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 starting I/O failed: -6 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 starting I/O failed: -6 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 starting I/O failed: -6 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 starting I/O failed: -6 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 starting I/O failed: -6 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 starting I/O failed: -6 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 starting I/O failed: -6 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 starting I/O failed: -6 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 starting I/O failed: -6 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 starting I/O failed: -6 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 starting I/O failed: -6 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 starting I/O failed: -6 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 [2024-11-02 14:45:16.961955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 starting I/O failed: -6 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 starting I/O failed: -6 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 starting I/O failed: -6 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 starting I/O failed: -6 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 starting I/O failed: -6 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 starting I/O failed: -6 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 starting I/O failed: -6 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 starting I/O failed: -6 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 starting I/O failed: -6 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 starting I/O failed: -6 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 starting I/O failed: -6 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 starting I/O failed: -6 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 starting I/O failed: -6 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 starting I/O failed: -6 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 starting I/O failed: -6 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 starting I/O failed: -6 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 starting I/O failed: -6 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 starting I/O failed: -6 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 starting I/O failed: -6 00:29:25.277 Write completed with error (sct=0, sc=8) 00:29:25.277 starting I/O failed: -6 
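The -6 after each "starting I/O failed" and in the "CQ transport error -6 (No such device or address)" messages above is a negative errno: 6 is ENXIO, which is consistent with nvmf_shutdown_tc4 stopping the target while spdk_nvme_perf still had writes queued on those qpairs. A quick side check of that errno mapping on the build host (shown only for reference, not part of the test):

    python3 -c 'import errno, os; print(errno.errorcode[6], "->", os.strerror(6))'
    # prints: ENXIO -> No such device or address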
00:29:25.277 Write completed with error (sct=0, sc=8)
00:29:25.277 starting I/O failed: -6
00:29:25.278 [the two lines above repeat for the remaining queued writes; repeats omitted]
00:29:25.278 [2024-11-02 14:45:16.965216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:25.278 NVMe io qpair process completion error
00:29:25.278 Initializing NVMe Controllers
00:29:25.278 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:25.278 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:29:25.278 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:29:25.278 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:29:25.278 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:29:25.278 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:29:25.278 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:29:25.278 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:29:25.278 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:29:25.278 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:29:25.278 Controller IO queue size 128, less than required.
00:29:25.278 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:25.278 [the two advisory lines above are printed once per attached controller; repeats omitted]
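The "Controller IO queue size 128, less than required" advisory means spdk_nvme_perf asked for a deeper queue than the 128 entries each attached subsystem reports, so the extra requests queue inside the host NVMe driver. If that queuing were unwanted, the run could be repeated with the depth capped at or below 128; a minimal sketch, assuming the standard spdk_nvme_perf options (-q queue depth, -o I/O size in bytes, -w pattern, -t seconds, -r transport ID) rather than the exact flags target/shutdown.sh passes:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -q 64 -o 4096 -w randwrite -t 10 \
        -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

Repeating the -r argument for cnode2 through cnode10 would cover all ten subsystems attached above.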
00:29:25.278 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:29:25.278 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:29:25.278 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:29:25.278 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:29:25.278 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:29:25.278 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:29:25.278 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:29:25.278 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:29:25.278 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:29:25.278 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:29:25.278 Initialization complete. Launching workers.
00:29:25.278 ========================================================
00:29:25.278                                                                  Latency(us)
00:29:25.278 Device Information                                           :  IOPS       MiB/s    Average    min        max
00:29:25.278 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:  1769.67  76.04  72358.98  1154.04  143867.33
00:29:25.278 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0:  1747.06  75.07  73321.70  818.44  143171.46
00:29:25.278 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0:  1743.91  74.93  73476.25  1031.76  142056.79
00:29:25.278 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0:  1741.40  74.83  73607.89  1235.11  141856.31
00:29:25.278 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0:  1729.89  74.33  74126.88  1040.80  115092.72
00:29:25.278 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0:  1728.00  74.25  74025.59  1009.07  114386.14
00:29:25.278 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0:  1735.75  74.58  73917.87  1080.02  127440.83
00:29:25.278 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0:  1764.43  75.82  72760.84  1123.15  131414.16
00:29:25.278 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0:  1765.69  75.87  72739.20  1044.40  134408.02
00:29:25.278 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0:  1773.44  76.20  72453.00  1320.14  114900.21
00:29:25.278 ========================================================
00:29:25.278 Total                                                        :  17499.24  751.92  73273.10  818.44  143867.33
00:29:25.278
00:29:25.278 [2024-11-02 14:45:16.970052] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ea160 is same with the state(6) to be set
00:29:25.278 [2024-11-02 14:45:16.970156] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e46d0 is same with the state(6) to be set
00:29:25.278 [2024-11-02 14:45:16.970236] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e5ed0 is same with the state(6) to be set
00:29:25.278 [2024-11-02 14:45:16.970327] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e41c0 is same with the state(6) to be set
00:29:25.278 [2024-11-02 14:45:16.970407] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e43a0 is same with the state(6) to be set
00:29:25.278 [2024-11-02 14:45:16.970486] nvme_tcp.c:
337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e3b20 is same with the state(6) to be set 00:29:25.278 [2024-11-02 14:45:16.970566] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e6530 is same with the state(6) to be set 00:29:25.278 [2024-11-02 14:45:16.970646] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e6200 is same with the state(6) to be set 00:29:25.278 [2024-11-02 14:45:16.970727] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e6860 is same with the state(6) to be set 00:29:25.278 [2024-11-02 14:45:16.970807] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e4a00 is same with the state(6) to be set 00:29:25.278 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:29:25.537 14:45:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@161 -- # nvmfpid= 00:29:25.537 14:45:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@164 -- # sleep 1 00:29:26.474 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@165 -- # wait 1462502 00:29:26.474 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@165 -- # true 00:29:26.474 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@166 -- # stoptarget 00:29:26.474 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:26.474 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:26.474 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:26.474 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:26.474 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # nvmfcleanup 00:29:26.474 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:29:26.474 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:26.474 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:29:26.474 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:26.474 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:26.474 rmmod nvme_tcp 00:29:26.474 rmmod nvme_fabrics 00:29:26.474 rmmod nvme_keyring 00:29:26.474 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:26.474 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:29:26.474 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:29:26.474 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:29:26.474 14:45:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:29:26.474 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:29:26.474 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:29:26.474 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:29:26.474 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@787 -- # iptables-save 00:29:26.474 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:29:26.474 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@787 -- # iptables-restore 00:29:26.733 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:26.733 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:26.733 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:26.733 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:26.733 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:28.644 14:45:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:28.644 00:29:28.644 real 0m9.880s 00:29:28.644 user 0m20.778s 00:29:28.644 sys 0m6.514s 00:29:28.644 14:45:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:28.644 14:45:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:28.644 ************************************ 00:29:28.644 END TEST nvmf_shutdown_tc4 00:29:28.644 ************************************ 00:29:28.644 14:45:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@177 -- # trap - SIGINT SIGTERM EXIT 00:29:28.644 00:29:28.644 real 0m37.269s 00:29:28.644 user 1m35.664s 00:29:28.644 sys 0m13.121s 00:29:28.644 14:45:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:28.644 14:45:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:28.644 ************************************ 00:29:28.644 END TEST nvmf_shutdown 00:29:28.644 ************************************ 00:29:28.644 14:45:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:29:28.644 00:29:28.644 real 18m8.958s 00:29:28.644 user 50m26.571s 00:29:28.644 sys 3m57.721s 00:29:28.644 14:45:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:28.644 14:45:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:28.644 ************************************ 00:29:28.644 END TEST nvmf_target_extra 00:29:28.644 ************************************ 00:29:28.644 14:45:20 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:29:28.644 14:45:20 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 
-le 1 ']' 00:29:28.644 14:45:20 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:28.644 14:45:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:28.644 ************************************ 00:29:28.644 START TEST nvmf_host 00:29:28.644 ************************************ 00:29:28.644 14:45:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:29:28.903 * Looking for test storage... 00:29:28.903 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:29:28.903 14:45:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:28.903 14:45:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lcov --version 00:29:28.903 14:45:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:28.903 14:45:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:28.903 14:45:20 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:28.903 14:45:20 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:28.903 14:45:20 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:28.903 14:45:20 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:29:28.903 14:45:20 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:29:28.903 14:45:20 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:29:28.903 14:45:20 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:29:28.903 14:45:20 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:29:28.903 14:45:20 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:29:28.903 14:45:20 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:29:28.903 14:45:20 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:28.903 14:45:20 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:29:28.903 14:45:20 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:29:28.903 14:45:20 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:28.903 14:45:20 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:28.903 14:45:20 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:29:28.903 14:45:20 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:29:28.903 14:45:20 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:28.903 14:45:20 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:29:28.903 14:45:20 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:29:28.903 14:45:20 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:29:28.903 14:45:20 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:29:28.903 14:45:20 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:28.903 14:45:20 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:29:28.903 14:45:20 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:29:28.903 14:45:20 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:28.903 14:45:20 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:28.903 14:45:20 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:29:28.903 14:45:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:28.903 14:45:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:28.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.903 --rc genhtml_branch_coverage=1 00:29:28.903 --rc genhtml_function_coverage=1 00:29:28.903 --rc genhtml_legend=1 00:29:28.903 --rc geninfo_all_blocks=1 00:29:28.903 --rc geninfo_unexecuted_blocks=1 00:29:28.903 00:29:28.903 ' 00:29:28.903 14:45:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:28.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.903 --rc genhtml_branch_coverage=1 00:29:28.903 --rc genhtml_function_coverage=1 00:29:28.903 --rc genhtml_legend=1 00:29:28.903 --rc geninfo_all_blocks=1 00:29:28.903 --rc geninfo_unexecuted_blocks=1 00:29:28.903 00:29:28.903 ' 00:29:28.903 14:45:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:28.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.903 --rc genhtml_branch_coverage=1 00:29:28.903 --rc genhtml_function_coverage=1 00:29:28.903 --rc genhtml_legend=1 00:29:28.903 --rc geninfo_all_blocks=1 00:29:28.903 --rc geninfo_unexecuted_blocks=1 00:29:28.903 00:29:28.903 ' 00:29:28.903 14:45:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:28.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.903 --rc genhtml_branch_coverage=1 00:29:28.903 --rc genhtml_function_coverage=1 00:29:28.903 --rc genhtml_legend=1 00:29:28.903 --rc geninfo_all_blocks=1 00:29:28.903 --rc geninfo_unexecuted_blocks=1 00:29:28.903 00:29:28.903 ' 00:29:28.903 14:45:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:28.903 14:45:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:29:28.903 14:45:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:28.903 14:45:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:28.903 14:45:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:28.903 14:45:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:28.903 14:45:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:29:28.903 14:45:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:28.904 14:45:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:28.904 14:45:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:28.904 14:45:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:28.904 14:45:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:28.904 14:45:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:28.904 14:45:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:28.904 14:45:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:28.904 14:45:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:28.904 14:45:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:28.904 14:45:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:28.904 14:45:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:28.904 14:45:20 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:29:28.904 14:45:20 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:28.904 14:45:20 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:28.904 14:45:20 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:28.904 14:45:20 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.904 14:45:20 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.904 14:45:20 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.904 14:45:20 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:29:28.904 14:45:20 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.904 14:45:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:29:28.904 14:45:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:28.904 14:45:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:28.904 14:45:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:28.904 14:45:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:28.904 14:45:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:28.904 14:45:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:28.904 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:28.904 14:45:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:28.904 14:45:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:28.904 14:45:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:28.904 14:45:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:29:28.904 14:45:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:29:28.904 14:45:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:29:28.904 14:45:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:28.904 14:45:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:28.904 14:45:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:28.904 14:45:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.904 ************************************ 00:29:28.904 START TEST nvmf_multicontroller 00:29:28.904 ************************************ 00:29:28.904 14:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:28.904 * Looking for test storage... 
00:29:28.904 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:28.904 14:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:28.904 14:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # lcov --version 00:29:28.904 14:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:29.163 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:29.163 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:29.163 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:29.163 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:29.163 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:29:29.163 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:29:29.163 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:29:29.163 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:29:29.163 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:29:29.163 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:29:29.163 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:29:29.163 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:29.163 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:29:29.163 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:29:29.163 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:29.163 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:29.163 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:29:29.163 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:29:29.163 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:29.163 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:29:29.163 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:29:29.163 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:29:29.163 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:29:29.163 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:29.164 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:29:29.164 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:29:29.164 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:29.164 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:29.164 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:29:29.164 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:29.164 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:29.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.164 --rc genhtml_branch_coverage=1 00:29:29.164 --rc genhtml_function_coverage=1 00:29:29.164 --rc genhtml_legend=1 00:29:29.164 --rc geninfo_all_blocks=1 00:29:29.164 --rc geninfo_unexecuted_blocks=1 00:29:29.164 00:29:29.164 ' 00:29:29.164 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:29.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.164 --rc genhtml_branch_coverage=1 00:29:29.164 --rc genhtml_function_coverage=1 00:29:29.164 --rc genhtml_legend=1 00:29:29.164 --rc geninfo_all_blocks=1 00:29:29.164 --rc geninfo_unexecuted_blocks=1 00:29:29.164 00:29:29.164 ' 00:29:29.164 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:29.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.164 --rc genhtml_branch_coverage=1 00:29:29.164 --rc genhtml_function_coverage=1 00:29:29.164 --rc genhtml_legend=1 00:29:29.164 --rc geninfo_all_blocks=1 00:29:29.164 --rc geninfo_unexecuted_blocks=1 00:29:29.164 00:29:29.164 ' 00:29:29.164 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:29.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.164 --rc genhtml_branch_coverage=1 00:29:29.164 --rc genhtml_function_coverage=1 00:29:29.164 --rc genhtml_legend=1 00:29:29.164 --rc geninfo_all_blocks=1 00:29:29.164 --rc geninfo_unexecuted_blocks=1 00:29:29.164 00:29:29.164 ' 00:29:29.164 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:29.164 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:29:29.164 14:45:21 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:29.164 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:29.164 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:29.164 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:29.164 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:29.164 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:29.164 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:29.164 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:29.164 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:29.164 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:29.164 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:29.164 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:29.164 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:29.164 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:29.164 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:29.164 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:29.164 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:29.164 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:29:29.164 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:29.164 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:29.164 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:29.164 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.164 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.164 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.164 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:29:29.164 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.164 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:29:29.164 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:29.164 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:29.164 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:29.164 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:29.164 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:29.164 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:29.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:29.164 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:29.164 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:29.164 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:29.164 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:29.164 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:29.164 14:45:21 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:29:29.164 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:29:29.164 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:29.164 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:29:29.164 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:29:29.164 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:29:29.164 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:29.164 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@472 -- # prepare_net_devs 00:29:29.164 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@434 -- # local -g is_hw=no 00:29:29.164 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@436 -- # remove_spdk_ns 00:29:29.164 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:29.164 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:29.164 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:29.164 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:29:29.164 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:29:29.164 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:29:29.164 14:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:31.068 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:31.068 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:29:31.068 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:31.068 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:31.068 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:31.068 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:31.068 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:31.068 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:29:31.068 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:31.068 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:29:31.068 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:29:31.068 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:29:31.068 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:29:31.068 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:29:31.068 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:29:31.068 
14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:31.068 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:31.068 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:31.068 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:31.068 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:31.068 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:31.069 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:31.069 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:31.069 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:31.069 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:31.069 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:31.069 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:29:31.069 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:29:31.069 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:29:31.069 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:29:31.069 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:29:31.069 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:29:31.069 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:31.069 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:31.069 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:31.069 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:31.069 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:31.069 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:31.069 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:31.069 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:31.069 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:31.069 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:31.069 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:31.069 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:31.069 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:31.069 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:31.069 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:31.069 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:31.069 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:29:31.069 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:29:31.069 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:29:31.069 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:31.069 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:31.069 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:31.069 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:31.069 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:31.069 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:31.069 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:31.069 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:31.069 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:31.069 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:31.069 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:31.069 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:31.069 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:31.069 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:31.069 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:31.069 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:31.069 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:31.069 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:31.069 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:31.069 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:31.069 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:29:31.069 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # is_hw=yes 00:29:31.069 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:29:31.069 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:29:31.069 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:29:31.069 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:31.069 14:45:23 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:31.069 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:31.069 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:31.069 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:31.069 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:31.069 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:31.069 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:31.069 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:31.069 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:31.069 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:31.069 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:31.069 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:31.069 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:31.069 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:31.328 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:31.328 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:31.328 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:31.328 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:31.328 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:31.328 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:31.328 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:31.328 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:31.328 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:31.328 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:29:31.328 00:29:31.328 --- 10.0.0.2 ping statistics --- 00:29:31.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:31.328 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:29:31.328 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:31.328 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:31.328 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:29:31.328 00:29:31.328 --- 10.0.0.1 ping statistics --- 00:29:31.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:31.328 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:29:31.328 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:31.328 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # return 0 00:29:31.328 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:29:31.328 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:31.328 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:29:31.328 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:29:31.328 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:31.328 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:29:31.328 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:29:31.328 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:29:31.328 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:29:31.328 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:31.328 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:31.328 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@505 -- # nvmfpid=1465308 00:29:31.328 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:31.328 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@506 -- # waitforlisten 1465308 00:29:31.328 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 1465308 ']' 00:29:31.328 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:31.328 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:31.328 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:31.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:31.328 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:31.328 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:31.328 [2024-11-02 14:45:23.332232] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:29:31.328 [2024-11-02 14:45:23.332370] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:31.587 [2024-11-02 14:45:23.409930] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:31.587 [2024-11-02 14:45:23.505232] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:31.587 [2024-11-02 14:45:23.505330] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:31.587 [2024-11-02 14:45:23.505346] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:31.587 [2024-11-02 14:45:23.505357] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:31.587 [2024-11-02 14:45:23.505368] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:31.587 [2024-11-02 14:45:23.505452] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:29:31.587 [2024-11-02 14:45:23.505482] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:29:31.587 [2024-11-02 14:45:23.505485] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:31.587 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:31.587 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:29:31.587 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:29:31.587 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:31.587 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:31.845 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:31.845 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:31.845 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.845 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:31.845 [2024-11-02 14:45:23.653124] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:31.845 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.845 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:31.845 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.845 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:31.845 Malloc0 00:29:31.845 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.845 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:31.845 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.845 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:29:31.845 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.845 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:31.846 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.846 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:31.846 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.846 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:31.846 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.846 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:31.846 [2024-11-02 14:45:23.711764] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:31.846 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.846 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:31.846 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.846 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:31.846 [2024-11-02 14:45:23.719642] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:31.846 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.846 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:31.846 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.846 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:31.846 Malloc1 00:29:31.846 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.846 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:29:31.846 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.846 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:31.846 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.846 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:29:31.846 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.846 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:31.846 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.846 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:31.846 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.846 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:31.846 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.846 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:29:31.846 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.846 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:31.846 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.846 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1465454 00:29:31.846 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:31.846 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1465454 /var/tmp/bdevperf.sock 00:29:31.846 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:29:31.846 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 1465454 ']' 00:29:31.846 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:31.846 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:31.846 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:31.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:29:31.846 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:31.846 14:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:32.106 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:32.106 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:29:32.106 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:29:32.106 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.106 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:32.367 NVMe0n1 00:29:32.367 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.367 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:32.367 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:29:32.367 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.367 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:32.367 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.367 1 00:29:32.367 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:32.367 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:29:32.367 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:32.367 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:32.367 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:32.367 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:32.367 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:32.367 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:32.367 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.367 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:32.367 request: 00:29:32.367 { 00:29:32.367 "name": "NVMe0", 00:29:32.367 "trtype": "tcp", 00:29:32.367 "traddr": "10.0.0.2", 00:29:32.367 "adrfam": "ipv4", 00:29:32.367 "trsvcid": "4420", 00:29:32.367 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:29:32.367 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:29:32.367 "hostaddr": "10.0.0.1", 00:29:32.367 "prchk_reftag": false, 00:29:32.367 "prchk_guard": false, 00:29:32.367 "hdgst": false, 00:29:32.367 "ddgst": false, 00:29:32.367 "allow_unrecognized_csi": false, 00:29:32.367 "method": "bdev_nvme_attach_controller", 00:29:32.367 "req_id": 1 00:29:32.367 } 00:29:32.367 Got JSON-RPC error response 00:29:32.367 response: 00:29:32.367 { 00:29:32.367 "code": -114, 00:29:32.367 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:32.367 } 00:29:32.367 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:32.367 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:29:32.367 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:32.367 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:32.367 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:32.367 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:32.367 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:29:32.367 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:32.367 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:32.367 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:32.367 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:32.367 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:32.367 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:32.367 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.367 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:32.367 request: 00:29:32.367 { 00:29:32.367 "name": "NVMe0", 00:29:32.367 "trtype": "tcp", 00:29:32.367 "traddr": "10.0.0.2", 00:29:32.367 "adrfam": "ipv4", 00:29:32.367 "trsvcid": "4420", 00:29:32.367 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:32.367 "hostaddr": "10.0.0.1", 00:29:32.367 "prchk_reftag": false, 00:29:32.367 "prchk_guard": false, 00:29:32.367 "hdgst": false, 00:29:32.367 "ddgst": false, 00:29:32.367 "allow_unrecognized_csi": false, 00:29:32.367 "method": "bdev_nvme_attach_controller", 00:29:32.367 "req_id": 1 00:29:32.367 } 00:29:32.367 Got JSON-RPC error response 00:29:32.367 response: 00:29:32.367 { 00:29:32.367 "code": -114, 00:29:32.367 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:32.367 } 00:29:32.367 14:45:24 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:32.367 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:29:32.367 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:32.367 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:32.367 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:32.367 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:32.367 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:29:32.367 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:32.367 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:32.367 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:32.367 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:32.367 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:32.367 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:32.367 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.367 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:32.367 request: 00:29:32.367 { 00:29:32.367 "name": "NVMe0", 00:29:32.367 "trtype": "tcp", 00:29:32.367 "traddr": "10.0.0.2", 00:29:32.367 "adrfam": "ipv4", 00:29:32.367 "trsvcid": "4420", 00:29:32.367 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:32.367 "hostaddr": "10.0.0.1", 00:29:32.367 "prchk_reftag": false, 00:29:32.367 "prchk_guard": false, 00:29:32.367 "hdgst": false, 00:29:32.367 "ddgst": false, 00:29:32.367 "multipath": "disable", 00:29:32.367 "allow_unrecognized_csi": false, 00:29:32.367 "method": "bdev_nvme_attach_controller", 00:29:32.368 "req_id": 1 00:29:32.368 } 00:29:32.368 Got JSON-RPC error response 00:29:32.368 response: 00:29:32.368 { 00:29:32.368 "code": -114, 00:29:32.368 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:29:32.368 } 00:29:32.368 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:32.368 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:29:32.368 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:32.368 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:32.368 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:32.368 14:45:24 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:32.368 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:29:32.368 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:32.368 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:32.368 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:32.368 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:32.368 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:32.368 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:32.368 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.368 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:32.368 request: 00:29:32.368 { 00:29:32.368 "name": "NVMe0", 00:29:32.368 "trtype": "tcp", 00:29:32.368 "traddr": "10.0.0.2", 00:29:32.368 "adrfam": "ipv4", 00:29:32.368 "trsvcid": "4420", 00:29:32.368 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:32.368 "hostaddr": "10.0.0.1", 00:29:32.368 "prchk_reftag": false, 00:29:32.368 "prchk_guard": false, 00:29:32.368 "hdgst": false, 00:29:32.368 "ddgst": false, 00:29:32.368 "multipath": "failover", 00:29:32.368 "allow_unrecognized_csi": false, 00:29:32.368 "method": "bdev_nvme_attach_controller", 00:29:32.368 "req_id": 1 00:29:32.368 } 00:29:32.368 Got JSON-RPC error response 00:29:32.368 response: 00:29:32.368 { 00:29:32.368 "code": -114, 00:29:32.368 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:32.368 } 00:29:32.368 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:32.368 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:29:32.368 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:32.368 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:32.368 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:32.368 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:32.368 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.368 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:32.627 00:29:32.627 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:29:32.627 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:32.627 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.627 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:32.627 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.627 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:29:32.627 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.627 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:32.886 00:29:32.886 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.886 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:32.886 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:29:32.886 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.886 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:32.886 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.886 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:29:32.886 14:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:33.825 { 00:29:33.825 "results": [ 00:29:33.825 { 00:29:33.825 "job": "NVMe0n1", 00:29:33.825 "core_mask": "0x1", 00:29:33.825 "workload": "write", 00:29:33.825 "status": "finished", 00:29:33.825 "queue_depth": 128, 00:29:33.825 "io_size": 4096, 00:29:33.825 "runtime": 1.003844, 00:29:33.825 "iops": 18674.21631249477, 00:29:33.825 "mibps": 72.94615747068269, 00:29:33.825 "io_failed": 0, 00:29:33.825 "io_timeout": 0, 00:29:33.825 "avg_latency_us": 6844.004880211482, 00:29:33.825 "min_latency_us": 4126.34074074074, 00:29:33.825 "max_latency_us": 14660.645925925926 00:29:33.825 } 00:29:33.825 ], 00:29:33.825 "core_count": 1 00:29:33.825 } 00:29:33.825 14:45:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:29:33.825 14:45:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.825 14:45:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:34.083 14:45:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.083 14:45:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:29:34.083 14:45:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 1465454 00:29:34.083 14:45:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@950 -- # '[' -z 1465454 ']' 00:29:34.083 14:45:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 1465454 00:29:34.083 14:45:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:29:34.083 14:45:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:34.083 14:45:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1465454 00:29:34.083 14:45:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:34.083 14:45:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:34.083 14:45:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1465454' 00:29:34.083 killing process with pid 1465454 00:29:34.083 14:45:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 1465454 00:29:34.083 14:45:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 1465454 00:29:34.083 14:45:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:34.083 14:45:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.083 14:45:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:34.083 14:45:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.083 14:45:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:34.083 14:45:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.083 14:45:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:34.342 14:45:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.342 14:45:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:29:34.342 14:45:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:34.342 14:45:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:29:34.342 14:45:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:29:34.342 14:45:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:29:34.342 14:45:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:29:34.342 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:34.342 [2024-11-02 14:45:23.828202] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:29:34.342 [2024-11-02 14:45:23.828306] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1465454 ] 00:29:34.342 [2024-11-02 14:45:23.889202] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:34.342 [2024-11-02 14:45:23.978325] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:34.342 [2024-11-02 14:45:24.702898] bdev.c:4696:bdev_name_add: *ERROR*: Bdev name 414ac951-cccd-4091-9901-b7521c8fa231 already exists 00:29:34.342 [2024-11-02 14:45:24.702943] bdev.c:7837:bdev_register: *ERROR*: Unable to add uuid:414ac951-cccd-4091-9901-b7521c8fa231 alias for bdev NVMe1n1 00:29:34.342 [2024-11-02 14:45:24.702967] bdev_nvme.c:4481:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:29:34.342 Running I/O for 1 seconds... 00:29:34.342 18618.00 IOPS, 72.73 MiB/s 00:29:34.342 Latency(us) 00:29:34.342 [2024-11-02T13:45:26.397Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:34.342 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:29:34.342 NVMe0n1 : 1.00 18674.22 72.95 0.00 0.00 6844.00 4126.34 14660.65 00:29:34.342 [2024-11-02T13:45:26.397Z] =================================================================================================================== 00:29:34.342 [2024-11-02T13:45:26.397Z] Total : 18674.22 72.95 0.00 0.00 6844.00 4126.34 14660.65 00:29:34.342 Received shutdown signal, test time was about 1.000000 seconds 00:29:34.342 00:29:34.342 Latency(us) 00:29:34.342 [2024-11-02T13:45:26.397Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:34.342 [2024-11-02T13:45:26.397Z] =================================================================================================================== 00:29:34.342 [2024-11-02T13:45:26.397Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:34.342 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:34.342 14:45:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:34.342 14:45:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:29:34.342 14:45:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:29:34.342 14:45:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # nvmfcleanup 00:29:34.342 14:45:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:29:34.342 14:45:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:34.342 14:45:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:29:34.342 14:45:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:34.342 14:45:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:34.342 rmmod nvme_tcp 00:29:34.342 rmmod nvme_fabrics 00:29:34.342 rmmod nvme_keyring 00:29:34.342 14:45:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:34.342 14:45:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:29:34.342 14:45:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:29:34.342 
14:45:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@513 -- # '[' -n 1465308 ']' 00:29:34.342 14:45:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@514 -- # killprocess 1465308 00:29:34.342 14:45:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 1465308 ']' 00:29:34.342 14:45:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 1465308 00:29:34.342 14:45:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:29:34.342 14:45:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:34.342 14:45:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1465308 00:29:34.342 14:45:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:34.342 14:45:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:34.342 14:45:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1465308' 00:29:34.342 killing process with pid 1465308 00:29:34.342 14:45:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 1465308 00:29:34.342 14:45:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 1465308 00:29:34.603 14:45:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:29:34.603 14:45:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:29:34.603 14:45:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:29:34.603 14:45:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:29:34.603 14:45:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@787 -- # iptables-save 00:29:34.603 14:45:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:29:34.603 14:45:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@787 -- # iptables-restore 00:29:34.603 14:45:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:34.603 14:45:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:34.603 14:45:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:34.603 14:45:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:34.603 14:45:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:36.510 14:45:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:36.511 00:29:36.511 real 0m7.666s 00:29:36.511 user 0m12.122s 00:29:36.511 sys 0m2.405s 00:29:36.511 14:45:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:36.511 14:45:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:36.511 ************************************ 00:29:36.511 END TEST nvmf_multicontroller 00:29:36.511 ************************************ 00:29:36.770 14:45:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:29:36.770 14:45:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:36.770 14:45:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:36.770 14:45:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.770 ************************************ 00:29:36.770 START TEST nvmf_aer 00:29:36.770 ************************************ 00:29:36.770 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:36.770 * Looking for test storage... 00:29:36.770 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:36.770 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:36.770 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # lcov --version 00:29:36.770 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:36.770 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:36.770 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:36.770 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:36.770 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:36.770 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:29:36.770 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:29:36.770 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:29:36.770 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:29:36.770 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:29:36.770 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:29:36.770 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:29:36.770 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:36.770 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:29:36.770 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:29:36.770 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:36.770 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:36.770 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:29:36.770 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:29:36.770 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:36.770 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:29:36.770 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:29:36.770 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:29:36.770 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:29:36.770 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:36.770 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:29:36.770 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:29:36.770 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:36.770 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:36.770 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:29:36.770 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:36.770 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:36.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:36.770 --rc genhtml_branch_coverage=1 00:29:36.770 --rc genhtml_function_coverage=1 00:29:36.770 --rc genhtml_legend=1 00:29:36.770 --rc geninfo_all_blocks=1 00:29:36.770 --rc geninfo_unexecuted_blocks=1 00:29:36.770 00:29:36.770 ' 00:29:36.770 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:36.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:36.770 --rc genhtml_branch_coverage=1 00:29:36.770 --rc genhtml_function_coverage=1 00:29:36.770 --rc genhtml_legend=1 00:29:36.770 --rc geninfo_all_blocks=1 00:29:36.770 --rc geninfo_unexecuted_blocks=1 00:29:36.770 00:29:36.770 ' 00:29:36.770 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:36.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:36.770 --rc genhtml_branch_coverage=1 00:29:36.770 --rc genhtml_function_coverage=1 00:29:36.770 --rc genhtml_legend=1 00:29:36.770 --rc geninfo_all_blocks=1 00:29:36.770 --rc geninfo_unexecuted_blocks=1 00:29:36.770 00:29:36.770 ' 00:29:36.770 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:36.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:36.770 --rc genhtml_branch_coverage=1 00:29:36.770 --rc genhtml_function_coverage=1 00:29:36.770 --rc genhtml_legend=1 00:29:36.770 --rc geninfo_all_blocks=1 00:29:36.770 --rc geninfo_unexecuted_blocks=1 00:29:36.770 00:29:36.770 ' 00:29:36.770 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:36.770 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:29:36.770 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:36.770 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:36.770 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:29:36.770 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:36.770 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:36.770 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:36.770 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:36.770 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:36.770 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:36.770 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:36.770 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:36.770 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:36.770 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:36.770 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:36.770 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:36.770 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:36.770 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:36.770 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:29:36.770 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:36.770 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:36.770 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:36.770 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:36.770 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:36.770 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:36.770 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:29:36.770 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:36.770 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:29:36.770 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:36.770 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:36.770 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:36.771 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:36.771 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:36.771 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:36.771 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:36.771 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:36.771 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:36.771 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:36.771 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:29:36.771 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:29:36.771 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:36.771 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@472 -- # prepare_net_devs 00:29:36.771 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@434 -- # local -g is_hw=no 00:29:36.771 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@436 -- # remove_spdk_ns 00:29:36.771 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:36.771 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:36.771 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:36.771 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:29:36.771 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # 
gather_supported_nvmf_pci_devs 00:29:36.771 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:29:36.771 14:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:38.677 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:38.677 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:29:38.677 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:38.677 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:38.677 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:38.677 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:38.677 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:38.677 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:29:38.677 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:38.677 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:29:38.677 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:29:38.677 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:29:38.677 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:29:38.677 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:29:38.677 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:29:38.677 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:38.677 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:38.677 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:38.677 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:38.677 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:38.677 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:38.677 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:38.677 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:38.677 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:38.677 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:38.677 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:38.677 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:29:38.677 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:29:38.677 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:29:38.677 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:29:38.677 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:29:38.677 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:29:38.677 14:45:30 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:38.678 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:38.678 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:38.678 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:38.678 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:38.678 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:38.678 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:38.678 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:38.678 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:38.678 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:38.678 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:38.678 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:38.678 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:38.678 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:38.678 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:38.678 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:38.678 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:29:38.678 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:29:38.678 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:29:38.678 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:38.678 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:38.678 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:38.678 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:38.678 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:38.678 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:38.678 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:38.678 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:38.678 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:38.678 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:38.678 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:38.678 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:38.678 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:38.678 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:38.678 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:38.678 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:38.678 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@423 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:38.678 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:38.678 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:38.678 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:38.678 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:29:38.678 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # is_hw=yes 00:29:38.678 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:29:38.678 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:29:38.678 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:29:38.678 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:38.678 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:38.678 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:38.678 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:38.678 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:38.678 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:38.678 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:38.678 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:38.678 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:38.678 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:38.678 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:38.678 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:38.678 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:38.678 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:38.678 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:38.937 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:38.937 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:38.937 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:38.937 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:38.937 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:38.937 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:38.937 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:38.937 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:38.937 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:38.937 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:29:38.937 00:29:38.937 --- 10.0.0.2 ping statistics --- 00:29:38.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:38.937 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:29:38.937 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:38.937 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:38.937 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:29:38.937 00:29:38.937 --- 10.0.0.1 ping statistics --- 00:29:38.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:38.937 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:29:38.937 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:38.937 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # return 0 00:29:38.937 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:29:38.937 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:38.937 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:29:38.937 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:29:38.937 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:38.937 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:29:38.937 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:29:38.937 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:29:38.937 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:29:38.937 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:38.937 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:38.937 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@505 -- # nvmfpid=1467669 00:29:38.937 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:38.937 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@506 -- # waitforlisten 1467669 00:29:38.937 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 1467669 ']' 00:29:38.937 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:38.937 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:38.937 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:38.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:38.937 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:38.937 14:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:38.937 [2024-11-02 14:45:30.908439] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:29:38.937 [2024-11-02 14:45:30.908516] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:38.937 [2024-11-02 14:45:30.978960] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:39.196 [2024-11-02 14:45:31.072644] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:39.196 [2024-11-02 14:45:31.072700] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:39.196 [2024-11-02 14:45:31.072721] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:39.196 [2024-11-02 14:45:31.072741] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:39.196 [2024-11-02 14:45:31.072757] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:39.196 [2024-11-02 14:45:31.072914] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:39.196 [2024-11-02 14:45:31.072973] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:29:39.196 [2024-11-02 14:45:31.073089] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:29:39.196 [2024-11-02 14:45:31.073097] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:39.196 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:39.196 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:29:39.196 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:29:39.196 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:39.196 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:39.196 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:39.196 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:39.196 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.196 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:39.196 [2024-11-02 14:45:31.242030] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:39.196 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.196 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:29:39.196 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.454 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:39.454 Malloc0 00:29:39.454 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.454 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:29:39.455 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.455 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:39.455 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:29:39.455 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:39.455 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.455 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:39.455 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.455 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:39.455 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.455 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:39.455 [2024-11-02 14:45:31.295471] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:39.455 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.455 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:29:39.455 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.455 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:39.455 [ 00:29:39.455 { 00:29:39.455 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:39.455 "subtype": "Discovery", 00:29:39.455 "listen_addresses": [], 00:29:39.455 "allow_any_host": true, 00:29:39.455 "hosts": [] 00:29:39.455 }, 00:29:39.455 { 00:29:39.455 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:39.455 "subtype": "NVMe", 00:29:39.455 "listen_addresses": [ 00:29:39.455 { 00:29:39.455 "trtype": "TCP", 00:29:39.455 "adrfam": "IPv4", 00:29:39.455 "traddr": "10.0.0.2", 00:29:39.455 "trsvcid": "4420" 00:29:39.455 } 00:29:39.455 ], 00:29:39.455 "allow_any_host": true, 00:29:39.455 "hosts": [], 00:29:39.455 "serial_number": "SPDK00000000000001", 00:29:39.455 "model_number": "SPDK bdev Controller", 00:29:39.455 "max_namespaces": 2, 00:29:39.455 "min_cntlid": 1, 00:29:39.455 "max_cntlid": 65519, 00:29:39.455 "namespaces": [ 00:29:39.455 { 00:29:39.455 "nsid": 1, 00:29:39.455 "bdev_name": "Malloc0", 00:29:39.455 "name": "Malloc0", 00:29:39.455 "nguid": "08320AD2B0E24756A8E5161758518380", 00:29:39.455 "uuid": "08320ad2-b0e2-4756-a8e5-161758518380" 00:29:39.455 } 00:29:39.455 ] 00:29:39.455 } 00:29:39.455 ] 00:29:39.455 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.455 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:29:39.455 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:29:39.455 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1467698 00:29:39.455 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:29:39.455 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:29:39.455 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:29:39.455 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:29:39.455 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:29:39.455 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:29:39.455 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:29:39.455 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:39.455 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:29:39.455 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:29:39.455 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:29:39.714 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:39.714 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:39.714 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:29:39.714 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:29:39.714 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.714 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:39.714 Malloc1 00:29:39.714 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.714 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:29:39.714 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.714 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:39.714 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.714 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:29:39.714 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.714 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:39.714 Asynchronous Event Request test 00:29:39.714 Attaching to 10.0.0.2 00:29:39.714 Attached to 10.0.0.2 00:29:39.714 Registering asynchronous event callbacks... 00:29:39.714 Starting namespace attribute notice tests for all controllers... 00:29:39.714 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:29:39.714 aer_cb - Changed Namespace 00:29:39.714 Cleaning up... 
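To recap the nvmf_aer flow traced above: host/aer.sh brings up a TCP transport, exposes one malloc bdev through nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420, starts the aer tool, then hot-adds a second namespace so the tool sees the namespace-attribute-changed notice logged just above. A minimal sketch of that RPC sequence follows, assuming rpc_cmd is the autotest helper that forwards to scripts/rpc.py against the target running inside cvl_0_0_ns_spdk, and omitting the /tmp/aer_touch_file handshake the script uses to synchronize with the aer tool; the option values are copied from the trace, not re-derived.

  # Sketch of the RPC flow exercised by host/aer.sh (values mirrored from the trace above)
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192                    # TCP transport init
  rpc_cmd bdev_malloc_create 64 512 --name Malloc0                   # first backing bdev
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # becomes nsid 1
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # aer tool connects and registers asynchronous event callbacks (flags as recorded in the trace)
  test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -n 2 -t /tmp/aer_touch_file &
  rpc_cmd bdev_malloc_create 64 4096 --name Malloc1                  # second backing bdev
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2   # nsid 2, triggers the AER

The nvmf_get_subsystems dump that follows confirms both namespaces (Malloc0 as nsid 1, Malloc1 as nsid 2) are attached before the test deletes the bdevs, removes the subsystem, and tears the target down.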
00:29:39.714 [ 00:29:39.714 { 00:29:39.714 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:39.714 "subtype": "Discovery", 00:29:39.714 "listen_addresses": [], 00:29:39.714 "allow_any_host": true, 00:29:39.714 "hosts": [] 00:29:39.714 }, 00:29:39.714 { 00:29:39.714 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:39.714 "subtype": "NVMe", 00:29:39.714 "listen_addresses": [ 00:29:39.714 { 00:29:39.714 "trtype": "TCP", 00:29:39.714 "adrfam": "IPv4", 00:29:39.714 "traddr": "10.0.0.2", 00:29:39.714 "trsvcid": "4420" 00:29:39.714 } 00:29:39.714 ], 00:29:39.714 "allow_any_host": true, 00:29:39.714 "hosts": [], 00:29:39.714 "serial_number": "SPDK00000000000001", 00:29:39.714 "model_number": "SPDK bdev Controller", 00:29:39.714 "max_namespaces": 2, 00:29:39.714 "min_cntlid": 1, 00:29:39.714 "max_cntlid": 65519, 00:29:39.714 "namespaces": [ 00:29:39.714 { 00:29:39.714 "nsid": 1, 00:29:39.714 "bdev_name": "Malloc0", 00:29:39.714 "name": "Malloc0", 00:29:39.714 "nguid": "08320AD2B0E24756A8E5161758518380", 00:29:39.714 "uuid": "08320ad2-b0e2-4756-a8e5-161758518380" 00:29:39.714 }, 00:29:39.714 { 00:29:39.714 "nsid": 2, 00:29:39.714 "bdev_name": "Malloc1", 00:29:39.714 "name": "Malloc1", 00:29:39.714 "nguid": "2B5B20B4D5AE487D85D9566010B65D44", 00:29:39.714 "uuid": "2b5b20b4-d5ae-487d-85d9-566010b65d44" 00:29:39.714 } 00:29:39.714 ] 00:29:39.714 } 00:29:39.714 ] 00:29:39.714 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.714 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1467698 00:29:39.715 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:29:39.715 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.715 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:39.715 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.715 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:29:39.715 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.715 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:39.715 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.715 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:39.715 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.715 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:39.715 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.715 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:29:39.715 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:29:39.715 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # nvmfcleanup 00:29:39.715 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:29:39.715 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:39.715 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:29:39.715 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:39.715 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:39.715 rmmod 
nvme_tcp 00:29:39.715 rmmod nvme_fabrics 00:29:39.715 rmmod nvme_keyring 00:29:39.715 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:39.715 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:29:39.715 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:29:39.715 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@513 -- # '[' -n 1467669 ']' 00:29:39.715 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@514 -- # killprocess 1467669 00:29:39.715 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 1467669 ']' 00:29:39.715 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 1467669 00:29:39.715 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:29:39.715 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:39.715 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1467669 00:29:39.715 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:39.715 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:39.715 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1467669' 00:29:39.715 killing process with pid 1467669 00:29:39.715 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 1467669 00:29:39.715 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 1467669 00:29:39.973 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:29:39.973 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:29:39.973 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:29:39.973 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:29:39.973 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@787 -- # iptables-save 00:29:39.973 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:29:39.973 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@787 -- # iptables-restore 00:29:39.973 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:39.973 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:39.973 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:39.973 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:39.973 14:45:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:42.512 14:45:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:42.512 00:29:42.512 real 0m5.410s 00:29:42.512 user 0m4.253s 00:29:42.512 sys 0m1.914s 00:29:42.512 14:45:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:42.512 14:45:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:42.512 ************************************ 00:29:42.512 END TEST nvmf_aer 00:29:42.512 ************************************ 00:29:42.512 14:45:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:42.512 14:45:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:42.512 14:45:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:42.512 14:45:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.512 ************************************ 00:29:42.512 START TEST nvmf_async_init 00:29:42.512 ************************************ 00:29:42.512 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:42.512 * Looking for test storage... 00:29:42.512 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:42.512 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:42.512 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # lcov --version 00:29:42.512 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:42.512 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:42.512 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:42.512 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:42.512 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:42.512 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:29:42.512 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:29:42.512 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:29:42.512 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:29:42.512 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:29:42.512 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:29:42.512 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:29:42.512 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:42.512 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:29:42.512 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:29:42.512 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:42.512 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:42.512 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:29:42.512 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:29:42.512 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:42.512 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:29:42.512 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:29:42.512 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:29:42.512 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:29:42.512 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:42.512 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:29:42.512 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:29:42.513 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:42.513 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:42.513 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:29:42.513 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:42.513 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:42.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:42.513 --rc genhtml_branch_coverage=1 00:29:42.513 --rc genhtml_function_coverage=1 00:29:42.513 --rc genhtml_legend=1 00:29:42.513 --rc geninfo_all_blocks=1 00:29:42.513 --rc geninfo_unexecuted_blocks=1 00:29:42.513 00:29:42.513 ' 00:29:42.513 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:42.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:42.513 --rc genhtml_branch_coverage=1 00:29:42.513 --rc genhtml_function_coverage=1 00:29:42.513 --rc genhtml_legend=1 00:29:42.513 --rc geninfo_all_blocks=1 00:29:42.513 --rc geninfo_unexecuted_blocks=1 00:29:42.513 00:29:42.513 ' 00:29:42.513 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:42.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:42.513 --rc genhtml_branch_coverage=1 00:29:42.513 --rc genhtml_function_coverage=1 00:29:42.513 --rc genhtml_legend=1 00:29:42.513 --rc geninfo_all_blocks=1 00:29:42.513 --rc geninfo_unexecuted_blocks=1 00:29:42.513 00:29:42.513 ' 00:29:42.513 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:42.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:42.513 --rc genhtml_branch_coverage=1 00:29:42.513 --rc genhtml_function_coverage=1 00:29:42.513 --rc genhtml_legend=1 00:29:42.513 --rc geninfo_all_blocks=1 00:29:42.513 --rc geninfo_unexecuted_blocks=1 00:29:42.513 00:29:42.513 ' 00:29:42.513 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:42.513 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:29:42.513 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:42.513 14:45:34 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:42.513 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:42.513 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:42.513 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:42.513 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:42.513 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:42.513 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:42.513 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:42.513 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:42.513 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:42.513 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:42.513 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:42.513 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:42.513 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:42.513 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:42.513 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:42.513 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:29:42.513 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:42.513 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:42.513 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:42.513 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.513 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.513 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.513 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:29:42.513 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.513 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:29:42.513 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:42.513 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:42.513 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:42.513 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:42.513 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:42.513 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:42.513 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:42.513 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:42.513 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:42.513 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:42.513 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:29:42.513 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:29:42.513 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:29:42.513 14:45:34 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:29:42.513 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:29:42.513 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:29:42.513 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=40c4a1994ad14bc99353f6da2f9ce64f 00:29:42.513 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:29:42.513 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:29:42.513 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:42.513 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@472 -- # prepare_net_devs 00:29:42.513 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@434 -- # local -g is_hw=no 00:29:42.513 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@436 -- # remove_spdk_ns 00:29:42.513 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:42.513 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:42.513 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:42.513 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:29:42.513 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:29:42.513 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:29:42.513 14:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:44.416 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:44.416 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # 
(( 0 > 0 )) 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:44.416 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:44.416 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # is_hw=yes 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:44.416 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:44.416 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:29:44.416 00:29:44.416 --- 10.0.0.2 ping statistics --- 00:29:44.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:44.416 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:29:44.416 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:44.416 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:44.416 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:29:44.416 00:29:44.416 --- 10.0.0.1 ping statistics --- 00:29:44.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:44.417 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:29:44.417 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:44.417 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # return 0 00:29:44.417 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:29:44.417 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:44.417 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:29:44.417 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:29:44.417 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:44.417 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:29:44.417 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:29:44.417 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:29:44.417 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:29:44.417 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:44.417 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:44.417 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@505 -- # nvmfpid=1469762 00:29:44.417 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:29:44.417 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@506 -- # waitforlisten 1469762 00:29:44.417 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 1469762 ']' 00:29:44.417 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:44.417 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:44.417 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:44.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:44.417 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:44.417 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:44.417 [2024-11-02 14:45:36.395933] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
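The nvmf_tcp_init sequence traced above builds a two-port loopback topology: the first ice port (cvl_0_0) is moved into a private network namespace and addressed as the target at 10.0.0.2, while its peer (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1; a firewall exception is opened for TCP port 4420, reachability is checked with ping in both directions, and the target application is then launched inside the namespace. A minimal shell sketch of those same steps, assuming the interface names and addresses seen in this run:

ip netns add cvl_0_0_ns_spdk                        # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address (root namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
modprobe nvme-tcp
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1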
00:29:44.417 [2024-11-02 14:45:36.396015] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:44.417 [2024-11-02 14:45:36.460561] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:44.677 [2024-11-02 14:45:36.550940] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:44.677 [2024-11-02 14:45:36.551007] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:44.677 [2024-11-02 14:45:36.551025] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:44.677 [2024-11-02 14:45:36.551038] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:44.677 [2024-11-02 14:45:36.551049] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:44.677 [2024-11-02 14:45:36.551088] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:44.677 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:44.677 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:29:44.677 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:29:44.677 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:44.677 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:44.677 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:44.677 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:44.677 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.677 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:44.677 [2024-11-02 14:45:36.703080] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:44.677 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.677 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:29:44.677 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.677 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:44.677 null0 00:29:44.677 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.677 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:29:44.677 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.677 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:44.677 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.677 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:29:44.677 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:29:44.677 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:44.677 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.677 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 40c4a1994ad14bc99353f6da2f9ce64f 00:29:44.938 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.938 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:44.938 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.938 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:44.938 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.938 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:44.938 [2024-11-02 14:45:36.743398] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:44.938 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.938 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:29:44.938 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.938 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:44.938 nvme0n1 00:29:44.938 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.938 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:44.938 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.938 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:44.938 [ 00:29:44.938 { 00:29:44.938 "name": "nvme0n1", 00:29:44.938 "aliases": [ 00:29:44.938 "40c4a199-4ad1-4bc9-9353-f6da2f9ce64f" 00:29:44.938 ], 00:29:44.938 "product_name": "NVMe disk", 00:29:44.938 "block_size": 512, 00:29:44.938 "num_blocks": 2097152, 00:29:44.938 "uuid": "40c4a199-4ad1-4bc9-9353-f6da2f9ce64f", 00:29:44.938 "numa_id": 0, 00:29:44.938 "assigned_rate_limits": { 00:29:44.938 "rw_ios_per_sec": 0, 00:29:44.938 "rw_mbytes_per_sec": 0, 00:29:44.938 "r_mbytes_per_sec": 0, 00:29:44.938 "w_mbytes_per_sec": 0 00:29:44.938 }, 00:29:44.938 "claimed": false, 00:29:44.938 "zoned": false, 00:29:44.938 "supported_io_types": { 00:29:44.938 "read": true, 00:29:44.938 "write": true, 00:29:44.938 "unmap": false, 00:29:44.938 "flush": true, 00:29:44.938 "reset": true, 00:29:44.938 "nvme_admin": true, 00:29:44.938 "nvme_io": true, 00:29:44.938 "nvme_io_md": false, 00:29:44.938 "write_zeroes": true, 00:29:44.938 "zcopy": false, 00:29:44.938 "get_zone_info": false, 00:29:44.938 "zone_management": false, 00:29:44.938 "zone_append": false, 00:29:44.938 "compare": true, 00:29:44.938 "compare_and_write": true, 00:29:44.938 "abort": true, 00:29:44.938 "seek_hole": false, 00:29:44.938 "seek_data": false, 00:29:44.938 "copy": true, 00:29:44.938 "nvme_iov_md": false 00:29:44.938 }, 00:29:44.938 
"memory_domains": [ 00:29:44.938 { 00:29:44.938 "dma_device_id": "system", 00:29:44.938 "dma_device_type": 1 00:29:44.938 } 00:29:44.938 ], 00:29:44.938 "driver_specific": { 00:29:44.938 "nvme": [ 00:29:44.938 { 00:29:44.938 "trid": { 00:29:44.938 "trtype": "TCP", 00:29:44.938 "adrfam": "IPv4", 00:29:44.938 "traddr": "10.0.0.2", 00:29:44.938 "trsvcid": "4420", 00:29:44.938 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:44.938 }, 00:29:44.938 "ctrlr_data": { 00:29:44.938 "cntlid": 1, 00:29:44.938 "vendor_id": "0x8086", 00:29:44.938 "model_number": "SPDK bdev Controller", 00:29:44.938 "serial_number": "00000000000000000000", 00:29:44.938 "firmware_revision": "24.09.1", 00:29:44.938 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:44.938 "oacs": { 00:29:44.938 "security": 0, 00:29:44.938 "format": 0, 00:29:44.938 "firmware": 0, 00:29:44.938 "ns_manage": 0 00:29:44.938 }, 00:29:44.938 "multi_ctrlr": true, 00:29:44.938 "ana_reporting": false 00:29:44.938 }, 00:29:44.938 "vs": { 00:29:44.938 "nvme_version": "1.3" 00:29:44.938 }, 00:29:44.938 "ns_data": { 00:29:44.938 "id": 1, 00:29:44.938 "can_share": true 00:29:44.938 } 00:29:44.938 } 00:29:44.938 ], 00:29:44.938 "mp_policy": "active_passive" 00:29:44.938 } 00:29:44.938 } 00:29:44.938 ] 00:29:44.938 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.938 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:29:44.938 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.938 14:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:44.938 [2024-11-02 14:45:36.992027] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:44.938 [2024-11-02 14:45:36.992120] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca1aa0 (9): Bad file descriptor 00:29:45.197 [2024-11-02 14:45:37.124449] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:29:45.197 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.197 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:45.197 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.197 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:45.197 [ 00:29:45.197 { 00:29:45.197 "name": "nvme0n1", 00:29:45.197 "aliases": [ 00:29:45.197 "40c4a199-4ad1-4bc9-9353-f6da2f9ce64f" 00:29:45.197 ], 00:29:45.197 "product_name": "NVMe disk", 00:29:45.197 "block_size": 512, 00:29:45.197 "num_blocks": 2097152, 00:29:45.197 "uuid": "40c4a199-4ad1-4bc9-9353-f6da2f9ce64f", 00:29:45.197 "numa_id": 0, 00:29:45.197 "assigned_rate_limits": { 00:29:45.197 "rw_ios_per_sec": 0, 00:29:45.197 "rw_mbytes_per_sec": 0, 00:29:45.197 "r_mbytes_per_sec": 0, 00:29:45.197 "w_mbytes_per_sec": 0 00:29:45.197 }, 00:29:45.197 "claimed": false, 00:29:45.197 "zoned": false, 00:29:45.197 "supported_io_types": { 00:29:45.197 "read": true, 00:29:45.197 "write": true, 00:29:45.197 "unmap": false, 00:29:45.197 "flush": true, 00:29:45.197 "reset": true, 00:29:45.197 "nvme_admin": true, 00:29:45.197 "nvme_io": true, 00:29:45.197 "nvme_io_md": false, 00:29:45.197 "write_zeroes": true, 00:29:45.197 "zcopy": false, 00:29:45.197 "get_zone_info": false, 00:29:45.197 "zone_management": false, 00:29:45.197 "zone_append": false, 00:29:45.197 "compare": true, 00:29:45.197 "compare_and_write": true, 00:29:45.197 "abort": true, 00:29:45.197 "seek_hole": false, 00:29:45.197 "seek_data": false, 00:29:45.197 "copy": true, 00:29:45.197 "nvme_iov_md": false 00:29:45.197 }, 00:29:45.197 "memory_domains": [ 00:29:45.197 { 00:29:45.197 "dma_device_id": "system", 00:29:45.197 "dma_device_type": 1 00:29:45.197 } 00:29:45.197 ], 00:29:45.197 "driver_specific": { 00:29:45.197 "nvme": [ 00:29:45.197 { 00:29:45.197 "trid": { 00:29:45.197 "trtype": "TCP", 00:29:45.197 "adrfam": "IPv4", 00:29:45.197 "traddr": "10.0.0.2", 00:29:45.197 "trsvcid": "4420", 00:29:45.197 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:45.197 }, 00:29:45.197 "ctrlr_data": { 00:29:45.197 "cntlid": 2, 00:29:45.197 "vendor_id": "0x8086", 00:29:45.197 "model_number": "SPDK bdev Controller", 00:29:45.197 "serial_number": "00000000000000000000", 00:29:45.197 "firmware_revision": "24.09.1", 00:29:45.197 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:45.197 "oacs": { 00:29:45.197 "security": 0, 00:29:45.197 "format": 0, 00:29:45.197 "firmware": 0, 00:29:45.197 "ns_manage": 0 00:29:45.197 }, 00:29:45.197 "multi_ctrlr": true, 00:29:45.197 "ana_reporting": false 00:29:45.197 }, 00:29:45.197 "vs": { 00:29:45.197 "nvme_version": "1.3" 00:29:45.197 }, 00:29:45.197 "ns_data": { 00:29:45.197 "id": 1, 00:29:45.197 "can_share": true 00:29:45.197 } 00:29:45.197 } 00:29:45.198 ], 00:29:45.198 "mp_policy": "active_passive" 00:29:45.198 } 00:29:45.198 } 00:29:45.198 ] 00:29:45.198 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.198 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:45.198 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.198 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:45.198 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:29:45.198 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:29:45.198 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.VPJyXzjJCX 00:29:45.198 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:29:45.198 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.VPJyXzjJCX 00:29:45.198 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.VPJyXzjJCX 00:29:45.198 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.198 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:45.198 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.198 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:29:45.198 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.198 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:45.198 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.198 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:29:45.198 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.198 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:45.198 [2024-11-02 14:45:37.180674] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:45.198 [2024-11-02 14:45:37.180816] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:45.198 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.198 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:29:45.198 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.198 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:45.198 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.198 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:29:45.198 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.198 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:45.198 [2024-11-02 14:45:37.196739] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:45.458 nvme0n1 00:29:45.458 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.458 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:29:45.458 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.458 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:45.458 [ 00:29:45.458 { 00:29:45.458 "name": "nvme0n1", 00:29:45.458 "aliases": [ 00:29:45.458 "40c4a199-4ad1-4bc9-9353-f6da2f9ce64f" 00:29:45.458 ], 00:29:45.458 "product_name": "NVMe disk", 00:29:45.458 "block_size": 512, 00:29:45.458 "num_blocks": 2097152, 00:29:45.458 "uuid": "40c4a199-4ad1-4bc9-9353-f6da2f9ce64f", 00:29:45.458 "numa_id": 0, 00:29:45.458 "assigned_rate_limits": { 00:29:45.458 "rw_ios_per_sec": 0, 00:29:45.458 "rw_mbytes_per_sec": 0, 00:29:45.458 "r_mbytes_per_sec": 0, 00:29:45.458 "w_mbytes_per_sec": 0 00:29:45.458 }, 00:29:45.458 "claimed": false, 00:29:45.458 "zoned": false, 00:29:45.458 "supported_io_types": { 00:29:45.458 "read": true, 00:29:45.458 "write": true, 00:29:45.458 "unmap": false, 00:29:45.458 "flush": true, 00:29:45.458 "reset": true, 00:29:45.458 "nvme_admin": true, 00:29:45.458 "nvme_io": true, 00:29:45.458 "nvme_io_md": false, 00:29:45.458 "write_zeroes": true, 00:29:45.458 "zcopy": false, 00:29:45.458 "get_zone_info": false, 00:29:45.458 "zone_management": false, 00:29:45.458 "zone_append": false, 00:29:45.458 "compare": true, 00:29:45.458 "compare_and_write": true, 00:29:45.458 "abort": true, 00:29:45.458 "seek_hole": false, 00:29:45.458 "seek_data": false, 00:29:45.458 "copy": true, 00:29:45.458 "nvme_iov_md": false 00:29:45.458 }, 00:29:45.458 "memory_domains": [ 00:29:45.458 { 00:29:45.458 "dma_device_id": "system", 00:29:45.458 "dma_device_type": 1 00:29:45.458 } 00:29:45.458 ], 00:29:45.458 "driver_specific": { 00:29:45.458 "nvme": [ 00:29:45.458 { 00:29:45.458 "trid": { 00:29:45.458 "trtype": "TCP", 00:29:45.458 "adrfam": "IPv4", 00:29:45.458 "traddr": "10.0.0.2", 00:29:45.458 "trsvcid": "4421", 00:29:45.458 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:45.458 }, 00:29:45.458 "ctrlr_data": { 00:29:45.458 "cntlid": 3, 00:29:45.458 "vendor_id": "0x8086", 00:29:45.458 "model_number": "SPDK bdev Controller", 00:29:45.458 "serial_number": "00000000000000000000", 00:29:45.458 "firmware_revision": "24.09.1", 00:29:45.458 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:45.458 "oacs": { 00:29:45.458 "security": 0, 00:29:45.458 "format": 0, 00:29:45.458 "firmware": 0, 00:29:45.458 "ns_manage": 0 00:29:45.458 }, 00:29:45.458 "multi_ctrlr": true, 00:29:45.458 "ana_reporting": false 00:29:45.458 }, 00:29:45.458 "vs": { 00:29:45.458 "nvme_version": "1.3" 00:29:45.458 }, 00:29:45.458 "ns_data": { 00:29:45.458 "id": 1, 00:29:45.458 "can_share": true 00:29:45.458 } 00:29:45.458 } 00:29:45.458 ], 00:29:45.458 "mp_policy": "active_passive" 00:29:45.458 } 00:29:45.458 } 00:29:45.458 ] 00:29:45.458 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.458 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:45.458 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.458 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:45.458 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.458 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.VPJyXzjJCX 00:29:45.459 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
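The TLS leg of the test above (the cntlid 3 connection on port 4421, with both ends logging that TLS support is experimental) adds a PSK to the target keyring and restricts the subsystem to a single host NQN before attaching over a secure channel. A sketch with the key material and NQNs taken from this run; the redirect of the interchange key into the mktemp file is implied by the script rather than visible in the trace:

KEY=$(mktemp)                                       # /tmp/tmp.VPJyXzjJCX in this run
echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$KEY"
chmod 0600 "$KEY"
scripts/rpc.py keyring_file_add_key key0 "$KEY"
scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0
rm -f "$KEY"                                        # removed once the test detaches the controller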
00:29:45.459 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:29:45.459 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # nvmfcleanup 00:29:45.459 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:29:45.459 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:45.459 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:29:45.459 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:45.459 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:45.459 rmmod nvme_tcp 00:29:45.459 rmmod nvme_fabrics 00:29:45.459 rmmod nvme_keyring 00:29:45.459 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:45.459 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:29:45.459 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:29:45.459 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@513 -- # '[' -n 1469762 ']' 00:29:45.459 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@514 -- # killprocess 1469762 00:29:45.459 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 1469762 ']' 00:29:45.459 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 1469762 00:29:45.459 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:29:45.459 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:45.459 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1469762 00:29:45.459 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:45.459 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:45.459 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1469762' 00:29:45.459 killing process with pid 1469762 00:29:45.459 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 1469762 00:29:45.459 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 1469762 00:29:45.718 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:29:45.718 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:29:45.718 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:29:45.718 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:29:45.718 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@787 -- # iptables-save 00:29:45.718 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:29:45.718 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@787 -- # iptables-restore 00:29:45.718 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:45.718 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:45.718 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 
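Teardown (nvmftestfini) reverses the setup: the nvme-tcp/nvme-fabrics modules are unloaded, the nvmf_tgt process (pid 1469762 here) is killed and reaped, the SPDK_NVMF-tagged iptables rule is dropped by filtering it out of iptables-save, and the namespace and leftover addresses are cleaned up. The _remove_spdk_ns helper runs with tracing suppressed, so the namespace deletion below is the assumed equivalent rather than a command copied from the trace:

modprobe -r nvme-tcp nvme-fabrics                   # the trace also shows nvme_keyring being removed
kill "$nvmfpid" && wait "$nvmfpid"                  # nvmfpid=1469762 in this run
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip netns delete cvl_0_0_ns_spdk                     # assumed body of _remove_spdk_ns (trace suppressed)
ip -4 addr flush cvl_0_1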
00:29:45.718 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:45.718 14:45:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:47.622 14:45:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:47.622 00:29:47.622 real 0m5.611s 00:29:47.622 user 0m2.141s 00:29:47.622 sys 0m1.899s 00:29:47.622 14:45:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:47.622 14:45:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:47.622 ************************************ 00:29:47.622 END TEST nvmf_async_init 00:29:47.622 ************************************ 00:29:47.880 14:45:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:47.880 14:45:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:47.880 14:45:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:47.880 14:45:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.880 ************************************ 00:29:47.880 START TEST dma 00:29:47.880 ************************************ 00:29:47.880 14:45:39 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:47.880 * Looking for test storage... 00:29:47.880 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:47.880 14:45:39 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:47.880 14:45:39 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # lcov --version 00:29:47.880 14:45:39 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:47.880 14:45:39 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:47.880 14:45:39 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:47.880 14:45:39 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:47.880 14:45:39 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:47.880 14:45:39 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:29:47.880 14:45:39 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:29:47.880 14:45:39 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:29:47.880 14:45:39 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:29:47.881 14:45:39 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:29:47.881 14:45:39 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:29:47.881 14:45:39 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:29:47.881 14:45:39 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:47.881 14:45:39 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:29:47.881 14:45:39 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:29:47.881 14:45:39 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:47.881 14:45:39 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:47.881 14:45:39 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:29:47.881 14:45:39 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:29:47.881 14:45:39 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:47.881 14:45:39 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:29:47.881 14:45:39 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:29:47.881 14:45:39 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:29:47.881 14:45:39 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:29:47.881 14:45:39 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:47.881 14:45:39 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:29:47.881 14:45:39 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:29:47.881 14:45:39 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:47.881 14:45:39 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:47.881 14:45:39 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:29:47.881 14:45:39 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:47.881 14:45:39 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:47.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:47.881 --rc genhtml_branch_coverage=1 00:29:47.881 --rc genhtml_function_coverage=1 00:29:47.881 --rc genhtml_legend=1 00:29:47.881 --rc geninfo_all_blocks=1 00:29:47.881 --rc geninfo_unexecuted_blocks=1 00:29:47.881 00:29:47.881 ' 00:29:47.881 14:45:39 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:47.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:47.881 --rc genhtml_branch_coverage=1 00:29:47.881 --rc genhtml_function_coverage=1 00:29:47.881 --rc genhtml_legend=1 00:29:47.881 --rc geninfo_all_blocks=1 00:29:47.881 --rc geninfo_unexecuted_blocks=1 00:29:47.881 00:29:47.881 ' 00:29:47.881 14:45:39 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:47.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:47.881 --rc genhtml_branch_coverage=1 00:29:47.881 --rc genhtml_function_coverage=1 00:29:47.881 --rc genhtml_legend=1 00:29:47.881 --rc geninfo_all_blocks=1 00:29:47.881 --rc geninfo_unexecuted_blocks=1 00:29:47.881 00:29:47.881 ' 00:29:47.881 14:45:39 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:47.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:47.881 --rc genhtml_branch_coverage=1 00:29:47.881 --rc genhtml_function_coverage=1 00:29:47.881 --rc genhtml_legend=1 00:29:47.881 --rc geninfo_all_blocks=1 00:29:47.881 --rc geninfo_unexecuted_blocks=1 00:29:47.881 00:29:47.881 ' 00:29:47.881 14:45:39 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:47.881 14:45:39 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:29:47.881 14:45:39 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:47.881 14:45:39 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:47.881 14:45:39 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:47.881 14:45:39 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:47.881 
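The lt/cmp_versions trace in the lcov probe above (here concluding that 1.15 < 2, i.e. an lcov 1.x option set is needed) is a field-wise version comparison: both strings are split on '.', '-' and ':' and compared numerically field by field, with the first difference deciding. A condensed sketch of that logic, assuming purely numeric fields rather than reproducing the full scripts/common.sh implementation:

version_lt() {                                      # succeeds when $1 < $2
    local -a a b
    IFS=.-: read -ra a <<< "$1"
    IFS=.-: read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for ((i = 0; i < n; i++)); do
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1   # first larger field: not less-than
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first smaller field: less-than
    done
    return 1                                        # all fields equal
}
version_lt 1.15 2 && echo 'use lcov 1.x options'    # 1 < 2 on the first field, so this prints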
14:45:39 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:47.881 14:45:39 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:47.881 14:45:39 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:47.881 14:45:39 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:47.881 14:45:39 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:47.881 14:45:39 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:47.881 14:45:39 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:47.881 14:45:39 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:47.881 14:45:39 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:47.881 14:45:39 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:47.881 14:45:39 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:47.881 14:45:39 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:47.881 14:45:39 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:47.881 14:45:39 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:29:47.881 14:45:39 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:47.881 14:45:39 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:47.881 14:45:39 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:47.881 14:45:39 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.881 14:45:39 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.881 14:45:39 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.881 14:45:39 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:29:47.881 14:45:39 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.881 14:45:39 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:29:47.881 14:45:39 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:47.881 14:45:39 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:47.881 14:45:39 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:47.881 14:45:39 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:47.881 14:45:39 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:47.881 14:45:39 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:47.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:47.881 14:45:39 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:47.881 14:45:39 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:47.881 14:45:39 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:47.881 14:45:39 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:29:47.881 14:45:39 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:29:47.881 00:29:47.881 real 0m0.167s 00:29:47.881 user 0m0.111s 00:29:47.881 sys 0m0.065s 00:29:47.881 14:45:39 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:47.881 14:45:39 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:29:47.881 ************************************ 00:29:47.881 END TEST dma 00:29:47.881 ************************************ 00:29:47.881 14:45:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:47.881 14:45:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:47.881 14:45:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:47.881 14:45:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.881 ************************************ 00:29:47.881 START TEST nvmf_identify 00:29:47.881 
************************************ 00:29:47.881 14:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:48.140 * Looking for test storage... 00:29:48.140 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:48.140 14:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:48.140 14:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lcov --version 00:29:48.140 14:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:48.140 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:48.140 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:48.140 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:48.140 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:48.140 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:29:48.140 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:29:48.140 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:29:48.140 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:29:48.140 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:29:48.140 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:29:48.140 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:29:48.140 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:48.140 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:29:48.140 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:29:48.140 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:48.140 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:48.140 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:29:48.140 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:29:48.140 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:48.140 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:29:48.140 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:29:48.140 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:29:48.140 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:29:48.140 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:48.140 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:29:48.140 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:29:48.140 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:48.140 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:48.140 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:29:48.140 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:48.140 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:48.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:48.140 --rc genhtml_branch_coverage=1 00:29:48.140 --rc genhtml_function_coverage=1 00:29:48.140 --rc genhtml_legend=1 00:29:48.140 --rc geninfo_all_blocks=1 00:29:48.140 --rc geninfo_unexecuted_blocks=1 00:29:48.140 00:29:48.140 ' 00:29:48.140 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:48.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:48.140 --rc genhtml_branch_coverage=1 00:29:48.140 --rc genhtml_function_coverage=1 00:29:48.140 --rc genhtml_legend=1 00:29:48.140 --rc geninfo_all_blocks=1 00:29:48.140 --rc geninfo_unexecuted_blocks=1 00:29:48.140 00:29:48.140 ' 00:29:48.140 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:48.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:48.140 --rc genhtml_branch_coverage=1 00:29:48.140 --rc genhtml_function_coverage=1 00:29:48.140 --rc genhtml_legend=1 00:29:48.140 --rc geninfo_all_blocks=1 00:29:48.140 --rc geninfo_unexecuted_blocks=1 00:29:48.140 00:29:48.140 ' 00:29:48.140 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:48.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:48.140 --rc genhtml_branch_coverage=1 00:29:48.140 --rc genhtml_function_coverage=1 00:29:48.140 --rc genhtml_legend=1 00:29:48.140 --rc geninfo_all_blocks=1 00:29:48.140 --rc geninfo_unexecuted_blocks=1 00:29:48.140 00:29:48.140 ' 00:29:48.140 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:48.140 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:29:48.140 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:48.140 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:48.140 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:48.140 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:48.140 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:48.140 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:48.140 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:48.140 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:48.140 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:48.140 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:48.140 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:48.140 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:48.140 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:48.140 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:48.140 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:48.140 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:48.140 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:48.140 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:29:48.140 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:48.140 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:48.140 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:48.140 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.140 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.140 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.140 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:29:48.140 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.140 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:29:48.140 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:48.140 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:48.140 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:48.140 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:48.140 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:48.140 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:48.141 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:48.141 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:48.141 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:48.141 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:48.141 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:48.141 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:48.141 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:29:48.141 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- 
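The "line 33: [: : integer expression expected" message in the trace above is bash complaining that '[' '' -eq 1 ']' feeds an empty string to a numeric test; the script tolerates it because the failed test simply takes the false branch. A defensive variant that avoids the noise is to default the flag before testing it (SOME_FLAG below is a placeholder, not the actual variable read at nvmf/common.sh line 33):

# Sketch: default an unset/empty flag to 0 so '[' never sees an empty operand.
# SOME_FLAG is hypothetical; substitute whichever flag the script actually reads.
if [[ ${SOME_FLAG:-0} -eq 1 ]]; then
    echo "flag enabled"
else
    echo "flag disabled or unset"
fi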
nvmf/common.sh@465 -- # '[' -z tcp ']' 00:29:48.141 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:48.141 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # prepare_net_devs 00:29:48.141 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@434 -- # local -g is_hw=no 00:29:48.141 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # remove_spdk_ns 00:29:48.141 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:48.141 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:48.141 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:48.141 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:29:48.141 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:29:48.141 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:29:48.141 14:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:50.674 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:50.674 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:29:50.674 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:50.674 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:50.674 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:50.674 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:50.674 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:50.674 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:29:50.674 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:50.674 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:29:50.674 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:29:50.674 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:29:50.674 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:29:50.674 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:29:50.674 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:29:50.674 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:50.674 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:50.674 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:50.674 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:50.674 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:50.675 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:50.675 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:50.675 
14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:50.675 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:50.675 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # is_hw=yes 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:50.675 14:45:42 
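The gather_supported_nvmf_pci_devs walk above builds ID tables for Intel E810/X722 and Mellanox parts, keeps the two E810 functions found on this host (0000:0a:00.0/1, device 0x159b bound to ice), and then resolves each PCI function to its kernel net device by globbing sysfs, which is where the cvl_0_0/cvl_0_1 names come from. The same sysfs lookup can be reproduced by hand, roughly like this (the PCI address is just the one from this run):

# Sketch: list the kernel net devices that sit behind one PCI function,
# mirroring the /sys/bus/pci/devices/$pci/net/* glob used by nvmf/common.sh.
pci=0000:0a:00.0
for dev in /sys/bus/pci/devices/"$pci"/net/*; do
    [[ -e $dev ]] || continue        # no netdev registered for this function
    echo "Found net device under $pci: ${dev##*/}"
done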
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:50.675 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:50.675 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:29:50.675 00:29:50.675 --- 10.0.0.2 ping statistics --- 00:29:50.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:50.675 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:50.675 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:50.675 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:29:50.675 00:29:50.675 --- 10.0.0.1 ping statistics --- 00:29:50.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:50.675 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # return 0 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1471904 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1471904 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 1471904 ']' 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:50.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:50.675 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:50.675 [2024-11-02 14:45:42.432754] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
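At this point nvmf_tcp_init has laid out the test network: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and given 10.0.0.2/24 (the target side), cvl_0_1 stays in the root namespace with 10.0.0.1/24 (the initiator side), TCP port 4420 is opened in iptables, both directions are ping-verified, nvme-tcp is loaded, and nvmf_tgt is started inside the namespace on four cores with all trace groups enabled. A condensed recap of those commands (illustrative, not a drop-in script; interface names are specific to this run and the binary path is assumed relative to the SPDK build):

# Target-side environment as set up by nvmftestinit/nvmf_tcp_init above.
sudo ip netns add cvl_0_0_ns_spdk
sudo ip link set cvl_0_0 netns cvl_0_0_ns_spdk
sudo ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
sudo ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
sudo ip link set cvl_0_1 up
sudo ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
sudo ip netns exec cvl_0_0_ns_spdk ip link set lo up
sudo iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
sudo modprobe nvme-tcp
sudo ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &    # backgrounded; the test then waits for /var/tmp/spdk.sock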
00:29:50.675 [2024-11-02 14:45:42.432850] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:50.675 [2024-11-02 14:45:42.506341] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:50.676 [2024-11-02 14:45:42.599068] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:50.676 [2024-11-02 14:45:42.599128] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:50.676 [2024-11-02 14:45:42.599150] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:50.676 [2024-11-02 14:45:42.599166] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:50.676 [2024-11-02 14:45:42.599181] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:50.676 [2024-11-02 14:45:42.599291] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:50.676 [2024-11-02 14:45:42.599353] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:29:50.676 [2024-11-02 14:45:42.599414] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:29:50.676 [2024-11-02 14:45:42.599419] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:50.676 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:50.676 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:29:50.937 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:50.937 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.937 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:50.937 [2024-11-02 14:45:42.731991] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:50.937 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.937 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:29:50.937 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:50.937 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:50.937 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:50.937 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.937 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:50.937 Malloc0 00:29:50.937 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.937 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:50.937 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.937 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:50.937 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.937 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:29:50.937 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.937 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:50.937 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.937 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:50.937 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.937 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:50.937 [2024-11-02 14:45:42.813302] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:50.937 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.937 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:50.937 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.937 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:50.937 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.937 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:29:50.937 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.937 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:50.937 [ 00:29:50.937 { 00:29:50.937 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:50.937 "subtype": "Discovery", 00:29:50.937 "listen_addresses": [ 00:29:50.937 { 00:29:50.937 "trtype": "TCP", 00:29:50.937 "adrfam": "IPv4", 00:29:50.937 "traddr": "10.0.0.2", 00:29:50.937 "trsvcid": "4420" 00:29:50.937 } 00:29:50.937 ], 00:29:50.937 "allow_any_host": true, 00:29:50.937 "hosts": [] 00:29:50.937 }, 00:29:50.937 { 00:29:50.937 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:50.937 "subtype": "NVMe", 00:29:50.937 "listen_addresses": [ 00:29:50.937 { 00:29:50.937 "trtype": "TCP", 00:29:50.937 "adrfam": "IPv4", 00:29:50.937 "traddr": "10.0.0.2", 00:29:50.937 "trsvcid": "4420" 00:29:50.937 } 00:29:50.937 ], 00:29:50.937 "allow_any_host": true, 00:29:50.937 "hosts": [], 00:29:50.937 "serial_number": "SPDK00000000000001", 00:29:50.937 "model_number": "SPDK bdev Controller", 00:29:50.937 "max_namespaces": 32, 00:29:50.937 "min_cntlid": 1, 00:29:50.938 "max_cntlid": 65519, 00:29:50.938 "namespaces": [ 00:29:50.938 { 00:29:50.938 "nsid": 1, 00:29:50.938 "bdev_name": "Malloc0", 00:29:50.938 "name": "Malloc0", 00:29:50.938 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:29:50.938 "eui64": "ABCDEF0123456789", 00:29:50.938 "uuid": "f6540a69-5c66-4557-a576-74d354af230e" 00:29:50.938 } 00:29:50.938 ] 00:29:50.938 } 00:29:50.938 ] 00:29:50.938 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.938 14:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:29:50.938 [2024-11-02 14:45:42.855483] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:29:50.938 [2024-11-02 14:45:42.855527] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1471942 ] 00:29:50.938 [2024-11-02 14:45:42.888634] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:29:50.938 [2024-11-02 14:45:42.888696] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:50.938 [2024-11-02 14:45:42.888706] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:50.938 [2024-11-02 14:45:42.888724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:50.938 [2024-11-02 14:45:42.888741] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:50.938 [2024-11-02 14:45:42.892721] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:29:50.938 [2024-11-02 14:45:42.892790] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1862210 0 00:29:50.938 [2024-11-02 14:45:42.900290] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:50.938 [2024-11-02 14:45:42.900313] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:50.938 [2024-11-02 14:45:42.900323] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:50.938 [2024-11-02 14:45:42.900329] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:50.938 [2024-11-02 14:45:42.900384] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.938 [2024-11-02 14:45:42.900399] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.938 [2024-11-02 14:45:42.900407] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1862210) 00:29:50.938 [2024-11-02 14:45:42.900427] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:50.938 [2024-11-02 14:45:42.900455] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18cc440, cid 0, qid 0 00:29:50.938 [2024-11-02 14:45:42.908285] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.938 [2024-11-02 14:45:42.908303] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.938 [2024-11-02 14:45:42.908310] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.938 [2024-11-02 14:45:42.908318] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18cc440) on tqpair=0x1862210 00:29:50.938 [2024-11-02 14:45:42.908336] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:50.938 [2024-11-02 14:45:42.908362] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:29:50.938 [2024-11-02 14:45:42.908372] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:29:50.938 [2024-11-02 14:45:42.908396] nvme_tcp.c: 
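Before the nvme_tcp DEBUG flood that follows, identify.sh has already stood the target up over its RPC socket and kicked off spdk_nvme_identify against the discovery subsystem. Expressed as direct scripts/rpc.py calls with the same arguments as the rpc_cmd trace above (illustrative; assumes the SPDK repo root as working directory and the default /var/tmp/spdk.sock):

# Subsystem setup as performed via rpc_cmd in host/identify.sh.
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_get_subsystems

# The discovery controller is then queried directly, which produces the
# identify report further down:
./build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
    -L all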
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.938 [2024-11-02 14:45:42.908405] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.938 [2024-11-02 14:45:42.908411] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1862210) 00:29:50.938 [2024-11-02 14:45:42.908423] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.938 [2024-11-02 14:45:42.908448] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18cc440, cid 0, qid 0 00:29:50.938 [2024-11-02 14:45:42.908587] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.938 [2024-11-02 14:45:42.908600] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.938 [2024-11-02 14:45:42.908607] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.938 [2024-11-02 14:45:42.908614] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18cc440) on tqpair=0x1862210 00:29:50.938 [2024-11-02 14:45:42.908624] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:29:50.938 [2024-11-02 14:45:42.908636] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:29:50.938 [2024-11-02 14:45:42.908649] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.938 [2024-11-02 14:45:42.908657] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.938 [2024-11-02 14:45:42.908663] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1862210) 00:29:50.938 [2024-11-02 14:45:42.908674] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.938 [2024-11-02 14:45:42.908695] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18cc440, cid 0, qid 0 00:29:50.938 [2024-11-02 14:45:42.908817] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.938 [2024-11-02 14:45:42.908832] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.938 [2024-11-02 14:45:42.908839] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.938 [2024-11-02 14:45:42.908846] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18cc440) on tqpair=0x1862210 00:29:50.938 [2024-11-02 14:45:42.908856] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:29:50.938 [2024-11-02 14:45:42.908870] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:29:50.938 [2024-11-02 14:45:42.908883] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.938 [2024-11-02 14:45:42.908890] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.938 [2024-11-02 14:45:42.908897] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1862210) 00:29:50.938 [2024-11-02 14:45:42.908907] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.938 [2024-11-02 14:45:42.908928] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18cc440, cid 0, qid 0 00:29:50.938 
[2024-11-02 14:45:42.909038] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.938 [2024-11-02 14:45:42.909050] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.938 [2024-11-02 14:45:42.909056] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.938 [2024-11-02 14:45:42.909063] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18cc440) on tqpair=0x1862210 00:29:50.938 [2024-11-02 14:45:42.909072] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:50.938 [2024-11-02 14:45:42.909096] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.938 [2024-11-02 14:45:42.909106] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.938 [2024-11-02 14:45:42.909112] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1862210) 00:29:50.938 [2024-11-02 14:45:42.909123] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.938 [2024-11-02 14:45:42.909143] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18cc440, cid 0, qid 0 00:29:50.938 [2024-11-02 14:45:42.909266] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.938 [2024-11-02 14:45:42.909280] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.938 [2024-11-02 14:45:42.909287] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.938 [2024-11-02 14:45:42.909298] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18cc440) on tqpair=0x1862210 00:29:50.938 [2024-11-02 14:45:42.909308] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:29:50.938 [2024-11-02 14:45:42.909317] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:29:50.938 [2024-11-02 14:45:42.909330] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:50.938 [2024-11-02 14:45:42.909441] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:29:50.938 [2024-11-02 14:45:42.909450] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:50.938 [2024-11-02 14:45:42.909466] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.938 [2024-11-02 14:45:42.909474] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.938 [2024-11-02 14:45:42.909480] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1862210) 00:29:50.938 [2024-11-02 14:45:42.909491] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.938 [2024-11-02 14:45:42.909513] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18cc440, cid 0, qid 0 00:29:50.938 [2024-11-02 14:45:42.909632] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.938 [2024-11-02 14:45:42.909647] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =5 00:29:50.938 [2024-11-02 14:45:42.909654] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.938 [2024-11-02 14:45:42.909660] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18cc440) on tqpair=0x1862210 00:29:50.938 [2024-11-02 14:45:42.909669] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:50.938 [2024-11-02 14:45:42.909685] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.938 [2024-11-02 14:45:42.909694] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.938 [2024-11-02 14:45:42.909700] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1862210) 00:29:50.938 [2024-11-02 14:45:42.909711] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.938 [2024-11-02 14:45:42.909732] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18cc440, cid 0, qid 0 00:29:50.938 [2024-11-02 14:45:42.909847] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.938 [2024-11-02 14:45:42.909862] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.938 [2024-11-02 14:45:42.909868] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.938 [2024-11-02 14:45:42.909875] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18cc440) on tqpair=0x1862210 00:29:50.938 [2024-11-02 14:45:42.909882] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:50.938 [2024-11-02 14:45:42.909891] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:29:50.939 [2024-11-02 14:45:42.909904] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:29:50.939 [2024-11-02 14:45:42.909918] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:29:50.939 [2024-11-02 14:45:42.909936] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.939 [2024-11-02 14:45:42.909944] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1862210) 00:29:50.939 [2024-11-02 14:45:42.909958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.939 [2024-11-02 14:45:42.909980] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18cc440, cid 0, qid 0 00:29:50.939 [2024-11-02 14:45:42.910144] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:50.939 [2024-11-02 14:45:42.910156] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:50.939 [2024-11-02 14:45:42.910162] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:50.939 [2024-11-02 14:45:42.910169] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1862210): datao=0, datal=4096, cccid=0 00:29:50.939 [2024-11-02 14:45:42.910177] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18cc440) on tqpair(0x1862210): expected_datao=0, 
payload_size=4096 00:29:50.939 [2024-11-02 14:45:42.910185] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.939 [2024-11-02 14:45:42.910203] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:50.939 [2024-11-02 14:45:42.910213] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:50.939 [2024-11-02 14:45:42.950364] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.939 [2024-11-02 14:45:42.950384] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.939 [2024-11-02 14:45:42.950391] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.939 [2024-11-02 14:45:42.950399] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18cc440) on tqpair=0x1862210 00:29:50.939 [2024-11-02 14:45:42.950412] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:29:50.939 [2024-11-02 14:45:42.950422] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:29:50.939 [2024-11-02 14:45:42.950430] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:29:50.939 [2024-11-02 14:45:42.950439] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:29:50.939 [2024-11-02 14:45:42.950448] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:29:50.939 [2024-11-02 14:45:42.950457] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:29:50.939 [2024-11-02 14:45:42.950471] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:29:50.939 [2024-11-02 14:45:42.950485] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.939 [2024-11-02 14:45:42.950492] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.939 [2024-11-02 14:45:42.950499] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1862210) 00:29:50.939 [2024-11-02 14:45:42.950511] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:50.939 [2024-11-02 14:45:42.950534] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18cc440, cid 0, qid 0 00:29:50.939 [2024-11-02 14:45:42.950655] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.939 [2024-11-02 14:45:42.950670] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.939 [2024-11-02 14:45:42.950677] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.939 [2024-11-02 14:45:42.950684] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18cc440) on tqpair=0x1862210 00:29:50.939 [2024-11-02 14:45:42.950698] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.939 [2024-11-02 14:45:42.950705] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.939 [2024-11-02 14:45:42.950712] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1862210) 00:29:50.939 [2024-11-02 14:45:42.950722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:50.939 [2024-11-02 14:45:42.950737] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.939 [2024-11-02 14:45:42.950746] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.939 [2024-11-02 14:45:42.950752] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1862210) 00:29:50.939 [2024-11-02 14:45:42.950761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:50.939 [2024-11-02 14:45:42.950771] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.939 [2024-11-02 14:45:42.950779] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.939 [2024-11-02 14:45:42.950785] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1862210) 00:29:50.939 [2024-11-02 14:45:42.950794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:50.939 [2024-11-02 14:45:42.950804] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.939 [2024-11-02 14:45:42.950811] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.939 [2024-11-02 14:45:42.950817] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1862210) 00:29:50.939 [2024-11-02 14:45:42.950826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:50.939 [2024-11-02 14:45:42.950835] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:29:50.939 [2024-11-02 14:45:42.950856] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:50.939 [2024-11-02 14:45:42.950870] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.939 [2024-11-02 14:45:42.950878] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1862210) 00:29:50.939 [2024-11-02 14:45:42.950889] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.939 [2024-11-02 14:45:42.950928] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18cc440, cid 0, qid 0 00:29:50.939 [2024-11-02 14:45:42.950939] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18cc5c0, cid 1, qid 0 00:29:50.939 [2024-11-02 14:45:42.950948] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18cc740, cid 2, qid 0 00:29:50.939 [2024-11-02 14:45:42.950955] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18cc8c0, cid 3, qid 0 00:29:50.939 [2024-11-02 14:45:42.950978] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18cca40, cid 4, qid 0 00:29:50.939 [2024-11-02 14:45:42.951123] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.939 [2024-11-02 14:45:42.951138] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.939 [2024-11-02 14:45:42.951146] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.939 [2024-11-02 14:45:42.951152] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x18cca40) on tqpair=0x1862210 00:29:50.939 [2024-11-02 14:45:42.951163] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:29:50.939 [2024-11-02 14:45:42.951172] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:29:50.939 [2024-11-02 14:45:42.951189] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.939 [2024-11-02 14:45:42.951199] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1862210) 00:29:50.939 [2024-11-02 14:45:42.951210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.939 [2024-11-02 14:45:42.951231] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18cca40, cid 4, qid 0 00:29:50.939 [2024-11-02 14:45:42.955284] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:50.939 [2024-11-02 14:45:42.955301] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:50.939 [2024-11-02 14:45:42.955308] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:50.939 [2024-11-02 14:45:42.955314] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1862210): datao=0, datal=4096, cccid=4 00:29:50.939 [2024-11-02 14:45:42.955322] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18cca40) on tqpair(0x1862210): expected_datao=0, payload_size=4096 00:29:50.939 [2024-11-02 14:45:42.955329] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.939 [2024-11-02 14:45:42.955339] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:50.939 [2024-11-02 14:45:42.955347] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:50.939 [2024-11-02 14:45:42.955355] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.939 [2024-11-02 14:45:42.955364] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.939 [2024-11-02 14:45:42.955370] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.939 [2024-11-02 14:45:42.955377] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18cca40) on tqpair=0x1862210 00:29:50.939 [2024-11-02 14:45:42.955412] nvme_ctrlr.c:4189:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:29:50.939 [2024-11-02 14:45:42.955455] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.939 [2024-11-02 14:45:42.955466] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1862210) 00:29:50.939 [2024-11-02 14:45:42.955477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.939 [2024-11-02 14:45:42.955490] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.939 [2024-11-02 14:45:42.955497] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.939 [2024-11-02 14:45:42.955504] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1862210) 00:29:50.939 [2024-11-02 14:45:42.955513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:50.939 [2024-11-02 
14:45:42.955537] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18cca40, cid 4, qid 0 00:29:50.939 [2024-11-02 14:45:42.955548] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ccbc0, cid 5, qid 0 00:29:50.939 [2024-11-02 14:45:42.955722] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:50.939 [2024-11-02 14:45:42.955737] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:50.939 [2024-11-02 14:45:42.955745] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:50.939 [2024-11-02 14:45:42.955751] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1862210): datao=0, datal=1024, cccid=4 00:29:50.939 [2024-11-02 14:45:42.955759] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18cca40) on tqpair(0x1862210): expected_datao=0, payload_size=1024 00:29:50.939 [2024-11-02 14:45:42.955767] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.939 [2024-11-02 14:45:42.955776] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:50.939 [2024-11-02 14:45:42.955784] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:50.939 [2024-11-02 14:45:42.955792] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.940 [2024-11-02 14:45:42.955801] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.940 [2024-11-02 14:45:42.955807] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.940 [2024-11-02 14:45:42.955814] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ccbc0) on tqpair=0x1862210 00:29:51.203 [2024-11-02 14:45:42.996386] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:51.203 [2024-11-02 14:45:42.996408] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:51.203 [2024-11-02 14:45:42.996416] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:51.203 [2024-11-02 14:45:42.996428] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18cca40) on tqpair=0x1862210 00:29:51.203 [2024-11-02 14:45:42.996452] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:51.203 [2024-11-02 14:45:42.996471] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1862210) 00:29:51.203 [2024-11-02 14:45:42.996485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.203 [2024-11-02 14:45:42.996518] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18cca40, cid 4, qid 0 00:29:51.203 [2024-11-02 14:45:42.996660] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:51.203 [2024-11-02 14:45:42.996677] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:51.203 [2024-11-02 14:45:42.996687] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:51.203 [2024-11-02 14:45:42.996694] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1862210): datao=0, datal=3072, cccid=4 00:29:51.203 [2024-11-02 14:45:42.996702] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18cca40) on tqpair(0x1862210): expected_datao=0, payload_size=3072 00:29:51.203 [2024-11-02 14:45:42.996709] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:51.203 [2024-11-02 14:45:42.996731] 
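The DEBUG output in this stretch is the host-side bring-up of the discovery controller: FABRIC CONNECT on the admin queue, property reads of VS and CAP, the CC.EN/CSTS.RDY disable-then-enable handshake, IDENTIFY, async-event and keep-alive configuration, and then GET LOG PAGE reads that fetch the discovery log in pieces (a 4096-byte read, a 3072-byte follow-up, and a final 8-byte read of the generation counter). When such a run is saved to a file, the state machine is easier to follow by filtering for the interesting markers (the filename below is a placeholder):

# Sketch: extract just the controller-init milestones from a saved copy of this log.
grep -E 'setting state to|CC\.EN|CSTS\.RDY|GET LOG PAGE' nvmf_identify.log | less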
nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:51.203 [2024-11-02 14:45:42.996741] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:51.203 [2024-11-02 14:45:43.037384] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:51.203 [2024-11-02 14:45:43.037404] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:51.203 [2024-11-02 14:45:43.037412] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:51.203 [2024-11-02 14:45:43.037419] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18cca40) on tqpair=0x1862210 00:29:51.203 [2024-11-02 14:45:43.037435] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:51.203 [2024-11-02 14:45:43.037443] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1862210) 00:29:51.203 [2024-11-02 14:45:43.037455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.203 [2024-11-02 14:45:43.037485] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18cca40, cid 4, qid 0 00:29:51.203 [2024-11-02 14:45:43.037635] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:51.203 [2024-11-02 14:45:43.037650] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:51.203 [2024-11-02 14:45:43.037657] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:51.203 [2024-11-02 14:45:43.037664] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1862210): datao=0, datal=8, cccid=4 00:29:51.203 [2024-11-02 14:45:43.037672] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18cca40) on tqpair(0x1862210): expected_datao=0, payload_size=8 00:29:51.203 [2024-11-02 14:45:43.037679] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:51.203 [2024-11-02 14:45:43.037689] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:51.203 [2024-11-02 14:45:43.037696] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:51.203 [2024-11-02 14:45:43.081272] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:51.203 [2024-11-02 14:45:43.081290] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:51.203 [2024-11-02 14:45:43.081312] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:51.203 [2024-11-02 14:45:43.081320] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18cca40) on tqpair=0x1862210 00:29:51.203 ===================================================== 00:29:51.203 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:29:51.203 ===================================================== 00:29:51.203 Controller Capabilities/Features 00:29:51.203 ================================ 00:29:51.203 Vendor ID: 0000 00:29:51.203 Subsystem Vendor ID: 0000 00:29:51.203 Serial Number: .................... 00:29:51.203 Model Number: ........................................ 
00:29:51.203 Firmware Version: 24.09.1 00:29:51.203 Recommended Arb Burst: 0 00:29:51.203 IEEE OUI Identifier: 00 00 00 00:29:51.203 Multi-path I/O 00:29:51.203 May have multiple subsystem ports: No 00:29:51.203 May have multiple controllers: No 00:29:51.203 Associated with SR-IOV VF: No 00:29:51.203 Max Data Transfer Size: 131072 00:29:51.203 Max Number of Namespaces: 0 00:29:51.203 Max Number of I/O Queues: 1024 00:29:51.203 NVMe Specification Version (VS): 1.3 00:29:51.203 NVMe Specification Version (Identify): 1.3 00:29:51.203 Maximum Queue Entries: 128 00:29:51.203 Contiguous Queues Required: Yes 00:29:51.203 Arbitration Mechanisms Supported 00:29:51.203 Weighted Round Robin: Not Supported 00:29:51.203 Vendor Specific: Not Supported 00:29:51.203 Reset Timeout: 15000 ms 00:29:51.203 Doorbell Stride: 4 bytes 00:29:51.203 NVM Subsystem Reset: Not Supported 00:29:51.203 Command Sets Supported 00:29:51.203 NVM Command Set: Supported 00:29:51.203 Boot Partition: Not Supported 00:29:51.203 Memory Page Size Minimum: 4096 bytes 00:29:51.203 Memory Page Size Maximum: 4096 bytes 00:29:51.203 Persistent Memory Region: Not Supported 00:29:51.203 Optional Asynchronous Events Supported 00:29:51.203 Namespace Attribute Notices: Not Supported 00:29:51.203 Firmware Activation Notices: Not Supported 00:29:51.203 ANA Change Notices: Not Supported 00:29:51.203 PLE Aggregate Log Change Notices: Not Supported 00:29:51.203 LBA Status Info Alert Notices: Not Supported 00:29:51.203 EGE Aggregate Log Change Notices: Not Supported 00:29:51.203 Normal NVM Subsystem Shutdown event: Not Supported 00:29:51.203 Zone Descriptor Change Notices: Not Supported 00:29:51.203 Discovery Log Change Notices: Supported 00:29:51.203 Controller Attributes 00:29:51.203 128-bit Host Identifier: Not Supported 00:29:51.203 Non-Operational Permissive Mode: Not Supported 00:29:51.203 NVM Sets: Not Supported 00:29:51.203 Read Recovery Levels: Not Supported 00:29:51.203 Endurance Groups: Not Supported 00:29:51.203 Predictable Latency Mode: Not Supported 00:29:51.203 Traffic Based Keep ALive: Not Supported 00:29:51.203 Namespace Granularity: Not Supported 00:29:51.203 SQ Associations: Not Supported 00:29:51.203 UUID List: Not Supported 00:29:51.203 Multi-Domain Subsystem: Not Supported 00:29:51.203 Fixed Capacity Management: Not Supported 00:29:51.203 Variable Capacity Management: Not Supported 00:29:51.204 Delete Endurance Group: Not Supported 00:29:51.204 Delete NVM Set: Not Supported 00:29:51.204 Extended LBA Formats Supported: Not Supported 00:29:51.204 Flexible Data Placement Supported: Not Supported 00:29:51.204 00:29:51.204 Controller Memory Buffer Support 00:29:51.204 ================================ 00:29:51.204 Supported: No 00:29:51.204 00:29:51.204 Persistent Memory Region Support 00:29:51.204 ================================ 00:29:51.204 Supported: No 00:29:51.204 00:29:51.204 Admin Command Set Attributes 00:29:51.204 ============================ 00:29:51.204 Security Send/Receive: Not Supported 00:29:51.204 Format NVM: Not Supported 00:29:51.204 Firmware Activate/Download: Not Supported 00:29:51.204 Namespace Management: Not Supported 00:29:51.204 Device Self-Test: Not Supported 00:29:51.204 Directives: Not Supported 00:29:51.204 NVMe-MI: Not Supported 00:29:51.204 Virtualization Management: Not Supported 00:29:51.204 Doorbell Buffer Config: Not Supported 00:29:51.204 Get LBA Status Capability: Not Supported 00:29:51.204 Command & Feature Lockdown Capability: Not Supported 00:29:51.204 Abort Command Limit: 1 00:29:51.204 
Async Event Request Limit: 4 00:29:51.204 Number of Firmware Slots: N/A 00:29:51.204 Firmware Slot 1 Read-Only: N/A 00:29:51.204 Firmware Activation Without Reset: N/A 00:29:51.204 Multiple Update Detection Support: N/A 00:29:51.204 Firmware Update Granularity: No Information Provided 00:29:51.204 Per-Namespace SMART Log: No 00:29:51.204 Asymmetric Namespace Access Log Page: Not Supported 00:29:51.204 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:29:51.204 Command Effects Log Page: Not Supported 00:29:51.204 Get Log Page Extended Data: Supported 00:29:51.204 Telemetry Log Pages: Not Supported 00:29:51.204 Persistent Event Log Pages: Not Supported 00:29:51.204 Supported Log Pages Log Page: May Support 00:29:51.204 Commands Supported & Effects Log Page: Not Supported 00:29:51.204 Feature Identifiers & Effects Log Page:May Support 00:29:51.204 NVMe-MI Commands & Effects Log Page: May Support 00:29:51.204 Data Area 4 for Telemetry Log: Not Supported 00:29:51.204 Error Log Page Entries Supported: 128 00:29:51.204 Keep Alive: Not Supported 00:29:51.204 00:29:51.204 NVM Command Set Attributes 00:29:51.204 ========================== 00:29:51.204 Submission Queue Entry Size 00:29:51.204 Max: 1 00:29:51.204 Min: 1 00:29:51.204 Completion Queue Entry Size 00:29:51.204 Max: 1 00:29:51.204 Min: 1 00:29:51.204 Number of Namespaces: 0 00:29:51.204 Compare Command: Not Supported 00:29:51.204 Write Uncorrectable Command: Not Supported 00:29:51.204 Dataset Management Command: Not Supported 00:29:51.204 Write Zeroes Command: Not Supported 00:29:51.204 Set Features Save Field: Not Supported 00:29:51.204 Reservations: Not Supported 00:29:51.204 Timestamp: Not Supported 00:29:51.204 Copy: Not Supported 00:29:51.204 Volatile Write Cache: Not Present 00:29:51.204 Atomic Write Unit (Normal): 1 00:29:51.204 Atomic Write Unit (PFail): 1 00:29:51.204 Atomic Compare & Write Unit: 1 00:29:51.204 Fused Compare & Write: Supported 00:29:51.204 Scatter-Gather List 00:29:51.204 SGL Command Set: Supported 00:29:51.204 SGL Keyed: Supported 00:29:51.204 SGL Bit Bucket Descriptor: Not Supported 00:29:51.204 SGL Metadata Pointer: Not Supported 00:29:51.204 Oversized SGL: Not Supported 00:29:51.204 SGL Metadata Address: Not Supported 00:29:51.204 SGL Offset: Supported 00:29:51.204 Transport SGL Data Block: Not Supported 00:29:51.204 Replay Protected Memory Block: Not Supported 00:29:51.204 00:29:51.204 Firmware Slot Information 00:29:51.204 ========================= 00:29:51.204 Active slot: 0 00:29:51.204 00:29:51.204 00:29:51.204 Error Log 00:29:51.204 ========= 00:29:51.204 00:29:51.204 Active Namespaces 00:29:51.204 ================= 00:29:51.204 Discovery Log Page 00:29:51.204 ================== 00:29:51.204 Generation Counter: 2 00:29:51.204 Number of Records: 2 00:29:51.204 Record Format: 0 00:29:51.204 00:29:51.204 Discovery Log Entry 0 00:29:51.204 ---------------------- 00:29:51.204 Transport Type: 3 (TCP) 00:29:51.204 Address Family: 1 (IPv4) 00:29:51.204 Subsystem Type: 3 (Current Discovery Subsystem) 00:29:51.204 Entry Flags: 00:29:51.204 Duplicate Returned Information: 1 00:29:51.204 Explicit Persistent Connection Support for Discovery: 1 00:29:51.204 Transport Requirements: 00:29:51.204 Secure Channel: Not Required 00:29:51.204 Port ID: 0 (0x0000) 00:29:51.204 Controller ID: 65535 (0xffff) 00:29:51.204 Admin Max SQ Size: 128 00:29:51.204 Transport Service Identifier: 4420 00:29:51.204 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:29:51.204 Transport Address: 10.0.0.2 00:29:51.204 
Discovery Log Entry 1 00:29:51.204 ---------------------- 00:29:51.204 Transport Type: 3 (TCP) 00:29:51.204 Address Family: 1 (IPv4) 00:29:51.204 Subsystem Type: 2 (NVM Subsystem) 00:29:51.204 Entry Flags: 00:29:51.204 Duplicate Returned Information: 0 00:29:51.204 Explicit Persistent Connection Support for Discovery: 0 00:29:51.204 Transport Requirements: 00:29:51.204 Secure Channel: Not Required 00:29:51.204 Port ID: 0 (0x0000) 00:29:51.204 Controller ID: 65535 (0xffff) 00:29:51.204 Admin Max SQ Size: 128 00:29:51.204 Transport Service Identifier: 4420 00:29:51.204 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:29:51.204 Transport Address: 10.0.0.2 [2024-11-02 14:45:43.081434] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:29:51.204 [2024-11-02 14:45:43.081457] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18cc440) on tqpair=0x1862210 00:29:51.204 [2024-11-02 14:45:43.081471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.204 [2024-11-02 14:45:43.081485] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18cc5c0) on tqpair=0x1862210 00:29:51.204 [2024-11-02 14:45:43.081493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.204 [2024-11-02 14:45:43.081501] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18cc740) on tqpair=0x1862210 00:29:51.204 [2024-11-02 14:45:43.081508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.204 [2024-11-02 14:45:43.081516] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18cc8c0) on tqpair=0x1862210 00:29:51.204 [2024-11-02 14:45:43.081524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.204 [2024-11-02 14:45:43.081538] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:51.204 [2024-11-02 14:45:43.081546] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:51.204 [2024-11-02 14:45:43.081552] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1862210) 00:29:51.204 [2024-11-02 14:45:43.081563] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.204 [2024-11-02 14:45:43.081591] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18cc8c0, cid 3, qid 0 00:29:51.204 [2024-11-02 14:45:43.081706] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:51.204 [2024-11-02 14:45:43.081721] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:51.204 [2024-11-02 14:45:43.081728] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:51.204 [2024-11-02 14:45:43.081735] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18cc8c0) on tqpair=0x1862210 00:29:51.204 [2024-11-02 14:45:43.081747] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:51.204 [2024-11-02 14:45:43.081755] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:51.204 [2024-11-02 14:45:43.081761] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1862210) 00:29:51.204 [2024-11-02 
14:45:43.081772] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.204 [2024-11-02 14:45:43.081799] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18cc8c0, cid 3, qid 0 00:29:51.204 [2024-11-02 14:45:43.081929] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:51.204 [2024-11-02 14:45:43.081943] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:51.204 [2024-11-02 14:45:43.081950] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:51.204 [2024-11-02 14:45:43.081957] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18cc8c0) on tqpair=0x1862210 00:29:51.204 [2024-11-02 14:45:43.081967] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:29:51.204 [2024-11-02 14:45:43.081981] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:29:51.204 [2024-11-02 14:45:43.081998] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:51.204 [2024-11-02 14:45:43.082007] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:51.204 [2024-11-02 14:45:43.082014] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1862210) 00:29:51.204 [2024-11-02 14:45:43.082024] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.204 [2024-11-02 14:45:43.082045] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18cc8c0, cid 3, qid 0 00:29:51.204 [2024-11-02 14:45:43.082173] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:51.204 [2024-11-02 14:45:43.082185] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:51.204 [2024-11-02 14:45:43.082192] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:51.205 [2024-11-02 14:45:43.082203] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18cc8c0) on tqpair=0x1862210 00:29:51.205 [2024-11-02 14:45:43.082221] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:51.205 [2024-11-02 14:45:43.082230] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:51.205 [2024-11-02 14:45:43.082237] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1862210) 00:29:51.205 [2024-11-02 14:45:43.082247] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.205 [2024-11-02 14:45:43.082275] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18cc8c0, cid 3, qid 0 00:29:51.205 [2024-11-02 14:45:43.082387] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:51.205 [2024-11-02 14:45:43.082399] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:51.205 [2024-11-02 14:45:43.082405] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:51.205 [2024-11-02 14:45:43.082412] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18cc8c0) on tqpair=0x1862210 00:29:51.205 [2024-11-02 14:45:43.082428] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:51.205 [2024-11-02 14:45:43.082436] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:51.205 [2024-11-02 14:45:43.082443] 
nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1862210) 00:29:51.205 [2024-11-02 14:45:43.082453] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.205 [2024-11-02 14:45:43.082474] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18cc8c0, cid 3, qid 0 00:29:51.205 [2024-11-02 14:45:43.082583] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:51.205 [2024-11-02 14:45:43.082595] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:51.205 [2024-11-02 14:45:43.082602] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:51.205 [2024-11-02 14:45:43.082608] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18cc8c0) on tqpair=0x1862210 00:29:51.205 [2024-11-02 14:45:43.082624] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:51.205 [2024-11-02 14:45:43.082633] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:51.205 [2024-11-02 14:45:43.082639] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1862210) 00:29:51.205 [2024-11-02 14:45:43.082650] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.205 [2024-11-02 14:45:43.082670] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18cc8c0, cid 3, qid 0 00:29:51.205 [2024-11-02 14:45:43.082795] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:51.205 [2024-11-02 14:45:43.082806] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:51.205 [2024-11-02 14:45:43.082813] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:51.205 [2024-11-02 14:45:43.082820] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18cc8c0) on tqpair=0x1862210 00:29:51.205 [2024-11-02 14:45:43.082835] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:51.205 [2024-11-02 14:45:43.082843] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:51.205 [2024-11-02 14:45:43.082850] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1862210) 00:29:51.205 [2024-11-02 14:45:43.082860] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.205 [2024-11-02 14:45:43.082880] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18cc8c0, cid 3, qid 0 00:29:51.205 [2024-11-02 14:45:43.082989] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:51.205 [2024-11-02 14:45:43.083000] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:51.205 [2024-11-02 14:45:43.083007] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:51.205 [2024-11-02 14:45:43.083014] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18cc8c0) on tqpair=0x1862210 00:29:51.205 [2024-11-02 14:45:43.083036] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:51.205 [2024-11-02 14:45:43.083045] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:51.205 [2024-11-02 14:45:43.083052] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1862210) 00:29:51.205 [2024-11-02 14:45:43.083062] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.205 [2024-11-02 14:45:43.083082] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18cc8c0, cid 3, qid 0 00:29:51.205 [2024-11-02 14:45:43.083194] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:51.205 [2024-11-02 14:45:43.083205] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:51.205 [2024-11-02 14:45:43.083212] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:51.205 [2024-11-02 14:45:43.083219] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18cc8c0) on tqpair=0x1862210 00:29:51.205 [2024-11-02 14:45:43.083234] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:51.205 [2024-11-02 14:45:43.083243] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:51.205 [2024-11-02 14:45:43.083249] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1862210) 00:29:51.205 [2024-11-02 14:45:43.083270] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.205 [2024-11-02 14:45:43.083293] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18cc8c0, cid 3, qid 0 00:29:51.205 [2024-11-02 14:45:43.083410] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:51.205 [2024-11-02 14:45:43.083425] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:51.205 [2024-11-02 14:45:43.083432] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:51.205 [2024-11-02 14:45:43.083438] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18cc8c0) on tqpair=0x1862210 00:29:51.205 [2024-11-02 14:45:43.083454] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:51.205 [2024-11-02 14:45:43.083463] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:51.205 [2024-11-02 14:45:43.083470] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1862210) 00:29:51.205 [2024-11-02 14:45:43.083480] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.205 [2024-11-02 14:45:43.083501] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18cc8c0, cid 3, qid 0 00:29:51.205 [2024-11-02 14:45:43.083611] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:51.205 [2024-11-02 14:45:43.083626] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:51.205 [2024-11-02 14:45:43.083632] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:51.205 [2024-11-02 14:45:43.083639] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18cc8c0) on tqpair=0x1862210 00:29:51.205 [2024-11-02 14:45:43.083655] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:51.205 [2024-11-02 14:45:43.083664] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:51.205 [2024-11-02 14:45:43.083671] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1862210) 00:29:51.205 [2024-11-02 14:45:43.083681] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.205 [2024-11-02 14:45:43.083701] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18cc8c0, cid 3, qid 0 00:29:51.205 
[2024-11-02 14:45:43.083809] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:51.205 [2024-11-02 14:45:43.083824] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:51.205 [2024-11-02 14:45:43.083831] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:51.205 [2024-11-02 14:45:43.083837] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18cc8c0) on tqpair=0x1862210 00:29:51.205 [2024-11-02 14:45:43.083854] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:51.205 [2024-11-02 14:45:43.083867] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:51.205 [2024-11-02 14:45:43.083874] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1862210) 00:29:51.205 [2024-11-02 14:45:43.083884] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.205 [2024-11-02 14:45:43.083905] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18cc8c0, cid 3, qid 0 00:29:51.205 [2024-11-02 14:45:43.084018] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:51.205 [2024-11-02 14:45:43.084033] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:51.205 [2024-11-02 14:45:43.084040] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:51.205 [2024-11-02 14:45:43.084046] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18cc8c0) on tqpair=0x1862210 00:29:51.205 [2024-11-02 14:45:43.084063] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:51.205 [2024-11-02 14:45:43.084072] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:51.205 [2024-11-02 14:45:43.084078] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1862210) 00:29:51.205 [2024-11-02 14:45:43.084088] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.205 [2024-11-02 14:45:43.084109] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18cc8c0, cid 3, qid 0 00:29:51.205 [2024-11-02 14:45:43.084218] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:51.205 [2024-11-02 14:45:43.084229] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:51.205 [2024-11-02 14:45:43.084236] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:51.205 [2024-11-02 14:45:43.084243] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18cc8c0) on tqpair=0x1862210 00:29:51.205 [2024-11-02 14:45:43.084266] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:51.205 [2024-11-02 14:45:43.084276] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:51.205 [2024-11-02 14:45:43.084283] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1862210) 00:29:51.205 [2024-11-02 14:45:43.084293] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.205 [2024-11-02 14:45:43.084314] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18cc8c0, cid 3, qid 0 00:29:51.205 [2024-11-02 14:45:43.084425] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:51.205 [2024-11-02 14:45:43.084439] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
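The run of FABRIC PROPERTY GET records above corresponds to the host driver polling controller status while it shuts the discovery controller down (RTD3E = 0 us, shutdown timeout = 10000 ms, with completion reported a few records further on). A minimal sketch of how an application reaches this teardown path through the public SPDK host API, assuming a controller handle previously obtained from spdk_nvme_connect():

#include <stdio.h>
#include "spdk/nvme.h"

/* Sketch only: detaching a controller handle is what triggers the shutdown
 * sequence traced above (shutdown notification via CC, then polling CSTS
 * over the admin queue until the controller reports shutdown complete). */
static void release_controller(struct spdk_nvme_ctrlr *ctrlr)
{
        if (spdk_nvme_detach(ctrlr) != 0) {
                fprintf(stderr, "spdk_nvme_detach failed\n");
        }
}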
00:29:51.205 [2024-11-02 14:45:43.084446] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:51.205 [2024-11-02 14:45:43.084453] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18cc8c0) on tqpair=0x1862210 00:29:51.205 [2024-11-02 14:45:43.084469] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:51.205 [2024-11-02 14:45:43.084478] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:51.205 [2024-11-02 14:45:43.084484] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1862210) 00:29:51.205 [2024-11-02 14:45:43.084495] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.205 [2024-11-02 14:45:43.084515] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18cc8c0, cid 3, qid 0 00:29:51.205 [2024-11-02 14:45:43.084625] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:51.205 [2024-11-02 14:45:43.084640] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:51.205 [2024-11-02 14:45:43.084647] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:51.206 [2024-11-02 14:45:43.084653] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18cc8c0) on tqpair=0x1862210 00:29:51.206 [2024-11-02 14:45:43.084669] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:51.206 [2024-11-02 14:45:43.084678] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:51.206 [2024-11-02 14:45:43.084688] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1862210) 00:29:51.206 [2024-11-02 14:45:43.084700] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.206 [2024-11-02 14:45:43.084720] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18cc8c0, cid 3, qid 0 00:29:51.206 [2024-11-02 14:45:43.084828] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:51.206 [2024-11-02 14:45:43.084843] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:51.206 [2024-11-02 14:45:43.084849] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:51.206 [2024-11-02 14:45:43.084856] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18cc8c0) on tqpair=0x1862210 00:29:51.206 [2024-11-02 14:45:43.084872] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:51.206 [2024-11-02 14:45:43.084881] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:51.206 [2024-11-02 14:45:43.084888] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1862210) 00:29:51.206 [2024-11-02 14:45:43.084898] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.206 [2024-11-02 14:45:43.084919] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18cc8c0, cid 3, qid 0 00:29:51.206 [2024-11-02 14:45:43.085026] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:51.206 [2024-11-02 14:45:43.085041] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:51.206 [2024-11-02 14:45:43.085048] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:51.206 [2024-11-02 14:45:43.085054] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x18cc8c0) on tqpair=0x1862210 00:29:51.206 [2024-11-02 14:45:43.085070] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:51.206 [2024-11-02 14:45:43.085079] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:51.206 [2024-11-02 14:45:43.085086] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1862210) 00:29:51.206 [2024-11-02 14:45:43.085096] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.206 [2024-11-02 14:45:43.085117] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18cc8c0, cid 3, qid 0 00:29:51.206 [2024-11-02 14:45:43.085225] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:51.206 [2024-11-02 14:45:43.085237] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:51.206 [2024-11-02 14:45:43.085244] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:51.206 [2024-11-02 14:45:43.085251] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18cc8c0) on tqpair=0x1862210 00:29:51.206 [2024-11-02 14:45:43.089280] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:51.206 [2024-11-02 14:45:43.089293] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:51.206 [2024-11-02 14:45:43.089299] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1862210) 00:29:51.206 [2024-11-02 14:45:43.089325] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.206 [2024-11-02 14:45:43.089348] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18cc8c0, cid 3, qid 0 00:29:51.206 [2024-11-02 14:45:43.089480] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:51.206 [2024-11-02 14:45:43.089495] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:51.206 [2024-11-02 14:45:43.089502] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:51.206 [2024-11-02 14:45:43.089509] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18cc8c0) on tqpair=0x1862210 00:29:51.206 [2024-11-02 14:45:43.089522] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:29:51.206 00:29:51.206 14:45:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:29:51.206 [2024-11-02 14:45:43.126140] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
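The identify pass above is driven by the spdk_nvme_identify example binary with the transport-ID string echoed in the command line. Roughly the same connection setup can be reproduced with the public SPDK host API; the following is a minimal sketch, assuming the TCP target shown in the log (the application name "identify_sketch" is only a placeholder):

#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

/* Sketch only: connect to the subsystem targeted by the identify run above
 * and print a couple of fields from the controller identify data. */
int main(void)
{
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid = {0};
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;

        spdk_env_opts_init(&env_opts);
        env_opts.name = "identify_sketch";      /* placeholder name, not from the log */
        if (spdk_env_init(&env_opts) != 0) {
                return 1;
        }

        /* Same transport ID string that host/identify.sh passes via -r. */
        if (spdk_nvme_transport_id_parse(&trid,
            "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
            "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
                fprintf(stderr, "failed to parse transport ID\n");
                return 1;
        }

        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
                fprintf(stderr, "spdk_nvme_connect failed\n");
                return 1;
        }

        cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("Model Number: %.40s\n", (const char *)cdata->mn);
        printf("Firmware Version: %.8s\n", (const char *)cdata->fr);

        spdk_nvme_detach(ctrlr);
        return 0;
}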
00:29:51.206 [2024-11-02 14:45:43.126186] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1472049 ] 00:29:51.206 [2024-11-02 14:45:43.160984] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:29:51.206 [2024-11-02 14:45:43.161038] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:51.206 [2024-11-02 14:45:43.161047] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:51.206 [2024-11-02 14:45:43.161063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:51.206 [2024-11-02 14:45:43.161077] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:51.206 [2024-11-02 14:45:43.161540] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:29:51.206 [2024-11-02 14:45:43.161596] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xe93210 0 00:29:51.206 [2024-11-02 14:45:43.168282] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:51.206 [2024-11-02 14:45:43.168303] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:51.206 [2024-11-02 14:45:43.168310] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:51.206 [2024-11-02 14:45:43.168316] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:51.206 [2024-11-02 14:45:43.168345] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:51.206 [2024-11-02 14:45:43.168356] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:51.206 [2024-11-02 14:45:43.168363] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe93210) 00:29:51.206 [2024-11-02 14:45:43.168377] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:51.206 [2024-11-02 14:45:43.168403] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefd440, cid 0, qid 0 00:29:51.206 [2024-11-02 14:45:43.175283] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:51.206 [2024-11-02 14:45:43.175309] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:51.206 [2024-11-02 14:45:43.175317] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:51.206 [2024-11-02 14:45:43.175324] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefd440) on tqpair=0xe93210 00:29:51.206 [2024-11-02 14:45:43.175342] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:51.206 [2024-11-02 14:45:43.175352] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:29:51.206 [2024-11-02 14:45:43.175361] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:29:51.206 [2024-11-02 14:45:43.175378] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:51.206 [2024-11-02 14:45:43.175387] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:51.206 [2024-11-02 14:45:43.175393] nvme_tcp.c: 
986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe93210) 00:29:51.206 [2024-11-02 14:45:43.175404] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.206 [2024-11-02 14:45:43.175428] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefd440, cid 0, qid 0 00:29:51.206 [2024-11-02 14:45:43.175588] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:51.206 [2024-11-02 14:45:43.175604] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:51.206 [2024-11-02 14:45:43.175611] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:51.206 [2024-11-02 14:45:43.175618] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefd440) on tqpair=0xe93210 00:29:51.206 [2024-11-02 14:45:43.175626] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:29:51.206 [2024-11-02 14:45:43.175639] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:29:51.206 [2024-11-02 14:45:43.175652] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:51.206 [2024-11-02 14:45:43.175660] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:51.206 [2024-11-02 14:45:43.175666] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe93210) 00:29:51.206 [2024-11-02 14:45:43.175677] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.206 [2024-11-02 14:45:43.175699] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefd440, cid 0, qid 0 00:29:51.206 [2024-11-02 14:45:43.175815] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:51.206 [2024-11-02 14:45:43.175827] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:51.206 [2024-11-02 14:45:43.175834] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:51.206 [2024-11-02 14:45:43.175841] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefd440) on tqpair=0xe93210 00:29:51.206 [2024-11-02 14:45:43.175849] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:29:51.206 [2024-11-02 14:45:43.175862] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:29:51.206 [2024-11-02 14:45:43.175875] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:51.206 [2024-11-02 14:45:43.175882] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:51.206 [2024-11-02 14:45:43.175888] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe93210) 00:29:51.206 [2024-11-02 14:45:43.175899] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.206 [2024-11-02 14:45:43.175920] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefd440, cid 0, qid 0 00:29:51.206 [2024-11-02 14:45:43.176032] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:51.206 [2024-11-02 14:45:43.176045] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:51.206 [2024-11-02 14:45:43.176051] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:51.206 [2024-11-02 14:45:43.176058] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefd440) on tqpair=0xe93210 00:29:51.206 [2024-11-02 14:45:43.176066] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:51.206 [2024-11-02 14:45:43.176082] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:51.206 [2024-11-02 14:45:43.176091] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:51.206 [2024-11-02 14:45:43.176097] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe93210) 00:29:51.206 [2024-11-02 14:45:43.176108] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.206 [2024-11-02 14:45:43.176129] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefd440, cid 0, qid 0 00:29:51.207 [2024-11-02 14:45:43.176237] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:51.207 [2024-11-02 14:45:43.176252] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:51.207 [2024-11-02 14:45:43.176267] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:51.207 [2024-11-02 14:45:43.176275] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefd440) on tqpair=0xe93210 00:29:51.207 [2024-11-02 14:45:43.176286] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:29:51.207 [2024-11-02 14:45:43.176295] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:29:51.207 [2024-11-02 14:45:43.176309] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:51.207 [2024-11-02 14:45:43.176418] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:29:51.207 [2024-11-02 14:45:43.176426] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:51.207 [2024-11-02 14:45:43.176437] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:51.207 [2024-11-02 14:45:43.176445] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:51.207 [2024-11-02 14:45:43.176451] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe93210) 00:29:51.207 [2024-11-02 14:45:43.176477] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.207 [2024-11-02 14:45:43.176499] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefd440, cid 0, qid 0 00:29:51.207 [2024-11-02 14:45:43.176665] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:51.207 [2024-11-02 14:45:43.176678] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:51.207 [2024-11-02 14:45:43.176684] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:51.207 [2024-11-02 14:45:43.176691] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefd440) on tqpair=0xe93210 00:29:51.207 [2024-11-02 14:45:43.176699] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:51.207 [2024-11-02 14:45:43.176715] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:51.207 [2024-11-02 14:45:43.176724] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:51.207 [2024-11-02 14:45:43.176730] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe93210) 00:29:51.207 [2024-11-02 14:45:43.176741] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.207 [2024-11-02 14:45:43.176762] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefd440, cid 0, qid 0 00:29:51.207 [2024-11-02 14:45:43.176875] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:51.207 [2024-11-02 14:45:43.176887] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:51.207 [2024-11-02 14:45:43.176894] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:51.207 [2024-11-02 14:45:43.176900] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefd440) on tqpair=0xe93210 00:29:51.207 [2024-11-02 14:45:43.176908] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:51.207 [2024-11-02 14:45:43.176916] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:29:51.207 [2024-11-02 14:45:43.176928] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:29:51.207 [2024-11-02 14:45:43.176942] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:29:51.207 [2024-11-02 14:45:43.176956] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:51.207 [2024-11-02 14:45:43.176963] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe93210) 00:29:51.207 [2024-11-02 14:45:43.176974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.207 [2024-11-02 14:45:43.176998] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefd440, cid 0, qid 0 00:29:51.207 [2024-11-02 14:45:43.177149] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:51.207 [2024-11-02 14:45:43.177161] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:51.207 [2024-11-02 14:45:43.177168] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:51.207 [2024-11-02 14:45:43.177174] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe93210): datao=0, datal=4096, cccid=0 00:29:51.207 [2024-11-02 14:45:43.177181] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xefd440) on tqpair(0xe93210): expected_datao=0, payload_size=4096 00:29:51.207 [2024-11-02 14:45:43.177189] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:51.207 [2024-11-02 14:45:43.177205] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:51.207 [2024-11-02 14:45:43.177214] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:51.207 [2024-11-02 
14:45:43.177280] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:51.207 [2024-11-02 14:45:43.177294] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:51.207 [2024-11-02 14:45:43.177300] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:51.207 [2024-11-02 14:45:43.177307] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefd440) on tqpair=0xe93210 00:29:51.207 [2024-11-02 14:45:43.177317] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:29:51.207 [2024-11-02 14:45:43.177325] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:29:51.207 [2024-11-02 14:45:43.177332] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:29:51.207 [2024-11-02 14:45:43.177339] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:29:51.207 [2024-11-02 14:45:43.177346] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:29:51.207 [2024-11-02 14:45:43.177354] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:29:51.207 [2024-11-02 14:45:43.177368] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:29:51.207 [2024-11-02 14:45:43.177380] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:51.207 [2024-11-02 14:45:43.177387] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:51.207 [2024-11-02 14:45:43.177393] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe93210) 00:29:51.207 [2024-11-02 14:45:43.177404] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:51.207 [2024-11-02 14:45:43.177426] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefd440, cid 0, qid 0 00:29:51.207 [2024-11-02 14:45:43.177538] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:51.207 [2024-11-02 14:45:43.177550] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:51.207 [2024-11-02 14:45:43.177557] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:51.207 [2024-11-02 14:45:43.177564] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefd440) on tqpair=0xe93210 00:29:51.207 [2024-11-02 14:45:43.177574] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:51.207 [2024-11-02 14:45:43.177581] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:51.207 [2024-11-02 14:45:43.177587] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe93210) 00:29:51.207 [2024-11-02 14:45:43.177597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:51.207 [2024-11-02 14:45:43.177607] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:51.207 [2024-11-02 14:45:43.177614] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:51.207 [2024-11-02 14:45:43.177623] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xe93210) 00:29:51.207 
[2024-11-02 14:45:43.177633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:51.207 [2024-11-02 14:45:43.177643] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:51.207 [2024-11-02 14:45:43.177649] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:51.207 [2024-11-02 14:45:43.177655] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xe93210) 00:29:51.207 [2024-11-02 14:45:43.177664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:51.207 [2024-11-02 14:45:43.177673] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:51.207 [2024-11-02 14:45:43.177680] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:51.207 [2024-11-02 14:45:43.177686] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe93210) 00:29:51.207 [2024-11-02 14:45:43.177694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:51.207 [2024-11-02 14:45:43.177703] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:29:51.207 [2024-11-02 14:45:43.177721] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:51.207 [2024-11-02 14:45:43.177734] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:51.207 [2024-11-02 14:45:43.177741] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe93210) 00:29:51.207 [2024-11-02 14:45:43.177752] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.207 [2024-11-02 14:45:43.177774] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefd440, cid 0, qid 0 00:29:51.208 [2024-11-02 14:45:43.177785] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefd5c0, cid 1, qid 0 00:29:51.208 [2024-11-02 14:45:43.177793] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefd740, cid 2, qid 0 00:29:51.208 [2024-11-02 14:45:43.177800] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefd8c0, cid 3, qid 0 00:29:51.208 [2024-11-02 14:45:43.177808] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefda40, cid 4, qid 0 00:29:51.208 [2024-11-02 14:45:43.177982] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:51.208 [2024-11-02 14:45:43.177994] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:51.208 [2024-11-02 14:45:43.178001] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:51.208 [2024-11-02 14:45:43.178008] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefda40) on tqpair=0xe93210 00:29:51.208 [2024-11-02 14:45:43.178015] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:29:51.208 [2024-11-02 14:45:43.178023] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:29:51.208 [2024-11-02 14:45:43.178037] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:29:51.208 [2024-11-02 14:45:43.178054] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:29:51.208 [2024-11-02 14:45:43.178066] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:51.208 [2024-11-02 14:45:43.178073] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:51.208 [2024-11-02 14:45:43.178079] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe93210) 00:29:51.208 [2024-11-02 14:45:43.178090] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:51.208 [2024-11-02 14:45:43.178115] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefda40, cid 4, qid 0 00:29:51.208 [2024-11-02 14:45:43.178268] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:51.208 [2024-11-02 14:45:43.178282] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:51.208 [2024-11-02 14:45:43.178289] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:51.208 [2024-11-02 14:45:43.178295] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefda40) on tqpair=0xe93210 00:29:51.208 [2024-11-02 14:45:43.178368] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:29:51.208 [2024-11-02 14:45:43.178388] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:29:51.208 [2024-11-02 14:45:43.178404] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:51.208 [2024-11-02 14:45:43.178411] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe93210) 00:29:51.208 [2024-11-02 14:45:43.178422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.208 [2024-11-02 14:45:43.178444] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefda40, cid 4, qid 0 00:29:51.208 [2024-11-02 14:45:43.178613] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:51.208 [2024-11-02 14:45:43.178629] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:51.208 [2024-11-02 14:45:43.178636] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:51.208 [2024-11-02 14:45:43.178642] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe93210): datao=0, datal=4096, cccid=4 00:29:51.208 [2024-11-02 14:45:43.178649] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xefda40) on tqpair(0xe93210): expected_datao=0, payload_size=4096 00:29:51.208 [2024-11-02 14:45:43.178657] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:51.208 [2024-11-02 14:45:43.178666] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:51.208 [2024-11-02 14:45:43.178674] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:51.208 [2024-11-02 14:45:43.178686] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:51.208 [2024-11-02 14:45:43.178695] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:29:51.208 [2024-11-02 14:45:43.178702] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:51.208 [2024-11-02 14:45:43.178708] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefda40) on tqpair=0xe93210 00:29:51.208 [2024-11-02 14:45:43.178725] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:29:51.208 [2024-11-02 14:45:43.178748] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:29:51.208 [2024-11-02 14:45:43.178767] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:29:51.208 [2024-11-02 14:45:43.178780] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:51.208 [2024-11-02 14:45:43.178788] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe93210) 00:29:51.208 [2024-11-02 14:45:43.178798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.208 [2024-11-02 14:45:43.178820] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefda40, cid 4, qid 0 00:29:51.208 [2024-11-02 14:45:43.179020] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:51.208 [2024-11-02 14:45:43.179036] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:51.208 [2024-11-02 14:45:43.179043] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:51.208 [2024-11-02 14:45:43.179049] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe93210): datao=0, datal=4096, cccid=4 00:29:51.208 [2024-11-02 14:45:43.179061] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xefda40) on tqpair(0xe93210): expected_datao=0, payload_size=4096 00:29:51.208 [2024-11-02 14:45:43.179068] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:51.208 [2024-11-02 14:45:43.179079] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:51.208 [2024-11-02 14:45:43.179086] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:51.208 [2024-11-02 14:45:43.221268] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:51.208 [2024-11-02 14:45:43.221288] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:51.208 [2024-11-02 14:45:43.221295] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:51.208 [2024-11-02 14:45:43.221302] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefda40) on tqpair=0xe93210 00:29:51.208 [2024-11-02 14:45:43.221339] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:29:51.208 [2024-11-02 14:45:43.221359] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:29:51.208 [2024-11-02 14:45:43.221373] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:51.208 [2024-11-02 14:45:43.221381] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe93210) 00:29:51.208 [2024-11-02 14:45:43.221392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.208 [2024-11-02 14:45:43.221415] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefda40, cid 4, qid 0 00:29:51.208 [2024-11-02 14:45:43.221600] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:51.208 [2024-11-02 14:45:43.221612] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:51.208 [2024-11-02 14:45:43.221619] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:51.208 [2024-11-02 14:45:43.221625] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe93210): datao=0, datal=4096, cccid=4 00:29:51.208 [2024-11-02 14:45:43.221633] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xefda40) on tqpair(0xe93210): expected_datao=0, payload_size=4096 00:29:51.208 [2024-11-02 14:45:43.221640] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:51.208 [2024-11-02 14:45:43.221656] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:51.208 [2024-11-02 14:45:43.221665] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:51.468 [2024-11-02 14:45:43.262399] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:51.468 [2024-11-02 14:45:43.262419] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:51.468 [2024-11-02 14:45:43.262427] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:51.468 [2024-11-02 14:45:43.262435] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefda40) on tqpair=0xe93210 00:29:51.468 [2024-11-02 14:45:43.262449] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:29:51.468 [2024-11-02 14:45:43.262464] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:29:51.468 [2024-11-02 14:45:43.262481] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:29:51.468 [2024-11-02 14:45:43.262494] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:29:51.468 [2024-11-02 14:45:43.262503] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:29:51.468 [2024-11-02 14:45:43.262512] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:29:51.468 [2024-11-02 14:45:43.262521] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:29:51.468 [2024-11-02 14:45:43.262533] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:29:51.468 [2024-11-02 14:45:43.262542] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:29:51.468 [2024-11-02 14:45:43.262561] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:51.468 [2024-11-02 14:45:43.262570] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe93210) 00:29:51.468 [2024-11-02 14:45:43.262581] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.468 [2024-11-02 14:45:43.262592] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:51.468 [2024-11-02 14:45:43.262599] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:51.468 [2024-11-02 14:45:43.262606] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xe93210) 00:29:51.468 [2024-11-02 14:45:43.262615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:51.468 [2024-11-02 14:45:43.262638] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefda40, cid 4, qid 0 00:29:51.468 [2024-11-02 14:45:43.262649] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefdbc0, cid 5, qid 0 00:29:51.468 [2024-11-02 14:45:43.262771] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:51.468 [2024-11-02 14:45:43.262786] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:51.468 [2024-11-02 14:45:43.262793] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:51.468 [2024-11-02 14:45:43.262799] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefda40) on tqpair=0xe93210 00:29:51.468 [2024-11-02 14:45:43.262809] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:51.468 [2024-11-02 14:45:43.262818] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:51.468 [2024-11-02 14:45:43.262825] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:51.468 [2024-11-02 14:45:43.262831] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefdbc0) on tqpair=0xe93210 00:29:51.468 [2024-11-02 14:45:43.262847] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:51.468 [2024-11-02 14:45:43.262855] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xe93210) 00:29:51.468 [2024-11-02 14:45:43.262866] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.468 [2024-11-02 14:45:43.262887] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefdbc0, cid 5, qid 0 00:29:51.468 [2024-11-02 14:45:43.263020] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:51.468 [2024-11-02 14:45:43.263035] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:51.468 [2024-11-02 14:45:43.263042] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:51.468 [2024-11-02 14:45:43.263049] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefdbc0) on tqpair=0xe93210 00:29:51.468 [2024-11-02 14:45:43.263064] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:51.468 [2024-11-02 14:45:43.263073] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xe93210) 00:29:51.468 [2024-11-02 14:45:43.263084] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.468 [2024-11-02 14:45:43.263104] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefdbc0, cid 5, qid 0 00:29:51.468 [2024-11-02 14:45:43.263215] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:51.468 [2024-11-02 14:45:43.263227] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:29:51.469 [2024-11-02 14:45:43.263234] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:51.469 [2024-11-02 14:45:43.263240] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefdbc0) on tqpair=0xe93210 00:29:51.469 [2024-11-02 14:45:43.263268] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:51.469 [2024-11-02 14:45:43.263279] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xe93210) 00:29:51.469 [2024-11-02 14:45:43.263290] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.469 [2024-11-02 14:45:43.263311] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefdbc0, cid 5, qid 0 00:29:51.469 [2024-11-02 14:45:43.263440] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:51.469 [2024-11-02 14:45:43.263455] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:51.469 [2024-11-02 14:45:43.263462] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:51.469 [2024-11-02 14:45:43.263468] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefdbc0) on tqpair=0xe93210 00:29:51.469 [2024-11-02 14:45:43.263493] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:51.469 [2024-11-02 14:45:43.263504] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xe93210) 00:29:51.469 [2024-11-02 14:45:43.263515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.469 [2024-11-02 14:45:43.263526] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:51.469 [2024-11-02 14:45:43.263534] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe93210) 00:29:51.469 [2024-11-02 14:45:43.263543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.469 [2024-11-02 14:45:43.263554] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:51.469 [2024-11-02 14:45:43.263561] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xe93210) 00:29:51.469 [2024-11-02 14:45:43.263570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.469 [2024-11-02 14:45:43.263585] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:51.469 [2024-11-02 14:45:43.263594] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xe93210) 00:29:51.469 [2024-11-02 14:45:43.263603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.469 [2024-11-02 14:45:43.263625] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefdbc0, cid 5, qid 0 00:29:51.469 [2024-11-02 14:45:43.263636] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefda40, cid 4, qid 0 00:29:51.469 [2024-11-02 14:45:43.263644] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefdd40, cid 6, qid 0 00:29:51.469 [2024-11-02 
14:45:43.263652] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefdec0, cid 7, qid 0 00:29:51.469 [2024-11-02 14:45:43.263945] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:51.469 [2024-11-02 14:45:43.263960] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:51.469 [2024-11-02 14:45:43.263967] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:51.469 [2024-11-02 14:45:43.263974] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe93210): datao=0, datal=8192, cccid=5 00:29:51.469 [2024-11-02 14:45:43.263981] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xefdbc0) on tqpair(0xe93210): expected_datao=0, payload_size=8192 00:29:51.469 [2024-11-02 14:45:43.263989] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:51.469 [2024-11-02 14:45:43.263999] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:51.469 [2024-11-02 14:45:43.264006] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:51.469 [2024-11-02 14:45:43.264015] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:51.469 [2024-11-02 14:45:43.264024] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:51.469 [2024-11-02 14:45:43.264034] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:51.469 [2024-11-02 14:45:43.264041] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe93210): datao=0, datal=512, cccid=4 00:29:51.469 [2024-11-02 14:45:43.264048] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xefda40) on tqpair(0xe93210): expected_datao=0, payload_size=512 00:29:51.469 [2024-11-02 14:45:43.264055] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:51.469 [2024-11-02 14:45:43.264064] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:51.469 [2024-11-02 14:45:43.264071] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:51.469 [2024-11-02 14:45:43.264079] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:51.469 [2024-11-02 14:45:43.264088] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:51.469 [2024-11-02 14:45:43.264095] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:51.469 [2024-11-02 14:45:43.264101] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe93210): datao=0, datal=512, cccid=6 00:29:51.469 [2024-11-02 14:45:43.264108] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xefdd40) on tqpair(0xe93210): expected_datao=0, payload_size=512 00:29:51.469 [2024-11-02 14:45:43.264115] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:51.469 [2024-11-02 14:45:43.264124] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:51.469 [2024-11-02 14:45:43.264131] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:51.469 [2024-11-02 14:45:43.264139] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:51.469 [2024-11-02 14:45:43.264148] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:51.469 [2024-11-02 14:45:43.264154] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:51.469 [2024-11-02 14:45:43.264160] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe93210): datao=0, datal=4096, cccid=7 00:29:51.469 [2024-11-02 14:45:43.264168] 
nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xefdec0) on tqpair(0xe93210): expected_datao=0, payload_size=4096 00:29:51.469 [2024-11-02 14:45:43.264175] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:51.469 [2024-11-02 14:45:43.264184] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:51.469 [2024-11-02 14:45:43.264191] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:51.469 [2024-11-02 14:45:43.264203] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:51.469 [2024-11-02 14:45:43.264212] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:51.469 [2024-11-02 14:45:43.264219] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:51.469 [2024-11-02 14:45:43.264225] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefdbc0) on tqpair=0xe93210 00:29:51.469 [2024-11-02 14:45:43.264243] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:51.469 [2024-11-02 14:45:43.264262] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:51.469 [2024-11-02 14:45:43.264271] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:51.469 [2024-11-02 14:45:43.264277] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefda40) on tqpair=0xe93210 00:29:51.469 [2024-11-02 14:45:43.264293] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:51.469 [2024-11-02 14:45:43.264303] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:51.469 [2024-11-02 14:45:43.264310] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:51.469 [2024-11-02 14:45:43.264316] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefdd40) on tqpair=0xe93210 00:29:51.469 [2024-11-02 14:45:43.264327] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:51.469 [2024-11-02 14:45:43.264336] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:51.469 [2024-11-02 14:45:43.264343] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:51.469 [2024-11-02 14:45:43.264349] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefdec0) on tqpair=0xe93210 00:29:51.469 ===================================================== 00:29:51.469 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:51.469 ===================================================== 00:29:51.469 Controller Capabilities/Features 00:29:51.469 ================================ 00:29:51.469 Vendor ID: 8086 00:29:51.469 Subsystem Vendor ID: 8086 00:29:51.469 Serial Number: SPDK00000000000001 00:29:51.469 Model Number: SPDK bdev Controller 00:29:51.469 Firmware Version: 24.09.1 00:29:51.469 Recommended Arb Burst: 6 00:29:51.469 IEEE OUI Identifier: e4 d2 5c 00:29:51.469 Multi-path I/O 00:29:51.469 May have multiple subsystem ports: Yes 00:29:51.469 May have multiple controllers: Yes 00:29:51.469 Associated with SR-IOV VF: No 00:29:51.469 Max Data Transfer Size: 131072 00:29:51.469 Max Number of Namespaces: 32 00:29:51.469 Max Number of I/O Queues: 127 00:29:51.469 NVMe Specification Version (VS): 1.3 00:29:51.469 NVMe Specification Version (Identify): 1.3 00:29:51.469 Maximum Queue Entries: 128 00:29:51.469 Contiguous Queues Required: Yes 00:29:51.469 Arbitration Mechanisms Supported 00:29:51.469 Weighted Round Robin: Not Supported 00:29:51.469 Vendor Specific: Not Supported 00:29:51.469 Reset Timeout: 15000 ms 00:29:51.469 
Doorbell Stride: 4 bytes 00:29:51.469 NVM Subsystem Reset: Not Supported 00:29:51.469 Command Sets Supported 00:29:51.469 NVM Command Set: Supported 00:29:51.469 Boot Partition: Not Supported 00:29:51.469 Memory Page Size Minimum: 4096 bytes 00:29:51.469 Memory Page Size Maximum: 4096 bytes 00:29:51.469 Persistent Memory Region: Not Supported 00:29:51.469 Optional Asynchronous Events Supported 00:29:51.469 Namespace Attribute Notices: Supported 00:29:51.469 Firmware Activation Notices: Not Supported 00:29:51.469 ANA Change Notices: Not Supported 00:29:51.469 PLE Aggregate Log Change Notices: Not Supported 00:29:51.469 LBA Status Info Alert Notices: Not Supported 00:29:51.469 EGE Aggregate Log Change Notices: Not Supported 00:29:51.469 Normal NVM Subsystem Shutdown event: Not Supported 00:29:51.469 Zone Descriptor Change Notices: Not Supported 00:29:51.469 Discovery Log Change Notices: Not Supported 00:29:51.469 Controller Attributes 00:29:51.469 128-bit Host Identifier: Supported 00:29:51.469 Non-Operational Permissive Mode: Not Supported 00:29:51.469 NVM Sets: Not Supported 00:29:51.469 Read Recovery Levels: Not Supported 00:29:51.469 Endurance Groups: Not Supported 00:29:51.469 Predictable Latency Mode: Not Supported 00:29:51.469 Traffic Based Keep ALive: Not Supported 00:29:51.469 Namespace Granularity: Not Supported 00:29:51.469 SQ Associations: Not Supported 00:29:51.469 UUID List: Not Supported 00:29:51.469 Multi-Domain Subsystem: Not Supported 00:29:51.469 Fixed Capacity Management: Not Supported 00:29:51.469 Variable Capacity Management: Not Supported 00:29:51.470 Delete Endurance Group: Not Supported 00:29:51.470 Delete NVM Set: Not Supported 00:29:51.470 Extended LBA Formats Supported: Not Supported 00:29:51.470 Flexible Data Placement Supported: Not Supported 00:29:51.470 00:29:51.470 Controller Memory Buffer Support 00:29:51.470 ================================ 00:29:51.470 Supported: No 00:29:51.470 00:29:51.470 Persistent Memory Region Support 00:29:51.470 ================================ 00:29:51.470 Supported: No 00:29:51.470 00:29:51.470 Admin Command Set Attributes 00:29:51.470 ============================ 00:29:51.470 Security Send/Receive: Not Supported 00:29:51.470 Format NVM: Not Supported 00:29:51.470 Firmware Activate/Download: Not Supported 00:29:51.470 Namespace Management: Not Supported 00:29:51.470 Device Self-Test: Not Supported 00:29:51.470 Directives: Not Supported 00:29:51.470 NVMe-MI: Not Supported 00:29:51.470 Virtualization Management: Not Supported 00:29:51.470 Doorbell Buffer Config: Not Supported 00:29:51.470 Get LBA Status Capability: Not Supported 00:29:51.470 Command & Feature Lockdown Capability: Not Supported 00:29:51.470 Abort Command Limit: 4 00:29:51.470 Async Event Request Limit: 4 00:29:51.470 Number of Firmware Slots: N/A 00:29:51.470 Firmware Slot 1 Read-Only: N/A 00:29:51.470 Firmware Activation Without Reset: N/A 00:29:51.470 Multiple Update Detection Support: N/A 00:29:51.470 Firmware Update Granularity: No Information Provided 00:29:51.470 Per-Namespace SMART Log: No 00:29:51.470 Asymmetric Namespace Access Log Page: Not Supported 00:29:51.470 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:29:51.470 Command Effects Log Page: Supported 00:29:51.470 Get Log Page Extended Data: Supported 00:29:51.470 Telemetry Log Pages: Not Supported 00:29:51.470 Persistent Event Log Pages: Not Supported 00:29:51.470 Supported Log Pages Log Page: May Support 00:29:51.470 Commands Supported & Effects Log Page: Not Supported 00:29:51.470 Feature Identifiers & 
Effects Log Page:May Support 00:29:51.470 NVMe-MI Commands & Effects Log Page: May Support 00:29:51.470 Data Area 4 for Telemetry Log: Not Supported 00:29:51.470 Error Log Page Entries Supported: 128 00:29:51.470 Keep Alive: Supported 00:29:51.470 Keep Alive Granularity: 10000 ms 00:29:51.470 00:29:51.470 NVM Command Set Attributes 00:29:51.470 ========================== 00:29:51.470 Submission Queue Entry Size 00:29:51.470 Max: 64 00:29:51.470 Min: 64 00:29:51.470 Completion Queue Entry Size 00:29:51.470 Max: 16 00:29:51.470 Min: 16 00:29:51.470 Number of Namespaces: 32 00:29:51.470 Compare Command: Supported 00:29:51.470 Write Uncorrectable Command: Not Supported 00:29:51.470 Dataset Management Command: Supported 00:29:51.470 Write Zeroes Command: Supported 00:29:51.470 Set Features Save Field: Not Supported 00:29:51.470 Reservations: Supported 00:29:51.470 Timestamp: Not Supported 00:29:51.470 Copy: Supported 00:29:51.470 Volatile Write Cache: Present 00:29:51.470 Atomic Write Unit (Normal): 1 00:29:51.470 Atomic Write Unit (PFail): 1 00:29:51.470 Atomic Compare & Write Unit: 1 00:29:51.470 Fused Compare & Write: Supported 00:29:51.470 Scatter-Gather List 00:29:51.470 SGL Command Set: Supported 00:29:51.470 SGL Keyed: Supported 00:29:51.470 SGL Bit Bucket Descriptor: Not Supported 00:29:51.470 SGL Metadata Pointer: Not Supported 00:29:51.470 Oversized SGL: Not Supported 00:29:51.470 SGL Metadata Address: Not Supported 00:29:51.470 SGL Offset: Supported 00:29:51.470 Transport SGL Data Block: Not Supported 00:29:51.470 Replay Protected Memory Block: Not Supported 00:29:51.470 00:29:51.470 Firmware Slot Information 00:29:51.470 ========================= 00:29:51.470 Active slot: 1 00:29:51.470 Slot 1 Firmware Revision: 24.09.1 00:29:51.470 00:29:51.470 00:29:51.470 Commands Supported and Effects 00:29:51.470 ============================== 00:29:51.470 Admin Commands 00:29:51.470 -------------- 00:29:51.470 Get Log Page (02h): Supported 00:29:51.470 Identify (06h): Supported 00:29:51.470 Abort (08h): Supported 00:29:51.470 Set Features (09h): Supported 00:29:51.470 Get Features (0Ah): Supported 00:29:51.470 Asynchronous Event Request (0Ch): Supported 00:29:51.470 Keep Alive (18h): Supported 00:29:51.470 I/O Commands 00:29:51.470 ------------ 00:29:51.470 Flush (00h): Supported LBA-Change 00:29:51.470 Write (01h): Supported LBA-Change 00:29:51.470 Read (02h): Supported 00:29:51.470 Compare (05h): Supported 00:29:51.470 Write Zeroes (08h): Supported LBA-Change 00:29:51.470 Dataset Management (09h): Supported LBA-Change 00:29:51.470 Copy (19h): Supported LBA-Change 00:29:51.470 00:29:51.470 Error Log 00:29:51.470 ========= 00:29:51.470 00:29:51.470 Arbitration 00:29:51.470 =========== 00:29:51.470 Arbitration Burst: 1 00:29:51.470 00:29:51.470 Power Management 00:29:51.470 ================ 00:29:51.470 Number of Power States: 1 00:29:51.470 Current Power State: Power State #0 00:29:51.470 Power State #0: 00:29:51.470 Max Power: 0.00 W 00:29:51.470 Non-Operational State: Operational 00:29:51.470 Entry Latency: Not Reported 00:29:51.470 Exit Latency: Not Reported 00:29:51.470 Relative Read Throughput: 0 00:29:51.470 Relative Read Latency: 0 00:29:51.470 Relative Write Throughput: 0 00:29:51.470 Relative Write Latency: 0 00:29:51.470 Idle Power: Not Reported 00:29:51.470 Active Power: Not Reported 00:29:51.470 Non-Operational Permissive Mode: Not Supported 00:29:51.470 00:29:51.470 Health Information 00:29:51.470 ================== 00:29:51.470 Critical Warnings: 00:29:51.470 Available Spare 
Space: OK 00:29:51.470 Temperature: OK 00:29:51.470 Device Reliability: OK 00:29:51.470 Read Only: No 00:29:51.470 Volatile Memory Backup: OK 00:29:51.470 Current Temperature: 0 Kelvin (-273 Celsius) 00:29:51.470 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:29:51.470 Available Spare: 0% 00:29:51.470 Available Spare Threshold: 0% 00:29:51.470 Life Percentage U[2024-11-02 14:45:43.264514] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:51.470 [2024-11-02 14:45:43.264545] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xe93210) 00:29:51.470 [2024-11-02 14:45:43.264557] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.470 [2024-11-02 14:45:43.264580] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefdec0, cid 7, qid 0 00:29:51.470 [2024-11-02 14:45:43.264744] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:51.470 [2024-11-02 14:45:43.264757] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:51.470 [2024-11-02 14:45:43.264764] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:51.470 [2024-11-02 14:45:43.264771] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefdec0) on tqpair=0xe93210 00:29:51.470 [2024-11-02 14:45:43.264814] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:29:51.470 [2024-11-02 14:45:43.264833] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefd440) on tqpair=0xe93210 00:29:51.470 [2024-11-02 14:45:43.264844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.470 [2024-11-02 14:45:43.264852] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefd5c0) on tqpair=0xe93210 00:29:51.470 [2024-11-02 14:45:43.264860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.470 [2024-11-02 14:45:43.264868] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefd740) on tqpair=0xe93210 00:29:51.470 [2024-11-02 14:45:43.264875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.470 [2024-11-02 14:45:43.264884] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefd8c0) on tqpair=0xe93210 00:29:51.470 [2024-11-02 14:45:43.264892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.470 [2024-11-02 14:45:43.264906] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:51.470 [2024-11-02 14:45:43.264915] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:51.470 [2024-11-02 14:45:43.264923] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe93210) 00:29:51.470 [2024-11-02 14:45:43.264934] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.470 [2024-11-02 14:45:43.264963] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefd8c0, cid 3, qid 0 00:29:51.470 [2024-11-02 14:45:43.265108] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:51.470 [2024-11-02 14:45:43.265131] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:51.470 [2024-11-02 14:45:43.265146] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:51.470 [2024-11-02 14:45:43.265159] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefd8c0) on tqpair=0xe93210 00:29:51.470 [2024-11-02 14:45:43.265177] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:51.470 [2024-11-02 14:45:43.265187] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:51.470 [2024-11-02 14:45:43.265193] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe93210) 00:29:51.470 [2024-11-02 14:45:43.265204] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.470 [2024-11-02 14:45:43.265245] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefd8c0, cid 3, qid 0 00:29:51.470 [2024-11-02 14:45:43.269282] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:51.470 [2024-11-02 14:45:43.269296] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:51.470 [2024-11-02 14:45:43.269318] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:51.470 [2024-11-02 14:45:43.269327] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefd8c0) on tqpair=0xe93210 00:29:51.471 [2024-11-02 14:45:43.269340] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:29:51.471 [2024-11-02 14:45:43.269349] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:29:51.471 [2024-11-02 14:45:43.269366] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:51.471 [2024-11-02 14:45:43.269376] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:51.471 [2024-11-02 14:45:43.269383] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe93210) 00:29:51.471 [2024-11-02 14:45:43.269394] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.471 [2024-11-02 14:45:43.269417] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefd8c0, cid 3, qid 0 00:29:51.471 [2024-11-02 14:45:43.269562] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:51.471 [2024-11-02 14:45:43.269577] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:51.471 [2024-11-02 14:45:43.269584] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:51.471 [2024-11-02 14:45:43.269591] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefd8c0) on tqpair=0xe93210 00:29:51.471 [2024-11-02 14:45:43.269605] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 0 milliseconds 00:29:51.471 sed: 0% 00:29:51.471 Data Units Read: 0 00:29:51.471 Data Units Written: 0 00:29:51.471 Host Read Commands: 0 00:29:51.471 Host Write Commands: 0 00:29:51.471 Controller Busy Time: 0 minutes 00:29:51.471 Power Cycles: 0 00:29:51.471 Power On Hours: 0 hours 00:29:51.471 Unsafe Shutdowns: 0 00:29:51.471 Unrecoverable Media Errors: 0 00:29:51.471 Lifetime Error Log Entries: 0 00:29:51.471 Warning Temperature Time: 0 minutes 00:29:51.471 Critical Temperature Time: 0 minutes 00:29:51.471 00:29:51.471 Number of Queues 00:29:51.471 
================ 00:29:51.471 Number of I/O Submission Queues: 127 00:29:51.471 Number of I/O Completion Queues: 127 00:29:51.471 00:29:51.471 Active Namespaces 00:29:51.471 ================= 00:29:51.471 Namespace ID:1 00:29:51.471 Error Recovery Timeout: Unlimited 00:29:51.471 Command Set Identifier: NVM (00h) 00:29:51.471 Deallocate: Supported 00:29:51.471 Deallocated/Unwritten Error: Not Supported 00:29:51.471 Deallocated Read Value: Unknown 00:29:51.471 Deallocate in Write Zeroes: Not Supported 00:29:51.471 Deallocated Guard Field: 0xFFFF 00:29:51.471 Flush: Supported 00:29:51.471 Reservation: Supported 00:29:51.471 Namespace Sharing Capabilities: Multiple Controllers 00:29:51.471 Size (in LBAs): 131072 (0GiB) 00:29:51.471 Capacity (in LBAs): 131072 (0GiB) 00:29:51.471 Utilization (in LBAs): 131072 (0GiB) 00:29:51.471 NGUID: ABCDEF0123456789ABCDEF0123456789 00:29:51.471 EUI64: ABCDEF0123456789 00:29:51.471 UUID: f6540a69-5c66-4557-a576-74d354af230e 00:29:51.471 Thin Provisioning: Not Supported 00:29:51.471 Per-NS Atomic Units: Yes 00:29:51.471 Atomic Boundary Size (Normal): 0 00:29:51.471 Atomic Boundary Size (PFail): 0 00:29:51.471 Atomic Boundary Offset: 0 00:29:51.471 Maximum Single Source Range Length: 65535 00:29:51.471 Maximum Copy Length: 65535 00:29:51.471 Maximum Source Range Count: 1 00:29:51.471 NGUID/EUI64 Never Reused: No 00:29:51.471 Namespace Write Protected: No 00:29:51.471 Number of LBA Formats: 1 00:29:51.471 Current LBA Format: LBA Format #00 00:29:51.471 LBA Format #00: Data Size: 512 Metadata Size: 0 00:29:51.471 00:29:51.471 14:45:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:29:51.471 14:45:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:51.471 14:45:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.471 14:45:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:51.471 14:45:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.471 14:45:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:29:51.471 14:45:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:29:51.471 14:45:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@512 -- # nvmfcleanup 00:29:51.471 14:45:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:29:51.471 14:45:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:51.471 14:45:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:29:51.471 14:45:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:51.471 14:45:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:51.471 rmmod nvme_tcp 00:29:51.471 rmmod nvme_fabrics 00:29:51.471 rmmod nvme_keyring 00:29:51.471 14:45:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:51.471 14:45:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:29:51.471 14:45:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:29:51.471 14:45:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@513 -- # '[' -n 1471904 ']' 00:29:51.471 14:45:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # killprocess 1471904 00:29:51.471 14:45:43 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@950 -- # '[' -z 1471904 ']' 00:29:51.471 14:45:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 1471904 00:29:51.471 14:45:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:29:51.471 14:45:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:51.471 14:45:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1471904 00:29:51.471 14:45:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:51.471 14:45:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:51.471 14:45:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1471904' 00:29:51.471 killing process with pid 1471904 00:29:51.471 14:45:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 1471904 00:29:51.471 14:45:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 1471904 00:29:51.730 14:45:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:29:51.730 14:45:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:29:51.730 14:45:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:29:51.730 14:45:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:29:51.730 14:45:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # iptables-save 00:29:51.730 14:45:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:29:51.730 14:45:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # iptables-restore 00:29:51.730 14:45:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:51.730 14:45:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:51.730 14:45:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:51.730 14:45:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:51.730 14:45:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:53.735 14:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:53.735 00:29:53.735 real 0m5.737s 00:29:53.735 user 0m4.846s 00:29:53.735 sys 0m2.054s 00:29:53.735 14:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:53.735 14:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:53.735 ************************************ 00:29:53.735 END TEST nvmf_identify 00:29:53.735 ************************************ 00:29:53.735 14:45:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:53.735 14:45:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:53.735 14:45:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:53.735 14:45:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.735 ************************************ 00:29:53.735 START TEST nvmf_perf 00:29:53.735 ************************************ 00:29:53.735 
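[editor's note] The perf test starting here drives I/O against an SPDK NVMe/TCP target much like the one the identify test above just queried. For reference, below is a minimal sketch (not the actual test script) of how such a subsystem is typically provisioned through SPDK's JSON-RPC client; the concrete values (64 MiB malloc bdev, 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1, serial SPDK00000000000001, listener 10.0.0.2:4420) are taken from this log, while the exact sequence and flags used by the real scripts may differ.

  # Sketch only: provision a TCP subsystem like the one exercised above.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp                              # enable the TCP transport
  $rpc bdev_malloc_create 64 512 -b Malloc0                      # 64 MiB RAM bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0  # shows up as Namespace ID 1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Tear-down, as the identify test does at the end:
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1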
14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:53.735 * Looking for test storage... 00:29:53.735 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:53.735 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:53.735 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lcov --version 00:29:53.735 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:53.995 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:53.995 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:53.995 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:53.995 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:53.995 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:29:53.995 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:29:53.995 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:29:53.995 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:29:53.995 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:29:53.995 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:29:53.995 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:29:53.995 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:53.995 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:29:53.995 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:29:53.995 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:53.995 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:53.995 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:29:53.995 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:29:53.995 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:53.995 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:29:53.995 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:29:53.995 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:29:53.995 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:29:53.995 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:53.995 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:29:53.995 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:29:53.995 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:53.995 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:53.995 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:29:53.995 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:53.995 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:53.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:53.995 --rc genhtml_branch_coverage=1 00:29:53.995 --rc genhtml_function_coverage=1 00:29:53.995 --rc genhtml_legend=1 00:29:53.995 --rc geninfo_all_blocks=1 00:29:53.995 --rc geninfo_unexecuted_blocks=1 00:29:53.995 00:29:53.995 ' 00:29:53.995 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:53.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:53.995 --rc genhtml_branch_coverage=1 00:29:53.995 --rc genhtml_function_coverage=1 00:29:53.995 --rc genhtml_legend=1 00:29:53.995 --rc geninfo_all_blocks=1 00:29:53.995 --rc geninfo_unexecuted_blocks=1 00:29:53.995 00:29:53.995 ' 00:29:53.995 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:53.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:53.995 --rc genhtml_branch_coverage=1 00:29:53.995 --rc genhtml_function_coverage=1 00:29:53.995 --rc genhtml_legend=1 00:29:53.995 --rc geninfo_all_blocks=1 00:29:53.995 --rc geninfo_unexecuted_blocks=1 00:29:53.995 00:29:53.995 ' 00:29:53.995 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:53.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:53.995 --rc genhtml_branch_coverage=1 00:29:53.995 --rc genhtml_function_coverage=1 00:29:53.995 --rc genhtml_legend=1 00:29:53.995 --rc geninfo_all_blocks=1 00:29:53.995 --rc geninfo_unexecuted_blocks=1 00:29:53.995 00:29:53.995 ' 00:29:53.995 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:53.995 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:29:53.995 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:53.995 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:53.995 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:53.995 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:53.995 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:53.995 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:53.995 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:53.995 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:53.995 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:53.995 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:53.995 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:53.995 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:53.995 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:53.995 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:53.995 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:53.995 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:53.995 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:53.995 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:29:53.995 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:53.996 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:53.996 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:53.996 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.996 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.996 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.996 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:29:53.996 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.996 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:29:53.996 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:53.996 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:53.996 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:53.996 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:53.996 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:53.996 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:53.996 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:53.996 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:53.996 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:53.996 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:53.996 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:53.996 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:53.996 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:53.996 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:29:53.996 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:29:53.996 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:53.996 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # prepare_net_devs 00:29:53.996 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:29:53.996 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:29:53.996 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:53.996 14:45:45 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:53.996 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:53.996 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:29:53.996 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:29:53.996 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:29:53.996 14:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:55.900 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:55.900 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:29:55.900 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:55.900 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:55.900 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:55.900 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:55.900 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:55.900 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:29:55.900 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:55.900 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:29:55.900 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:29:55.900 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:29:55.900 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:29:55.900 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:29:55.900 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:29:55.900 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:55.900 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:55.900 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:55.900 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:55.900 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:55.900 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:55.900 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:55.900 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:55.900 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:55.900 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:55.900 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:55.900 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:29:55.900 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 
00:29:55.900 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:29:55.900 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:29:55.900 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:29:55.900 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:29:55.900 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:55.900 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:55.900 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:55.900 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:55.900 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:55.900 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:55.900 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:55.900 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:55.900 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:55.900 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:55.900 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:55.900 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:55.900 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:55.900 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:55.900 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:55.900 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:55.900 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:29:55.900 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:29:55.900 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:29:55.900 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:55.900 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:55.900 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:55.900 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:55.900 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:55.900 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:55.900 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:55.900 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:55.900 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:55.900 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:55.900 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:55.900 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:55.901 14:45:47 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:55.901 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:55.901 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:55.901 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:55.901 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:55.901 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:55.901 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:55.901 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:55.901 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:29:55.901 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # is_hw=yes 00:29:55.901 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:29:55.901 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:29:55.901 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:29:55.901 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:55.901 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:55.901 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:55.901 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:55.901 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:55.901 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:55.901 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:55.901 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:55.901 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:55.901 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:55.901 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:55.901 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:55.901 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:55.901 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:55.901 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:55.901 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:55.901 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:55.901 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:55.901 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:55.901 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:55.901 14:45:47 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:55.901 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:55.901 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:55.901 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:55.901 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:29:55.901 00:29:55.901 --- 10.0.0.2 ping statistics --- 00:29:55.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:55.901 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:29:55.901 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:55.901 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:55.901 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:29:55.901 00:29:55.901 --- 10.0.0.1 ping statistics --- 00:29:55.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:55.901 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:29:55.901 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:55.901 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # return 0 00:29:55.901 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:29:55.901 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:55.901 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:29:55.901 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:29:55.901 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:55.901 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:29:55.901 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:29:55.901 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:29:55.901 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:29:55.901 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:55.901 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:55.901 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # nvmfpid=1473992 00:29:55.901 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:55.901 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # waitforlisten 1473992 00:29:55.901 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 1473992 ']' 00:29:55.901 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:55.901 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:55.901 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:55.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
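The sequence above splits the dual-port NIC into a target side and an initiator side: cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace and addressed as 10.0.0.2/24, cvl_0_1 stays in the root namespace as 10.0.0.1/24, an iptables ACCEPT rule opens TCP port 4420 on the initiator interface, connectivity is verified with one ping in each direction, and nvmf_tgt is then launched inside the namespace (pid 1473992 here). Condensed into one place as a sketch, using only the interface names, addresses and arguments recorded in the trace:

    # Condensed from the trace: one NIC port per role, separated by a network namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target port
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator port, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                        # root namespace -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target namespace -> initiator
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

Everything that follows (the rpc.py nvmf_create_transport, nvmf_create_subsystem, nvmf_subsystem_add_ns and nvmf_subsystem_add_listener calls, and the spdk_nvme_perf runs) talks to that target either over /var/tmp/spdk.sock or over 10.0.0.2:4420.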
00:29:55.901 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:55.901 14:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:56.161 [2024-11-02 14:45:47.987558] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:29:56.161 [2024-11-02 14:45:47.987651] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:56.161 [2024-11-02 14:45:48.050860] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:56.161 [2024-11-02 14:45:48.136704] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:56.161 [2024-11-02 14:45:48.136760] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:56.161 [2024-11-02 14:45:48.136788] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:56.161 [2024-11-02 14:45:48.136799] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:56.161 [2024-11-02 14:45:48.136809] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:56.161 [2024-11-02 14:45:48.136901] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:56.161 [2024-11-02 14:45:48.136964] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:29:56.161 [2024-11-02 14:45:48.137029] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:29:56.161 [2024-11-02 14:45:48.137032] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:56.420 14:45:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:56.420 14:45:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:29:56.420 14:45:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:29:56.420 14:45:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:56.420 14:45:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:56.420 14:45:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:56.420 14:45:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:56.420 14:45:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:29:59.705 14:45:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:29:59.705 14:45:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:29:59.705 14:45:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:29:59.705 14:45:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:59.964 14:45:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:29:59.964 14:45:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:29:59.964 14:45:51 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:29:59.964 14:45:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:29:59.964 14:45:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:30:00.221 [2024-11-02 14:45:52.227333] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:00.222 14:45:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:00.479 14:45:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:00.479 14:45:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:00.737 14:45:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:00.737 14:45:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:00.996 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:01.563 [2024-11-02 14:45:53.323399] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:01.563 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:01.563 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:30:01.563 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:30:01.563 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:30:01.563 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:30:02.943 Initializing NVMe Controllers 00:30:02.943 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:30:02.943 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:30:02.943 Initialization complete. Launching workers. 
00:30:02.943 ======================================================== 00:30:02.943 Latency(us) 00:30:02.943 Device Information : IOPS MiB/s Average min max 00:30:02.943 PCIE (0000:88:00.0) NSID 1 from core 0: 85116.96 332.49 375.32 27.89 6263.16 00:30:02.943 ======================================================== 00:30:02.943 Total : 85116.96 332.49 375.32 27.89 6263.16 00:30:02.943 00:30:02.943 14:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:04.319 Initializing NVMe Controllers 00:30:04.319 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:04.319 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:04.319 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:04.319 Initialization complete. Launching workers. 00:30:04.319 ======================================================== 00:30:04.319 Latency(us) 00:30:04.319 Device Information : IOPS MiB/s Average min max 00:30:04.319 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 110.94 0.43 9013.62 174.79 45530.40 00:30:04.319 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 54.97 0.21 18916.16 7949.54 50872.21 00:30:04.319 ======================================================== 00:30:04.319 Total : 165.91 0.65 12294.58 174.79 50872.21 00:30:04.319 00:30:04.319 14:45:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:05.256 Initializing NVMe Controllers 00:30:05.256 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:05.256 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:05.256 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:05.256 Initialization complete. Launching workers. 00:30:05.256 ======================================================== 00:30:05.256 Latency(us) 00:30:05.256 Device Information : IOPS MiB/s Average min max 00:30:05.256 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8311.98 32.47 3862.54 612.00 10464.99 00:30:05.256 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3808.99 14.88 8449.54 4945.01 20231.32 00:30:05.256 ======================================================== 00:30:05.256 Total : 12120.98 47.35 5303.99 612.00 20231.32 00:30:05.256 00:30:05.514 14:45:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:30:05.514 14:45:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:30:05.514 14:45:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:08.046 Initializing NVMe Controllers 00:30:08.046 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:08.046 Controller IO queue size 128, less than required. 00:30:08.046 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:30:08.046 Controller IO queue size 128, less than required. 00:30:08.046 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:08.046 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:08.046 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:08.046 Initialization complete. Launching workers. 00:30:08.046 ======================================================== 00:30:08.046 Latency(us) 00:30:08.046 Device Information : IOPS MiB/s Average min max 00:30:08.046 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1188.15 297.04 109920.64 69463.87 158866.94 00:30:08.046 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 605.81 151.45 218116.12 103248.66 342633.08 00:30:08.046 ======================================================== 00:30:08.046 Total : 1793.96 448.49 146457.70 69463.87 342633.08 00:30:08.046 00:30:08.046 14:45:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:30:08.305 No valid NVMe controllers or AIO or URING devices found 00:30:08.305 Initializing NVMe Controllers 00:30:08.305 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:08.305 Controller IO queue size 128, less than required. 00:30:08.305 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:08.305 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:30:08.305 Controller IO queue size 128, less than required. 00:30:08.305 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:08.305 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:30:08.305 WARNING: Some requested NVMe devices were skipped 00:30:08.305 14:46:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:30:11.589 Initializing NVMe Controllers 00:30:11.589 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:11.589 Controller IO queue size 128, less than required. 00:30:11.589 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:11.589 Controller IO queue size 128, less than required. 00:30:11.589 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:11.589 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:11.589 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:11.589 Initialization complete. Launching workers. 
00:30:11.589 00:30:11.589 ==================== 00:30:11.589 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:30:11.589 TCP transport: 00:30:11.589 polls: 16558 00:30:11.589 idle_polls: 6240 00:30:11.589 sock_completions: 10318 00:30:11.589 nvme_completions: 4807 00:30:11.589 submitted_requests: 7194 00:30:11.589 queued_requests: 1 00:30:11.589 00:30:11.589 ==================== 00:30:11.589 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:30:11.589 TCP transport: 00:30:11.589 polls: 20958 00:30:11.589 idle_polls: 10018 00:30:11.589 sock_completions: 10940 00:30:11.589 nvme_completions: 5103 00:30:11.589 submitted_requests: 7642 00:30:11.589 queued_requests: 1 00:30:11.589 ======================================================== 00:30:11.589 Latency(us) 00:30:11.589 Device Information : IOPS MiB/s Average min max 00:30:11.589 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1201.39 300.35 108230.97 51953.64 182539.26 00:30:11.589 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1275.39 318.85 102197.05 47937.07 162769.66 00:30:11.589 ======================================================== 00:30:11.589 Total : 2476.78 619.19 105123.88 47937.07 182539.26 00:30:11.589 00:30:11.589 14:46:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:30:11.589 14:46:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:11.589 14:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:30:11.589 14:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:30:11.589 14:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:30:14.878 14:46:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=0e54540c-861f-4a61-8107-f9852773ed60 00:30:14.878 14:46:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 0e54540c-861f-4a61-8107-f9852773ed60 00:30:14.878 14:46:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=0e54540c-861f-4a61-8107-f9852773ed60 00:30:14.878 14:46:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:30:14.878 14:46:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:30:14.878 14:46:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:30:14.878 14:46:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:15.136 14:46:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:30:15.136 { 00:30:15.136 "uuid": "0e54540c-861f-4a61-8107-f9852773ed60", 00:30:15.136 "name": "lvs_0", 00:30:15.136 "base_bdev": "Nvme0n1", 00:30:15.136 "total_data_clusters": 238234, 00:30:15.136 "free_clusters": 238234, 00:30:15.136 "block_size": 512, 00:30:15.136 "cluster_size": 4194304 00:30:15.136 } 00:30:15.136 ]' 00:30:15.136 14:46:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="0e54540c-861f-4a61-8107-f9852773ed60") .free_clusters' 00:30:15.136 14:46:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=238234 00:30:15.136 14:46:06 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="0e54540c-861f-4a61-8107-f9852773ed60") .cluster_size' 00:30:15.136 14:46:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:30:15.136 14:46:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=952936 00:30:15.136 14:46:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 952936 00:30:15.136 952936 00:30:15.136 14:46:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:30:15.136 14:46:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:30:15.136 14:46:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0e54540c-861f-4a61-8107-f9852773ed60 lbd_0 20480 00:30:15.703 14:46:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=0ff8b93c-f679-49b6-b8a0-b2a71b668e69 00:30:15.703 14:46:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 0ff8b93c-f679-49b6-b8a0-b2a71b668e69 lvs_n_0 00:30:16.270 14:46:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=6ea89579-d635-4555-a845-c305ee1d3cb0 00:30:16.270 14:46:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 6ea89579-d635-4555-a845-c305ee1d3cb0 00:30:16.270 14:46:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=6ea89579-d635-4555-a845-c305ee1d3cb0 00:30:16.270 14:46:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:30:16.270 14:46:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:30:16.270 14:46:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:30:16.270 14:46:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:16.528 14:46:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:30:16.528 { 00:30:16.528 "uuid": "0e54540c-861f-4a61-8107-f9852773ed60", 00:30:16.528 "name": "lvs_0", 00:30:16.528 "base_bdev": "Nvme0n1", 00:30:16.528 "total_data_clusters": 238234, 00:30:16.528 "free_clusters": 233114, 00:30:16.528 "block_size": 512, 00:30:16.528 "cluster_size": 4194304 00:30:16.528 }, 00:30:16.528 { 00:30:16.528 "uuid": "6ea89579-d635-4555-a845-c305ee1d3cb0", 00:30:16.528 "name": "lvs_n_0", 00:30:16.528 "base_bdev": "0ff8b93c-f679-49b6-b8a0-b2a71b668e69", 00:30:16.528 "total_data_clusters": 5114, 00:30:16.528 "free_clusters": 5114, 00:30:16.528 "block_size": 512, 00:30:16.528 "cluster_size": 4194304 00:30:16.528 } 00:30:16.528 ]' 00:30:16.528 14:46:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="6ea89579-d635-4555-a845-c305ee1d3cb0") .free_clusters' 00:30:16.528 14:46:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=5114 00:30:16.528 14:46:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="6ea89579-d635-4555-a845-c305ee1d3cb0") .cluster_size' 00:30:16.786 14:46:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:30:16.786 14:46:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=20456 00:30:16.786 14:46:08 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@1374 -- # echo 20456 00:30:16.786 20456 00:30:16.786 14:46:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:30:16.786 14:46:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6ea89579-d635-4555-a845-c305ee1d3cb0 lbd_nest_0 20456 00:30:17.044 14:46:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=b195908b-275f-4fd8-8cdb-8ea1525c3e8f 00:30:17.045 14:46:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:17.302 14:46:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:30:17.302 14:46:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 b195908b-275f-4fd8-8cdb-8ea1525c3e8f 00:30:17.560 14:46:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:17.819 14:46:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:30:17.819 14:46:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:30:17.819 14:46:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:17.819 14:46:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:17.819 14:46:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:30.028 Initializing NVMe Controllers 00:30:30.028 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:30.028 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:30.028 Initialization complete. Launching workers. 00:30:30.028 ======================================================== 00:30:30.028 Latency(us) 00:30:30.028 Device Information : IOPS MiB/s Average min max 00:30:30.028 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 45.70 0.02 21921.98 210.70 47113.35 00:30:30.028 ======================================================== 00:30:30.028 Total : 45.70 0.02 21921.98 210.70 47113.35 00:30:30.028 00:30:30.028 14:46:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:30.028 14:46:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:40.000 Initializing NVMe Controllers 00:30:40.000 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:40.000 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:40.000 Initialization complete. Launching workers. 
00:30:40.000 ======================================================== 00:30:40.000 Latency(us) 00:30:40.000 Device Information : IOPS MiB/s Average min max 00:30:40.000 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 71.90 8.99 13915.24 4567.82 51889.05 00:30:40.000 ======================================================== 00:30:40.000 Total : 71.90 8.99 13915.24 4567.82 51889.05 00:30:40.000 00:30:40.000 14:46:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:40.000 14:46:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:40.000 14:46:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:49.975 Initializing NVMe Controllers 00:30:49.975 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:49.975 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:49.975 Initialization complete. Launching workers. 00:30:49.975 ======================================================== 00:30:49.975 Latency(us) 00:30:49.975 Device Information : IOPS MiB/s Average min max 00:30:49.975 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7157.39 3.49 4470.47 293.17 12024.21 00:30:49.975 ======================================================== 00:30:49.975 Total : 7157.39 3.49 4470.47 293.17 12024.21 00:30:49.975 00:30:49.975 14:46:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:49.975 14:46:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:00.016 Initializing NVMe Controllers 00:31:00.016 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:00.016 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:00.016 Initialization complete. Launching workers. 00:31:00.016 ======================================================== 00:31:00.016 Latency(us) 00:31:00.016 Device Information : IOPS MiB/s Average min max 00:31:00.016 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2428.08 303.51 13189.64 704.43 28530.24 00:31:00.016 ======================================================== 00:31:00.016 Total : 2428.08 303.51 13189.64 704.43 28530.24 00:31:00.016 00:31:00.016 14:46:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:00.016 14:46:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:00.016 14:46:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:09.993 Initializing NVMe Controllers 00:31:09.993 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:09.993 Controller IO queue size 128, less than required. 00:31:09.993 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:31:09.993 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:09.993 Initialization complete. Launching workers. 00:31:09.993 ======================================================== 00:31:09.993 Latency(us) 00:31:09.993 Device Information : IOPS MiB/s Average min max 00:31:09.993 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11765.46 5.74 10879.59 1860.27 30337.16 00:31:09.993 ======================================================== 00:31:09.993 Total : 11765.46 5.74 10879.59 1860.27 30337.16 00:31:09.993 00:31:09.993 14:47:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:09.993 14:47:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:19.966 Initializing NVMe Controllers 00:31:19.966 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:19.966 Controller IO queue size 128, less than required. 00:31:19.966 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:19.966 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:19.966 Initialization complete. Launching workers. 00:31:19.966 ======================================================== 00:31:19.966 Latency(us) 00:31:19.966 Device Information : IOPS MiB/s Average min max 00:31:19.967 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1192.60 149.07 107763.25 15891.42 235507.01 00:31:19.967 ======================================================== 00:31:19.967 Total : 1192.60 149.07 107763.25 15891.42 235507.01 00:31:19.967 00:31:20.224 14:47:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:20.483 14:47:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b195908b-275f-4fd8-8cdb-8ea1525c3e8f 00:31:21.049 14:47:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:31:21.618 14:47:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0ff8b93c-f679-49b6-b8a0-b2a71b668e69 00:31:21.877 14:47:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:31:22.134 14:47:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:31:22.134 14:47:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:31:22.134 14:47:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # nvmfcleanup 00:31:22.134 14:47:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:31:22.134 14:47:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:22.134 14:47:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:31:22.134 14:47:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:22.134 14:47:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:22.134 rmmod nvme_tcp 
00:31:22.134 rmmod nvme_fabrics 00:31:22.134 rmmod nvme_keyring 00:31:22.134 14:47:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:22.134 14:47:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:31:22.134 14:47:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:31:22.134 14:47:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@513 -- # '[' -n 1473992 ']' 00:31:22.134 14:47:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # killprocess 1473992 00:31:22.134 14:47:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 1473992 ']' 00:31:22.134 14:47:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 1473992 00:31:22.134 14:47:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:31:22.134 14:47:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:22.134 14:47:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1473992 00:31:22.134 14:47:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:22.134 14:47:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:22.135 14:47:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1473992' 00:31:22.135 killing process with pid 1473992 00:31:22.135 14:47:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 1473992 00:31:22.135 14:47:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 1473992 00:31:24.043 14:47:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:31:24.043 14:47:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:31:24.043 14:47:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:31:24.043 14:47:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:31:24.043 14:47:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # iptables-save 00:31:24.043 14:47:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:31:24.043 14:47:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # iptables-restore 00:31:24.043 14:47:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:24.043 14:47:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:24.043 14:47:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:24.043 14:47:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:24.043 14:47:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:25.965 14:47:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:25.965 00:31:25.965 real 1m32.024s 00:31:25.965 user 5m39.345s 00:31:25.965 sys 0m15.503s 00:31:25.965 14:47:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:25.965 14:47:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:25.965 ************************************ 00:31:25.965 END TEST nvmf_perf 00:31:25.965 ************************************ 00:31:25.965 14:47:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:25.965 14:47:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:25.965 14:47:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:25.965 14:47:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:25.965 ************************************ 00:31:25.965 START TEST nvmf_fio_host 00:31:25.965 ************************************ 00:31:25.965 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:25.965 * Looking for test storage... 00:31:25.965 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:25.965 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:31:25.965 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lcov --version 00:31:25.965 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:31:25.965 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:31:25.965 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:25.965 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:25.965 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:25.965 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:31:25.965 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:31:25.965 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:31:25.965 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:31:25.965 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:31:25.965 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:31:25.965 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:31:25.965 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:25.965 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:31:25.965 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:31:25.965 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:25.965 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:25.965 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:31:25.965 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:31:25.965 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:25.965 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:31:25.965 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:31:25.965 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:31:25.965 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:31:25.965 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:25.965 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:31:25.965 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:31:25.965 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:25.965 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:25.965 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:31:25.965 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:25.965 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:25.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:25.965 --rc genhtml_branch_coverage=1 00:31:25.965 --rc genhtml_function_coverage=1 00:31:25.965 --rc genhtml_legend=1 00:31:25.965 --rc geninfo_all_blocks=1 00:31:25.965 --rc geninfo_unexecuted_blocks=1 00:31:25.965 00:31:25.965 ' 00:31:25.965 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:31:25.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:25.965 --rc genhtml_branch_coverage=1 00:31:25.965 --rc genhtml_function_coverage=1 00:31:25.965 --rc genhtml_legend=1 00:31:25.965 --rc geninfo_all_blocks=1 00:31:25.965 --rc geninfo_unexecuted_blocks=1 00:31:25.965 00:31:25.965 ' 00:31:25.965 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:25.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:25.965 --rc genhtml_branch_coverage=1 00:31:25.965 --rc genhtml_function_coverage=1 00:31:25.965 --rc genhtml_legend=1 00:31:25.965 --rc geninfo_all_blocks=1 00:31:25.965 --rc geninfo_unexecuted_blocks=1 00:31:25.965 00:31:25.965 ' 00:31:25.965 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:25.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:25.965 --rc genhtml_branch_coverage=1 00:31:25.965 --rc genhtml_function_coverage=1 00:31:25.965 --rc genhtml_legend=1 00:31:25.965 --rc geninfo_all_blocks=1 00:31:25.965 --rc geninfo_unexecuted_blocks=1 00:31:25.965 00:31:25.965 ' 00:31:25.965 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:25.965 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:25.965 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:25.965 14:47:17 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:25.965 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:25.965 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.965 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.965 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.965 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:25.965 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.965 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:25.965 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:31:25.965 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:25.965 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:25.965 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:31:25.965 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:25.965 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:25.965 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:25.965 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:25.965 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:25.965 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:25.965 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:25.965 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:25.965 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:25.965 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:25.965 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:25.965 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:25.965 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:25.965 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:25.966 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:25.966 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:25.966 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:25.966 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:25.966 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.966 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.966 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.966 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:25.966 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.966 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:31:25.966 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:25.966 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:25.966 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:25.966 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:25.966 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:25.966 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:25.966 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:25.966 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:25.966 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:25.966 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:25.966 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:25.966 
14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:31:25.966 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:31:25.966 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:25.966 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # prepare_net_devs 00:31:25.966 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@434 -- # local -g is_hw=no 00:31:25.966 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # remove_spdk_ns 00:31:25.966 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:25.966 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:25.966 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:25.966 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:31:25.966 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:31:25.966 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:31:25.966 14:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:27.868 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:27.868 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 
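The trace above is nvmf/common.sh building its allow-list of supported NICs (Intel E810/X722 and Mellanox device IDs) and then resolving each matched PCI function to its kernel interface by globbing sysfs. A minimal standalone sketch of that lookup follows; the helper name pci_to_netdev is ours, and the device addresses are simply the two E810 ports this run detected, so treat it as an illustration of the technique rather than part of the SPDK scripts.

    #!/usr/bin/env bash
    # Resolve the kernel net device(s) behind a PCI function, mirroring the
    # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) glob used in the trace.
    pci_to_netdev() {
        local bdf=$1 d
        for d in "/sys/bus/pci/devices/$bdf/net/"*; do
            [[ -e $d ]] && echo "${d##*/}"   # strip the sysfs path, keep the ifname
        done
    }

    # The two Intel E810 ports (vendor 0x8086, device 0x159b) found in this run:
    for bdf in 0000:0a:00.0 0000:0a:00.1; do
        echo "Found $bdf -> $(pci_to_netdev "$bdf")"
    done

On this host the loop would print cvl_0_0 and cvl_0_1, which is exactly what the "Found net devices under 0000:0a:00.x" lines below report.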
00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ up == up ]] 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:27.868 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ up == up ]] 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:27.868 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # is_hw=yes 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:27.868 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:28.127 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:28.127 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:28.127 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:28.127 14:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:28.127 14:47:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:28.127 14:47:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:28.127 14:47:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:28.127 14:47:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:28.127 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:28.127 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:31:28.127 00:31:28.127 --- 10.0.0.2 ping statistics --- 00:31:28.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:28.127 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:31:28.127 14:47:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:28.127 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:28.127 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:31:28.127 00:31:28.127 --- 10.0.0.1 ping statistics --- 00:31:28.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:28.127 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:31:28.127 14:47:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:28.127 14:47:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # return 0 00:31:28.127 14:47:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:31:28.127 14:47:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:28.127 14:47:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:31:28.127 14:47:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:31:28.127 14:47:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:28.127 14:47:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:31:28.127 14:47:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:31:28.127 14:47:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:31:28.127 14:47:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:31:28.127 14:47:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:28.127 14:47:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.127 14:47:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1486092 00:31:28.127 14:47:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:28.127 14:47:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:28.127 14:47:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1486092 00:31:28.127 14:47:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 1486092 ']' 00:31:28.127 14:47:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:28.127 14:47:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:28.127 14:47:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:28.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:28.127 14:47:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:28.127 14:47:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.127 [2024-11-02 14:47:20.101387] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
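At this point nvmf_tcp_init has finished wiring the test topology and host/fio.sh is starting the target inside it: one physical port (cvl_0_0) lives in the cvl_0_0_ns_spdk namespace as 10.0.0.2 and acts as the NVMe/TCP target, its sibling (cvl_0_1) stays in the root namespace as 10.0.0.1 and acts as the initiator, and TCP/4420 is opened in iptables. A condensed sketch of that setup, assuming the interface names this particular run found and an nvmf_tgt path relative to an SPDK build tree, would look like:

    # Target-side port goes into a namespace, initiator-side port stays put.
    NS=cvl_0_0_ns_spdk TGT_IF=cvl_0_0 INI_IF=cvl_0_1
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INI_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
    # The target itself runs inside the namespace; rpc.py talks to it over
    # /var/tmp/spdk.sock, which is what waitforlisten polls for.
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

The ping checks in the log (10.0.0.2 from the root namespace, 10.0.0.1 from inside the namespace) are simply verifying that this plumbing is bidirectional before any NVMe traffic is attempted.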
00:31:28.127 [2024-11-02 14:47:20.101506] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:28.385 [2024-11-02 14:47:20.189330] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:28.385 [2024-11-02 14:47:20.283519] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:28.385 [2024-11-02 14:47:20.283594] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:28.386 [2024-11-02 14:47:20.283628] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:28.386 [2024-11-02 14:47:20.283653] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:28.386 [2024-11-02 14:47:20.283668] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:28.386 [2024-11-02 14:47:20.283728] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:31:28.386 [2024-11-02 14:47:20.283793] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:31:28.386 [2024-11-02 14:47:20.283874] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:31:28.386 [2024-11-02 14:47:20.283880] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:31:28.386 14:47:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:28.386 14:47:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:31:28.386 14:47:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:28.952 [2024-11-02 14:47:20.722053] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:28.952 14:47:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:31:28.952 14:47:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:28.952 14:47:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.952 14:47:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:31:29.210 Malloc1 00:31:29.210 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:29.468 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:29.726 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:29.983 [2024-11-02 14:47:21.979574] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:29.983 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:30.548 14:47:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:31:30.548 14:47:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:30.548 14:47:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:30.548 14:47:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:30.548 14:47:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:30.548 14:47:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:30.548 14:47:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:30.548 14:47:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:31:30.548 14:47:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:30.548 14:47:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:30.548 14:47:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:30.548 14:47:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:31:30.548 14:47:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:30.548 14:47:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:30.549 14:47:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:30.549 14:47:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:30.549 14:47:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:30.549 14:47:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:30.549 14:47:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:30.549 14:47:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:30.549 14:47:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:30.549 14:47:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:30.549 14:47:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:30.549 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:30.549 fio-3.35 00:31:30.549 Starting 1 thread 00:31:33.078 00:31:33.078 test: (groupid=0, jobs=1): 
err= 0: pid=1486457: Sat Nov 2 14:47:24 2024 00:31:33.078 read: IOPS=8825, BW=34.5MiB/s (36.1MB/s)(69.2MiB/2007msec) 00:31:33.078 slat (nsec): min=1874, max=115378, avg=2668.78, stdev=1515.47 00:31:33.078 clat (usec): min=2123, max=14527, avg=7997.17, stdev=607.42 00:31:33.078 lat (usec): min=2147, max=14530, avg=7999.84, stdev=607.33 00:31:33.078 clat percentiles (usec): 00:31:33.078 | 1.00th=[ 6652], 5.00th=[ 7046], 10.00th=[ 7308], 20.00th=[ 7504], 00:31:33.078 | 30.00th=[ 7701], 40.00th=[ 7832], 50.00th=[ 8029], 60.00th=[ 8160], 00:31:33.078 | 70.00th=[ 8291], 80.00th=[ 8455], 90.00th=[ 8717], 95.00th=[ 8848], 00:31:33.078 | 99.00th=[ 9241], 99.50th=[ 9372], 99.90th=[11863], 99.95th=[13435], 00:31:33.078 | 99.99th=[14484] 00:31:33.078 bw ( KiB/s): min=34360, max=36000, per=99.96%, avg=35290.00, stdev=686.74, samples=4 00:31:33.078 iops : min= 8590, max= 9000, avg=8822.50, stdev=171.68, samples=4 00:31:33.078 write: IOPS=8838, BW=34.5MiB/s (36.2MB/s)(69.3MiB/2007msec); 0 zone resets 00:31:33.078 slat (nsec): min=1984, max=88295, avg=2726.78, stdev=1145.36 00:31:33.078 clat (usec): min=1681, max=12465, avg=6446.39, stdev=538.60 00:31:33.078 lat (usec): min=1687, max=12467, avg=6449.12, stdev=538.55 00:31:33.078 clat percentiles (usec): 00:31:33.078 | 1.00th=[ 5276], 5.00th=[ 5669], 10.00th=[ 5800], 20.00th=[ 6063], 00:31:33.078 | 30.00th=[ 6194], 40.00th=[ 6325], 50.00th=[ 6456], 60.00th=[ 6587], 00:31:33.078 | 70.00th=[ 6718], 80.00th=[ 6849], 90.00th=[ 7046], 95.00th=[ 7242], 00:31:33.078 | 99.00th=[ 7570], 99.50th=[ 7767], 99.90th=[10814], 99.95th=[11731], 00:31:33.078 | 99.99th=[11994] 00:31:33.078 bw ( KiB/s): min=35200, max=35544, per=100.00%, avg=35360.00, stdev=156.90, samples=4 00:31:33.078 iops : min= 8800, max= 8886, avg=8840.00, stdev=39.23, samples=4 00:31:33.078 lat (msec) : 2=0.02%, 4=0.11%, 10=99.72%, 20=0.16% 00:31:33.078 cpu : usr=59.82%, sys=35.09%, ctx=78, majf=0, minf=37 00:31:33.078 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:31:33.078 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.078 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:33.078 issued rwts: total=17713,17739,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.078 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:33.078 00:31:33.078 Run status group 0 (all jobs): 00:31:33.078 READ: bw=34.5MiB/s (36.1MB/s), 34.5MiB/s-34.5MiB/s (36.1MB/s-36.1MB/s), io=69.2MiB (72.6MB), run=2007-2007msec 00:31:33.078 WRITE: bw=34.5MiB/s (36.2MB/s), 34.5MiB/s-34.5MiB/s (36.2MB/s-36.2MB/s), io=69.3MiB (72.7MB), run=2007-2007msec 00:31:33.078 14:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:33.078 14:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:33.078 14:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:33.078 14:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:33.078 14:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1339 -- # local sanitizers 00:31:33.078 14:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:33.078 14:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:31:33.078 14:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:33.078 14:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:33.078 14:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:33.078 14:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:31:33.078 14:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:33.078 14:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:33.078 14:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:33.078 14:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:33.078 14:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:33.078 14:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:33.078 14:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:33.078 14:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:33.078 14:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:33.078 14:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:33.078 14:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:33.078 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:31:33.078 fio-3.35 00:31:33.078 Starting 1 thread 00:31:35.612 00:31:35.612 test: (groupid=0, jobs=1): err= 0: pid=1486785: Sat Nov 2 14:47:27 2024 00:31:35.612 read: IOPS=8066, BW=126MiB/s (132MB/s)(253MiB/2009msec) 00:31:35.612 slat (nsec): min=2791, max=96748, avg=3817.49, stdev=1714.73 00:31:35.612 clat (usec): min=2676, max=17277, avg=9307.18, stdev=2471.25 00:31:35.612 lat (usec): min=2680, max=17281, avg=9310.99, stdev=2471.26 00:31:35.612 clat percentiles (usec): 00:31:35.612 | 1.00th=[ 4621], 5.00th=[ 5604], 10.00th=[ 6390], 20.00th=[ 7177], 00:31:35.612 | 30.00th=[ 7832], 40.00th=[ 8455], 50.00th=[ 9110], 60.00th=[ 9765], 00:31:35.612 | 70.00th=[10421], 80.00th=[11338], 90.00th=[12780], 95.00th=[13698], 00:31:35.612 | 99.00th=[15926], 99.50th=[16450], 99.90th=[17171], 99.95th=[17171], 00:31:35.612 | 99.99th=[17171] 00:31:35.612 bw ( KiB/s): min=57824, max=76448, per=52.22%, avg=67392.00, stdev=8986.55, samples=4 00:31:35.612 iops : min= 3614, max= 4778, avg=4212.00, stdev=561.66, samples=4 00:31:35.612 write: IOPS=4730, BW=73.9MiB/s 
(77.5MB/s)(137MiB/1859msec); 0 zone resets 00:31:35.612 slat (usec): min=30, max=203, avg=34.23, stdev= 6.05 00:31:35.612 clat (usec): min=3881, max=19724, avg=11311.06, stdev=2032.98 00:31:35.612 lat (usec): min=3913, max=19755, avg=11345.28, stdev=2033.51 00:31:35.612 clat percentiles (usec): 00:31:35.612 | 1.00th=[ 7373], 5.00th=[ 8225], 10.00th=[ 8979], 20.00th=[ 9765], 00:31:35.612 | 30.00th=[10159], 40.00th=[10552], 50.00th=[11076], 60.00th=[11600], 00:31:35.612 | 70.00th=[12125], 80.00th=[12911], 90.00th=[14222], 95.00th=[15139], 00:31:35.612 | 99.00th=[16581], 99.50th=[16909], 99.90th=[17695], 99.95th=[17957], 00:31:35.612 | 99.99th=[19792] 00:31:35.612 bw ( KiB/s): min=60192, max=80224, per=92.46%, avg=69984.00, stdev=9793.71, samples=4 00:31:35.612 iops : min= 3762, max= 5014, avg=4374.00, stdev=612.11, samples=4 00:31:35.612 lat (msec) : 4=0.28%, 10=50.11%, 20=49.61% 00:31:35.612 cpu : usr=73.07%, sys=23.25%, ctx=24, majf=0, minf=57 00:31:35.612 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:31:35.613 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.613 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:35.613 issued rwts: total=16205,8794,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:35.613 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:35.613 00:31:35.613 Run status group 0 (all jobs): 00:31:35.613 READ: bw=126MiB/s (132MB/s), 126MiB/s-126MiB/s (132MB/s-132MB/s), io=253MiB (266MB), run=2009-2009msec 00:31:35.613 WRITE: bw=73.9MiB/s (77.5MB/s), 73.9MiB/s-73.9MiB/s (77.5MB/s-77.5MB/s), io=137MiB (144MB), run=1859-1859msec 00:31:35.613 14:47:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:35.870 14:47:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:31:35.870 14:47:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:31:35.870 14:47:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:31:35.870 14:47:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # bdfs=() 00:31:35.870 14:47:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # local bdfs 00:31:35.870 14:47:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:35.870 14:47:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:35.871 14:47:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:31:35.871 14:47:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:31:35.871 14:47:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:88:00.0 00:31:35.871 14:47:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:31:39.150 Nvme0n1 00:31:39.150 14:47:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:31:42.430 14:47:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # 
ls_guid=c437015d-198d-45c6-9925-ab90627d6a13 00:31:42.430 14:47:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb c437015d-198d-45c6-9925-ab90627d6a13 00:31:42.430 14:47:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=c437015d-198d-45c6-9925-ab90627d6a13 00:31:42.430 14:47:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:31:42.430 14:47:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:31:42.430 14:47:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:31:42.430 14:47:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:42.430 14:47:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:31:42.430 { 00:31:42.430 "uuid": "c437015d-198d-45c6-9925-ab90627d6a13", 00:31:42.430 "name": "lvs_0", 00:31:42.430 "base_bdev": "Nvme0n1", 00:31:42.430 "total_data_clusters": 930, 00:31:42.430 "free_clusters": 930, 00:31:42.430 "block_size": 512, 00:31:42.430 "cluster_size": 1073741824 00:31:42.430 } 00:31:42.430 ]' 00:31:42.430 14:47:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="c437015d-198d-45c6-9925-ab90627d6a13") .free_clusters' 00:31:42.430 14:47:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=930 00:31:42.430 14:47:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="c437015d-198d-45c6-9925-ab90627d6a13") .cluster_size' 00:31:42.430 14:47:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:31:42.430 14:47:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=952320 00:31:42.430 14:47:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 952320 00:31:42.430 952320 00:31:42.430 14:47:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:31:42.688 bd8e8257-d4d1-4de8-872a-6c8f6b72c880 00:31:42.688 14:47:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:31:42.945 14:47:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:31:43.203 14:47:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:43.461 14:47:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:43.461 14:47:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:43.461 14:47:35 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:43.461 14:47:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:43.461 14:47:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:43.461 14:47:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:43.461 14:47:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:31:43.461 14:47:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:43.461 14:47:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:43.461 14:47:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:43.461 14:47:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:31:43.461 14:47:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:43.461 14:47:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:43.461 14:47:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:43.461 14:47:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:43.461 14:47:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:43.461 14:47:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:43.461 14:47:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:43.461 14:47:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:43.461 14:47:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:43.461 14:47:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:43.461 14:47:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:43.719 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:43.719 fio-3.35 00:31:43.719 Starting 1 thread 00:31:46.321 00:31:46.321 test: (groupid=0, jobs=1): err= 0: pid=1488184: Sat Nov 2 14:47:38 2024 00:31:46.321 read: IOPS=5933, BW=23.2MiB/s (24.3MB/s)(46.5MiB/2008msec) 00:31:46.321 slat (usec): min=2, max=155, avg= 2.74, stdev= 2.13 00:31:46.321 clat (usec): min=995, max=171653, avg=11898.07, stdev=11703.43 00:31:46.321 lat (usec): min=998, max=171692, avg=11900.82, stdev=11703.73 00:31:46.321 clat percentiles (msec): 00:31:46.321 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 11], 00:31:46.321 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 12], 60.00th=[ 12], 00:31:46.321 | 70.00th=[ 12], 80.00th=[ 12], 90.00th=[ 13], 95.00th=[ 13], 00:31:46.321 | 99.00th=[ 14], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 171], 00:31:46.321 | 
99.99th=[ 171] 00:31:46.321 bw ( KiB/s): min=16928, max=26112, per=99.76%, avg=23678.00, stdev=4505.73, samples=4 00:31:46.321 iops : min= 4232, max= 6528, avg=5919.50, stdev=1126.43, samples=4 00:31:46.321 write: IOPS=5927, BW=23.2MiB/s (24.3MB/s)(46.5MiB/2008msec); 0 zone resets 00:31:46.321 slat (usec): min=2, max=126, avg= 2.83, stdev= 1.65 00:31:46.321 clat (usec): min=404, max=169566, avg=9571.15, stdev=10981.05 00:31:46.321 lat (usec): min=407, max=169572, avg=9573.98, stdev=10981.36 00:31:46.321 clat percentiles (msec): 00:31:46.321 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 9], 00:31:46.321 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 9], 00:31:46.321 | 70.00th=[ 10], 80.00th=[ 10], 90.00th=[ 10], 95.00th=[ 11], 00:31:46.321 | 99.00th=[ 12], 99.50th=[ 17], 99.90th=[ 169], 99.95th=[ 169], 00:31:46.321 | 99.99th=[ 169] 00:31:46.321 bw ( KiB/s): min=17960, max=25728, per=99.97%, avg=23704.00, stdev=3832.23, samples=4 00:31:46.321 iops : min= 4490, max= 6432, avg=5926.00, stdev=958.06, samples=4 00:31:46.321 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:31:46.321 lat (msec) : 2=0.03%, 4=0.12%, 10=53.17%, 20=46.12%, 250=0.54% 00:31:46.321 cpu : usr=57.75%, sys=38.52%, ctx=106, majf=0, minf=37 00:31:46.321 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:31:46.321 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.321 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:46.321 issued rwts: total=11915,11903,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:46.321 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:46.321 00:31:46.321 Run status group 0 (all jobs): 00:31:46.321 READ: bw=23.2MiB/s (24.3MB/s), 23.2MiB/s-23.2MiB/s (24.3MB/s-24.3MB/s), io=46.5MiB (48.8MB), run=2008-2008msec 00:31:46.321 WRITE: bw=23.2MiB/s (24.3MB/s), 23.2MiB/s-23.2MiB/s (24.3MB/s-24.3MB/s), io=46.5MiB (48.8MB), run=2008-2008msec 00:31:46.321 14:47:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:46.321 14:47:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:31:47.704 14:47:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=507ea91d-e914-4b8e-a45b-791a9db1d96f 00:31:47.704 14:47:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 507ea91d-e914-4b8e-a45b-791a9db1d96f 00:31:47.704 14:47:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=507ea91d-e914-4b8e-a45b-791a9db1d96f 00:31:47.704 14:47:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:31:47.704 14:47:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:31:47.704 14:47:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:31:47.704 14:47:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:47.704 14:47:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:31:47.704 { 00:31:47.704 "uuid": "c437015d-198d-45c6-9925-ab90627d6a13", 00:31:47.704 "name": "lvs_0", 00:31:47.704 "base_bdev": "Nvme0n1", 00:31:47.704 "total_data_clusters": 930, 
00:31:47.704 "free_clusters": 0, 00:31:47.704 "block_size": 512, 00:31:47.704 "cluster_size": 1073741824 00:31:47.704 }, 00:31:47.704 { 00:31:47.704 "uuid": "507ea91d-e914-4b8e-a45b-791a9db1d96f", 00:31:47.704 "name": "lvs_n_0", 00:31:47.704 "base_bdev": "bd8e8257-d4d1-4de8-872a-6c8f6b72c880", 00:31:47.704 "total_data_clusters": 237847, 00:31:47.704 "free_clusters": 237847, 00:31:47.704 "block_size": 512, 00:31:47.704 "cluster_size": 4194304 00:31:47.704 } 00:31:47.704 ]' 00:31:47.704 14:47:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="507ea91d-e914-4b8e-a45b-791a9db1d96f") .free_clusters' 00:31:47.961 14:47:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=237847 00:31:47.961 14:47:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="507ea91d-e914-4b8e-a45b-791a9db1d96f") .cluster_size' 00:31:47.961 14:47:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:31:47.961 14:47:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=951388 00:31:47.961 14:47:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 951388 00:31:47.961 951388 00:31:47.961 14:47:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:31:48.525 c7fb8d8d-e0e8-420e-8e08-5e621b85f29c 00:31:48.525 14:47:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:31:48.782 14:47:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:31:49.040 14:47:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:31:49.297 14:47:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:49.297 14:47:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:49.297 14:47:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:49.297 14:47:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:49.297 14:47:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:49.297 14:47:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:49.297 14:47:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:31:49.554 14:47:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:49.554 14:47:41 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:49.554 14:47:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:49.554 14:47:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:31:49.554 14:47:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:49.554 14:47:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:49.554 14:47:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:49.554 14:47:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:49.554 14:47:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:49.554 14:47:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:49.554 14:47:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:49.554 14:47:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:49.554 14:47:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:49.555 14:47:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:49.555 14:47:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:49.555 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:49.555 fio-3.35 00:31:49.555 Starting 1 thread 00:31:52.081 00:31:52.081 test: (groupid=0, jobs=1): err= 0: pid=1488924: Sat Nov 2 14:47:43 2024 00:31:52.081 read: IOPS=5674, BW=22.2MiB/s (23.2MB/s)(44.5MiB/2008msec) 00:31:52.081 slat (nsec): min=1957, max=177060, avg=2804.10, stdev=2589.00 00:31:52.081 clat (usec): min=4326, max=20583, avg=12447.31, stdev=1099.52 00:31:52.081 lat (usec): min=4356, max=20586, avg=12450.12, stdev=1099.36 00:31:52.081 clat percentiles (usec): 00:31:52.081 | 1.00th=[10028], 5.00th=[10683], 10.00th=[11076], 20.00th=[11600], 00:31:52.081 | 30.00th=[11994], 40.00th=[12125], 50.00th=[12387], 60.00th=[12780], 00:31:52.081 | 70.00th=[13042], 80.00th=[13304], 90.00th=[13698], 95.00th=[14091], 00:31:52.081 | 99.00th=[14877], 99.50th=[15270], 99.90th=[19268], 99.95th=[20317], 00:31:52.081 | 99.99th=[20579] 00:31:52.081 bw ( KiB/s): min=21248, max=23168, per=99.78%, avg=22650.00, stdev=935.36, samples=4 00:31:52.081 iops : min= 5312, max= 5792, avg=5662.50, stdev=233.84, samples=4 00:31:52.081 write: IOPS=5645, BW=22.1MiB/s (23.1MB/s)(44.3MiB/2008msec); 0 zone resets 00:31:52.081 slat (usec): min=2, max=120, avg= 2.88, stdev= 1.81 00:31:52.081 clat (usec): min=3319, max=18692, avg=9946.87, stdev=929.88 00:31:52.081 lat (usec): min=3329, max=18695, avg=9949.74, stdev=929.84 00:31:52.081 clat percentiles (usec): 00:31:52.081 | 1.00th=[ 7767], 5.00th=[ 8586], 10.00th=[ 8848], 20.00th=[ 9241], 00:31:52.081 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[ 9896], 60.00th=[10159], 00:31:52.081 | 
70.00th=[10421], 80.00th=[10683], 90.00th=[11076], 95.00th=[11338], 00:31:52.081 | 99.00th=[11994], 99.50th=[12387], 99.90th=[16057], 99.95th=[16319], 00:31:52.081 | 99.99th=[18744] 00:31:52.081 bw ( KiB/s): min=22208, max=22976, per=99.90%, avg=22560.00, stdev=320.00, samples=4 00:31:52.081 iops : min= 5552, max= 5744, avg=5640.00, stdev=80.00, samples=4 00:31:52.081 lat (msec) : 4=0.04%, 10=26.71%, 20=73.22%, 50=0.04% 00:31:52.081 cpu : usr=57.10%, sys=39.41%, ctx=87, majf=0, minf=37 00:31:52.081 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:31:52.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:52.081 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:52.081 issued rwts: total=11395,11337,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:52.081 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:52.081 00:31:52.081 Run status group 0 (all jobs): 00:31:52.081 READ: bw=22.2MiB/s (23.2MB/s), 22.2MiB/s-22.2MiB/s (23.2MB/s-23.2MB/s), io=44.5MiB (46.7MB), run=2008-2008msec 00:31:52.081 WRITE: bw=22.1MiB/s (23.1MB/s), 22.1MiB/s-22.1MiB/s (23.1MB/s-23.1MB/s), io=44.3MiB (46.4MB), run=2008-2008msec 00:31:52.081 14:47:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:31:52.339 14:47:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:31:52.339 14:47:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:31:56.518 14:47:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:31:56.518 14:47:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:31:59.802 14:47:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:31:59.802 14:47:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:32:01.705 14:47:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:01.705 14:47:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:32:01.705 14:47:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:32:01.705 14:47:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@512 -- # nvmfcleanup 00:32:01.705 14:47:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:32:01.705 14:47:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:01.705 14:47:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:32:01.705 14:47:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:01.705 14:47:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:01.705 rmmod nvme_tcp 00:32:01.705 rmmod nvme_fabrics 00:32:01.705 rmmod nvme_keyring 00:32:01.705 14:47:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:01.705 14:47:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 
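The module unloads above are the start of nvmftestfini, and the remaining lines below finish it: the nvmf_tgt process is killed, the SPDK_NVMF iptables rules are filtered back out, the namespace is torn down and cvl_0_1's address is flushed. Read alongside the per-test cleanup that ran just before it (the host/fio.sh@72-@80 steps in the trace above), the teardown amounts to the sketch below; $rpc and $nvmfpid are placeholders for the rpc.py path and target PID this run used, and the explicit "ip netns delete" stands in for _remove_spdk_ns, whose output is suppressed in the log.

    rpc=scripts/rpc.py                                      # relative to an SPDK checkout
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3   # subsystems/listeners first
    $rpc bdev_lvol_delete lvs_n_0/lbd_nest_0                # volume stack, bottom-up
    $rpc bdev_lvol_delete_lvstore -l lvs_n_0
    $rpc bdev_lvol_delete lvs_0/lbd_0
    $rpc bdev_lvol_delete_lvstore -l lvs_0
    $rpc bdev_nvme_detach_controller Nvme0                  # release the PCIe NVMe drive
    modprobe -r nvme-tcp nvme-fabrics                       # host-side initiator modules
    kill "$nvmfpid"                                         # the nvmf_tgt started earlier
    iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop only the test's rules
    ip netns delete cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1

Note the ordering: lvol bdevs are deleted before their lvstores, and the nested store lvs_n_0, which sits on lvs_0/lbd_0, has to go before lvs_0 itself, which is why the log deletes them in that sequence.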
00:32:01.705 14:47:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:32:01.705 14:47:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@513 -- # '[' -n 1486092 ']' 00:32:01.705 14:47:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # killprocess 1486092 00:32:01.705 14:47:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 1486092 ']' 00:32:01.705 14:47:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 1486092 00:32:01.705 14:47:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:32:01.705 14:47:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:01.705 14:47:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1486092 00:32:01.705 14:47:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:01.705 14:47:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:01.705 14:47:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1486092' 00:32:01.705 killing process with pid 1486092 00:32:01.705 14:47:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 1486092 00:32:01.705 14:47:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 1486092 00:32:01.965 14:47:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:32:01.965 14:47:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:32:01.965 14:47:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:32:01.965 14:47:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:32:01.965 14:47:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # iptables-save 00:32:01.965 14:47:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:32:01.965 14:47:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # iptables-restore 00:32:01.965 14:47:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:01.965 14:47:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:01.965 14:47:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:01.965 14:47:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:01.965 14:47:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:04.500 14:47:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:04.500 00:32:04.500 real 0m38.171s 00:32:04.500 user 2m27.128s 00:32:04.500 sys 0m6.994s 00:32:04.500 14:47:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:04.500 14:47:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.500 ************************************ 00:32:04.500 END TEST nvmf_fio_host 00:32:04.500 ************************************ 00:32:04.500 14:47:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:32:04.500 14:47:55 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:04.500 14:47:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:04.500 14:47:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.500 ************************************ 00:32:04.500 START TEST nvmf_failover 00:32:04.500 ************************************ 00:32:04.500 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:32:04.500 * Looking for test storage... 00:32:04.500 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:04.500 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:32:04.500 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lcov --version 00:32:04.500 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:32:04.500 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:32:04.500 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:04.500 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:04.500 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:04.500 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:32:04.500 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:32:04.500 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:32:04.500 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:32:04.500 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:32:04.500 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:32:04.500 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:32:04.500 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:04.500 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:32:04.500 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:32:04.500 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:04.500 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:04.500 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:32:04.500 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:32:04.500 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:04.500 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:32:04.500 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:32:04.500 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:32:04.500 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:32:04.500 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:04.500 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:32:04.500 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:32:04.500 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:04.500 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:04.500 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:32:04.500 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:04.500 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:32:04.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:04.500 --rc genhtml_branch_coverage=1 00:32:04.500 --rc genhtml_function_coverage=1 00:32:04.500 --rc genhtml_legend=1 00:32:04.500 --rc geninfo_all_blocks=1 00:32:04.500 --rc geninfo_unexecuted_blocks=1 00:32:04.500 00:32:04.500 ' 00:32:04.500 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:04.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:04.500 --rc genhtml_branch_coverage=1 00:32:04.500 --rc genhtml_function_coverage=1 00:32:04.500 --rc genhtml_legend=1 00:32:04.500 --rc geninfo_all_blocks=1 00:32:04.500 --rc geninfo_unexecuted_blocks=1 00:32:04.500 00:32:04.500 ' 00:32:04.500 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:04.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:04.500 --rc genhtml_branch_coverage=1 00:32:04.500 --rc genhtml_function_coverage=1 00:32:04.500 --rc genhtml_legend=1 00:32:04.500 --rc geninfo_all_blocks=1 00:32:04.500 --rc geninfo_unexecuted_blocks=1 00:32:04.500 00:32:04.500 ' 00:32:04.500 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:04.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:04.500 --rc genhtml_branch_coverage=1 00:32:04.500 --rc genhtml_function_coverage=1 00:32:04.500 --rc genhtml_legend=1 00:32:04.500 --rc geninfo_all_blocks=1 00:32:04.500 --rc geninfo_unexecuted_blocks=1 00:32:04.500 00:32:04.500 ' 00:32:04.500 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:04.500 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:32:04.500 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:04.500 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:04.500 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:04.500 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:04.500 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:04.500 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:04.500 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:04.500 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:04.500 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:04.500 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:04.500 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:04.500 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:04.500 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:04.500 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:04.500 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:04.500 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:04.500 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:04.500 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:32:04.500 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:04.500 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:04.500 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:04.500 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:04.500 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:04.501 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:04.501 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:32:04.501 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:04.501 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:32:04.501 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:04.501 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:04.501 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:04.501 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:04.501 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:04.501 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:04.501 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:04.501 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:04.501 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:04.501 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:04.501 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:04.501 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:04.501 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
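failover.sh drives the target through the rpc_py wrapper and the two malloc constants traced just above. A condensed sketch of the provisioning calls this run issues a few steps further down (same names, sizes, and 10.0.0.2 target address; the per-call output appears later in the log):

    #!/usr/bin/env bash
    # Condensed from host/failover.sh@22-28 as traced later in this run.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    MALLOC_BDEV_SIZE=64     # MiB
    MALLOC_BLOCK_SIZE=512   # bytes

    $rpc_py nvmf_create_transport -t tcp -o -u 8192                                    # TCP transport, options as used in this run
    $rpc_py bdev_malloc_create $MALLOC_BDEV_SIZE $MALLOC_BLOCK_SIZE -b Malloc0         # RAM-backed namespace bdev
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001  # -a: allow any host
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # Three listeners on the same address so bdevperf can fail over between ports.
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422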
00:32:04.501 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:04.501 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:32:04.501 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:32:04.501 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:04.501 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # prepare_net_devs 00:32:04.501 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@434 -- # local -g is_hw=no 00:32:04.501 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # remove_spdk_ns 00:32:04.501 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:04.501 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:04.501 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:04.501 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:32:04.501 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:32:04.501 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:32:04.501 14:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:06.406 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:06.406 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:32:06.406 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:06.406 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:06.406 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:06.406 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:06.406 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:06.406 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:32:06.406 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:06.406 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:32:06.406 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:32:06.406 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:32:06.406 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:32:06.406 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:32:06.406 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:32:06.406 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:06.406 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:06.406 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:06.406 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:06.406 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:06.406 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:06.406 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:06.406 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:06.406 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:06.406 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:06.407 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:06.407 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@407 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ up == up ]] 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:06.407 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ up == up ]] 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:06.407 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # is_hw=yes 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:06.407 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:06.407 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.309 ms 00:32:06.407 00:32:06.407 --- 10.0.0.2 ping statistics --- 00:32:06.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:06.407 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:06.407 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:06.407 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:32:06.407 00:32:06.407 --- 10.0.0.1 ping statistics --- 00:32:06.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:06.407 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # return 0 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # nvmfpid=1492182 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # waitforlisten 1492182 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1492182 ']' 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:06.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:06.407 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:06.407 [2024-11-02 14:47:58.329849] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:32:06.407 [2024-11-02 14:47:58.329936] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:06.407 [2024-11-02 14:47:58.394912] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:06.666 [2024-11-02 14:47:58.484381] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:32:06.666 [2024-11-02 14:47:58.484446] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:06.666 [2024-11-02 14:47:58.484461] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:06.666 [2024-11-02 14:47:58.484473] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:06.666 [2024-11-02 14:47:58.484498] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:06.666 [2024-11-02 14:47:58.484557] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:32:06.666 [2024-11-02 14:47:58.484610] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:32:06.666 [2024-11-02 14:47:58.484614] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:32:06.666 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:06.666 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:32:06.666 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:32:06.666 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:06.666 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:06.666 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:06.666 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:06.924 [2024-11-02 14:47:58.899407] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:06.924 14:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:32:07.183 Malloc0 00:32:07.183 14:47:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:07.442 14:47:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:08.008 14:47:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:08.008 [2024-11-02 14:48:00.058721] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:08.266 14:48:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:08.524 [2024-11-02 14:48:00.355547] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:08.524 14:48:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:08.782 [2024-11-02 14:48:00.652449] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:32:08.782 14:48:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1492547 00:32:08.782 14:48:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:32:08.782 14:48:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:08.782 14:48:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1492547 /var/tmp/bdevperf.sock 00:32:08.782 14:48:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1492547 ']' 00:32:08.782 14:48:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:08.782 14:48:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:08.782 14:48:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:08.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:08.782 14:48:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:08.782 14:48:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:09.040 14:48:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:09.040 14:48:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:32:09.040 14:48:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:09.297 NVMe0n1 00:32:09.297 14:48:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:09.865 00:32:09.865 14:48:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1492707 00:32:09.865 14:48:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:09.865 14:48:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:32:10.799 14:48:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:11.057 [2024-11-02 14:48:03.004812] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a7b40 is same with the state(6) to be set 00:32:11.057 [2024-11-02 14:48:03.004890] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a7b40 is same with the state(6) to be set 00:32:11.057 [2024-11-02 14:48:03.004927] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a7b40 is same with the state(6) to be set 00:32:11.057 [2024-11-02 14:48:03.004942] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a7b40 is same with the state(6) to be set 00:32:11.057 [2024-11-02 14:48:03.004954] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a7b40 is same with the state(6) to be set 00:32:11.057 [2024-11-02 14:48:03.004966] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a7b40 is same with the state(6) to be set 00:32:11.057 [2024-11-02 14:48:03.004979] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a7b40 is same with the state(6) to be set 00:32:11.057 [2024-11-02 14:48:03.004991] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a7b40 is same with the state(6) to be set 00:32:11.057 [2024-11-02 14:48:03.005004] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a7b40 is same with the state(6) to be set 00:32:11.057 [2024-11-02 14:48:03.005017] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a7b40 is same with the state(6) to be set 00:32:11.057 [2024-11-02 14:48:03.005029] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a7b40 is same with the state(6) to be set 00:32:11.057 [2024-11-02 14:48:03.005041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a7b40 is same with the state(6) to be set 00:32:11.057 [2024-11-02 14:48:03.005053] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a7b40 is same with the state(6) to be set 00:32:11.057 [2024-11-02 14:48:03.005066] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a7b40 is same with the state(6) to be set 00:32:11.057 [2024-11-02 14:48:03.005078] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a7b40 is same with the state(6) to be set 00:32:11.057 [2024-11-02 14:48:03.005091] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a7b40 is same with the state(6) to be set 00:32:11.057 [2024-11-02 14:48:03.005103] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a7b40 is same with the state(6) to be set 00:32:11.057 [2024-11-02 14:48:03.005115] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a7b40 is same with the state(6) to be set 00:32:11.057 [2024-11-02 14:48:03.005128] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a7b40 is same with the state(6) to be set 00:32:11.057 [2024-11-02 14:48:03.005141] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a7b40 is same with the state(6) to be set 00:32:11.057 [2024-11-02 14:48:03.005153] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a7b40 is same with the state(6) to be set 00:32:11.057 [2024-11-02 14:48:03.005165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a7b40 is same with the state(6) to be set 00:32:11.057 [2024-11-02 14:48:03.005177] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a7b40 is same with the state(6) to be set 00:32:11.057 [2024-11-02 14:48:03.005190] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a7b40 is same with the state(6) to be set 00:32:11.057 [2024-11-02 14:48:03.005201] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a7b40 is same with the 
state(6) to be set 00:32:11.057 [2024-11-02 14:48:03.005214] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a7b40 is same with the state(6) to be set 00:32:11.057 [2024-11-02 14:48:03.005226] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a7b40 is same with the state(6) to be set 00:32:11.057 [2024-11-02 14:48:03.005238] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a7b40 is same with the state(6) to be set 00:32:11.057 [2024-11-02 14:48:03.005253] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a7b40 is same with the state(6) to be set 00:32:11.057 [2024-11-02 14:48:03.005280] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a7b40 is same with the state(6) to be set 00:32:11.057 [2024-11-02 14:48:03.005293] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a7b40 is same with the state(6) to be set 00:32:11.057 [2024-11-02 14:48:03.005306] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a7b40 is same with the state(6) to be set 00:32:11.057 [2024-11-02 14:48:03.005318] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a7b40 is same with the state(6) to be set 00:32:11.057 [2024-11-02 14:48:03.005330] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a7b40 is same with the state(6) to be set 00:32:11.057 [2024-11-02 14:48:03.005343] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a7b40 is same with the state(6) to be set 00:32:11.057 [2024-11-02 14:48:03.005356] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a7b40 is same with the state(6) to be set 00:32:11.057 [2024-11-02 14:48:03.005368] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a7b40 is same with the state(6) to be set 00:32:11.057 [2024-11-02 14:48:03.005381] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a7b40 is same with the state(6) to be set 00:32:11.057 [2024-11-02 14:48:03.005393] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a7b40 is same with the state(6) to be set 00:32:11.058 [2024-11-02 14:48:03.005405] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a7b40 is same with the state(6) to be set 00:32:11.058 [2024-11-02 14:48:03.005416] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a7b40 is same with the state(6) to be set 00:32:11.058 [2024-11-02 14:48:03.005429] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a7b40 is same with the state(6) to be set 00:32:11.058 14:48:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:32:14.343 14:48:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:14.601 00:32:14.601 14:48:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:14.858 14:48:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:32:18.143 14:48:09 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:18.143 [2024-11-02 14:48:10.003150] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:18.143 14:48:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:32:19.075 14:48:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:19.335 14:48:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1492707 00:32:25.905 { 00:32:25.905 "results": [ 00:32:25.905 { 00:32:25.905 "job": "NVMe0n1", 00:32:25.905 "core_mask": "0x1", 00:32:25.905 "workload": "verify", 00:32:25.905 "status": "finished", 00:32:25.905 "verify_range": { 00:32:25.905 "start": 0, 00:32:25.905 "length": 16384 00:32:25.905 }, 00:32:25.905 "queue_depth": 128, 00:32:25.905 "io_size": 4096, 00:32:25.905 "runtime": 15.002761, 00:32:25.905 "iops": 8241.749635283799, 00:32:25.905 "mibps": 32.19433451282734, 00:32:25.905 "io_failed": 11741, 00:32:25.905 "io_timeout": 0, 00:32:25.905 "avg_latency_us": 14156.822115753392, 00:32:25.905 "min_latency_us": 831.3362962962963, 00:32:25.905 "max_latency_us": 15728.64 00:32:25.905 } 00:32:25.905 ], 00:32:25.905 "core_count": 1 00:32:25.905 } 00:32:25.905 14:48:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1492547 00:32:25.905 14:48:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1492547 ']' 00:32:25.905 14:48:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1492547 00:32:25.905 14:48:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:32:25.905 14:48:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:25.905 14:48:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1492547 00:32:25.906 14:48:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:25.906 14:48:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:25.906 14:48:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1492547' 00:32:25.906 killing process with pid 1492547 00:32:25.906 14:48:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1492547 00:32:25.906 14:48:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1492547 00:32:25.906 14:48:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:25.906 [2024-11-02 14:48:00.721025] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:32:25.906 [2024-11-02 14:48:00.721122] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1492547 ] 00:32:25.906 [2024-11-02 14:48:00.783156] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:25.906 [2024-11-02 14:48:00.870016] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:32:25.906 Running I/O for 15 seconds... 00:32:25.906 8290.00 IOPS, 32.38 MiB/s [2024-11-02T13:48:17.961Z] [2024-11-02 14:48:03.006039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:77440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.906 [2024-11-02 14:48:03.006077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.906 [2024-11-02 14:48:03.006105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:77448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.906 [2024-11-02 14:48:03.006121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.906 [2024-11-02 14:48:03.006137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:77456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.906 [2024-11-02 14:48:03.006150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.906 [2024-11-02 14:48:03.006165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:77464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.906 [2024-11-02 14:48:03.006179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.906 [2024-11-02 14:48:03.006194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:77472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.906 [2024-11-02 14:48:03.006208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.906 [2024-11-02 14:48:03.006223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:77480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.906 [2024-11-02 14:48:03.006236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.906 [2024-11-02 14:48:03.006268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:77488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.906 [2024-11-02 14:48:03.006285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.906 [2024-11-02 14:48:03.006300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:77496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.906 [2024-11-02 14:48:03.006313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.906 [2024-11-02 14:48:03.006328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:77504 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.906 [2024-11-02 14:48:03.006342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.906 [2024-11-02 14:48:03.006357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:77512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.906 [2024-11-02 14:48:03.006370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.906 [2024-11-02 14:48:03.006385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:77520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.906 [2024-11-02 14:48:03.006398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.906 [2024-11-02 14:48:03.006421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:77528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.906 [2024-11-02 14:48:03.006436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.906 [2024-11-02 14:48:03.006451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:77536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.906 [2024-11-02 14:48:03.006465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.906 [2024-11-02 14:48:03.006480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:77544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.906 [2024-11-02 14:48:03.006493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.906 [2024-11-02 14:48:03.006508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:77552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.906 [2024-11-02 14:48:03.006521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.906 [2024-11-02 14:48:03.006536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:77560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.906 [2024-11-02 14:48:03.006557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.906 [2024-11-02 14:48:03.006572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:77568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.906 [2024-11-02 14:48:03.006585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.906 [2024-11-02 14:48:03.006600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:77576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.906 [2024-11-02 14:48:03.006621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.906 [2024-11-02 14:48:03.006636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:77584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:25.906 [2024-11-02 14:48:03.006649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.906 [2024-11-02 14:48:03.006665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:77592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.906 [2024-11-02 14:48:03.006678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.906 [2024-11-02 14:48:03.006693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:77600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.906 [2024-11-02 14:48:03.006706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.906 [2024-11-02 14:48:03.006721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:77608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.906 [2024-11-02 14:48:03.006734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.906 [2024-11-02 14:48:03.006749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:77616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.906 [2024-11-02 14:48:03.006762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.906 [2024-11-02 14:48:03.006777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:77624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.906 [2024-11-02 14:48:03.006809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.906 [2024-11-02 14:48:03.006825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:77632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.906 [2024-11-02 14:48:03.006838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.906 [2024-11-02 14:48:03.006852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:77640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.906 [2024-11-02 14:48:03.006865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.906 [2024-11-02 14:48:03.006880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:77648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.906 [2024-11-02 14:48:03.006893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.906 [2024-11-02 14:48:03.006907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:77656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.906 [2024-11-02 14:48:03.006920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.906 [2024-11-02 14:48:03.006935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:77664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.906 [2024-11-02 14:48:03.006948] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.906 [2024-11-02 14:48:03.006962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:77672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.906 [2024-11-02 14:48:03.006975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.906 [2024-11-02 14:48:03.006990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:77680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.906 [2024-11-02 14:48:03.007003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.906 [2024-11-02 14:48:03.007017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:77688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.906 [2024-11-02 14:48:03.007030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.906 [2024-11-02 14:48:03.007044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:77696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.906 [2024-11-02 14:48:03.007057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.906 [2024-11-02 14:48:03.007072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:77704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.906 [2024-11-02 14:48:03.007085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.906 [2024-11-02 14:48:03.007099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:77712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.906 [2024-11-02 14:48:03.007113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.907 [2024-11-02 14:48:03.007127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:77720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.907 [2024-11-02 14:48:03.007140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.907 [2024-11-02 14:48:03.007158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:77728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.907 [2024-11-02 14:48:03.007171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.907 [2024-11-02 14:48:03.007186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:77736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.907 [2024-11-02 14:48:03.007198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.907 [2024-11-02 14:48:03.007213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:77744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.907 [2024-11-02 14:48:03.007226] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.907 [2024-11-02 14:48:03.007269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:77752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.907 [2024-11-02 14:48:03.007285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.907 [2024-11-02 14:48:03.007300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:77792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:25.907 [2024-11-02 14:48:03.007313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.907 [2024-11-02 14:48:03.007329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:77800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:25.907 [2024-11-02 14:48:03.007343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.907 [2024-11-02 14:48:03.007358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:77808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:25.907 [2024-11-02 14:48:03.007370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.907 [2024-11-02 14:48:03.007385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:77816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:25.907 [2024-11-02 14:48:03.007399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.907 [2024-11-02 14:48:03.007414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:77824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:25.907 [2024-11-02 14:48:03.007427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.907 [2024-11-02 14:48:03.007442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:77760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.907 [2024-11-02 14:48:03.007455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.907 [2024-11-02 14:48:03.007470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:77768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.907 [2024-11-02 14:48:03.007483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.907 [2024-11-02 14:48:03.007498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:77776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.907 [2024-11-02 14:48:03.007511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.907 [2024-11-02 14:48:03.007526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:77784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.907 [2024-11-02 14:48:03.007543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.907 [2024-11-02 14:48:03.007585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:77832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:25.907 [2024-11-02 14:48:03.007598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.907 [2024-11-02 14:48:03.007613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:77840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:25.907 [2024-11-02 14:48:03.007625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.907 [2024-11-02 14:48:03.007639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:77848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:25.907 [2024-11-02 14:48:03.007652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.907 [2024-11-02 14:48:03.007667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:77856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:25.907 [2024-11-02 14:48:03.007679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.907 [2024-11-02 14:48:03.007694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:77864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:25.907 [2024-11-02 14:48:03.007706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.907 [2024-11-02 14:48:03.007721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:77872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:25.907 [2024-11-02 14:48:03.007734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.907 [2024-11-02 14:48:03.007748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:77880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:25.907 [2024-11-02 14:48:03.007761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.907 [2024-11-02 14:48:03.007775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:77888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:25.907 [2024-11-02 14:48:03.007788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.907 [2024-11-02 14:48:03.007803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:77896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:25.907 [2024-11-02 14:48:03.007815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.907 [2024-11-02 14:48:03.007829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:77904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:25.907 [2024-11-02 14:48:03.007842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.907 
[2024-11-02 14:48:03.007857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:77912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:25.907 [2024-11-02 14:48:03.007870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.907 [2024-11-02 14:48:03.007884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:77920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:25.907 [2024-11-02 14:48:03.007896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.907 [2024-11-02 14:48:03.007910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:77928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:25.907 [2024-11-02 14:48:03.007927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.907 [2024-11-02 14:48:03.007942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:77936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:25.907 [2024-11-02 14:48:03.007955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.907 [2024-11-02 14:48:03.007969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:77944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:25.907 [2024-11-02 14:48:03.007981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.907 [2024-11-02 14:48:03.008011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:77952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:25.907 [2024-11-02 14:48:03.008025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.907 [2024-11-02 14:48:03.008041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:77960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:25.907 [2024-11-02 14:48:03.008054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.907 [2024-11-02 14:48:03.008069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:77968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:25.907 [2024-11-02 14:48:03.008082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.907 [2024-11-02 14:48:03.008097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:77976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:25.907 [2024-11-02 14:48:03.008110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.907 [2024-11-02 14:48:03.008130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:77984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:25.907 [2024-11-02 14:48:03.008143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.907 [2024-11-02 14:48:03.008158] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:77992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:25.907 [2024-11-02 14:48:03.008172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.907 [2024-11-02 14:48:03.008186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:78000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:25.907 [2024-11-02 14:48:03.008199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.907 [2024-11-02 14:48:03.008214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:78008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:25.907 [2024-11-02 14:48:03.008227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.907 [2024-11-02 14:48:03.008242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:78016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:25.907 [2024-11-02 14:48:03.008261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.907 [2024-11-02 14:48:03.008278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:78024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:25.907 [2024-11-02 14:48:03.008292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.907 [2024-11-02 14:48:03.008315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:78032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:25.907 [2024-11-02 14:48:03.008329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.908 [2024-11-02 14:48:03.008346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:78040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:25.908 [2024-11-02 14:48:03.008360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.908 [2024-11-02 14:48:03.008374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:78048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:25.908 [2024-11-02 14:48:03.008388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.908 [2024-11-02 14:48:03.008402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:78056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:25.908 [2024-11-02 14:48:03.008416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.908 [2024-11-02 14:48:03.008431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:78064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:25.908 [2024-11-02 14:48:03.008444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.908 [2024-11-02 14:48:03.008458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:29 nsid:1 lba:78072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:25.908 [2024-11-02 14:48:03.008471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.908 [2024-11-02 14:48:03.008486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:78080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:25.908 [2024-11-02 14:48:03.008499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.908 [2024-11-02 14:48:03.008514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:78088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:25.908 [2024-11-02 14:48:03.008528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.908 [2024-11-02 14:48:03.008543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:78096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:25.908 [2024-11-02 14:48:03.008566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.908 [2024-11-02 14:48:03.008580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:78104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:25.908 [2024-11-02 14:48:03.008594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.908 [2024-11-02 14:48:03.008608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:25.908 [2024-11-02 14:48:03.008622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.908 [2024-11-02 14:48:03.008636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:78120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:25.908 [2024-11-02 14:48:03.008650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.908 [2024-11-02 14:48:03.008664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:78128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:25.908 [2024-11-02 14:48:03.008681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.908 [2024-11-02 14:48:03.008696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:78136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:25.908 [2024-11-02 14:48:03.008709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.908 [2024-11-02 14:48:03.008724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:78144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:25.908 [2024-11-02 14:48:03.008737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.908 [2024-11-02 14:48:03.008752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:78152 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:32:25.908 [2024-11-02 14:48:03.008765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.908 [2024-11-02 14:48:03.008779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:78160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:25.908 [2024-11-02 14:48:03.008793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.908 [2024-11-02 14:48:03.008808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:78168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:25.908 [2024-11-02 14:48:03.008821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.908 [2024-11-02 14:48:03.008835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:78176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:25.908 [2024-11-02 14:48:03.008849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.908 [2024-11-02 14:48:03.008863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:78184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:25.908 [2024-11-02 14:48:03.008877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.908 [2024-11-02 14:48:03.008891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:78192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:25.908 [2024-11-02 14:48:03.008904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.908 [2024-11-02 14:48:03.008919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:78200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:25.908 [2024-11-02 14:48:03.008932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.908 [2024-11-02 14:48:03.008947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:78208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:25.908 [2024-11-02 14:48:03.008959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.908 [2024-11-02 14:48:03.008997] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:25.908 [2024-11-02 14:48:03.009015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78216 len:8 PRP1 0x0 PRP2 0x0 00:32:25.908 [2024-11-02 14:48:03.009028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.908 [2024-11-02 14:48:03.009045] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:25.908 [2024-11-02 14:48:03.009057] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:25.908 [2024-11-02 14:48:03.009072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78224 len:8 PRP1 0x0 PRP2 0x0 00:32:25.908 [2024-11-02 
14:48:03.009085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.908 [2024-11-02 14:48:03.009098] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:25.908 [2024-11-02 14:48:03.009109] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:25.908 [2024-11-02 14:48:03.009120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78232 len:8 PRP1 0x0 PRP2 0x0 00:32:25.908 [2024-11-02 14:48:03.009131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.908 [2024-11-02 14:48:03.009144] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:25.908 [2024-11-02 14:48:03.009155] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:25.908 [2024-11-02 14:48:03.009166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78240 len:8 PRP1 0x0 PRP2 0x0 00:32:25.908 [2024-11-02 14:48:03.009178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.908 [2024-11-02 14:48:03.009191] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:25.908 [2024-11-02 14:48:03.009201] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:25.908 [2024-11-02 14:48:03.009212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78248 len:8 PRP1 0x0 PRP2 0x0 00:32:25.908 [2024-11-02 14:48:03.009224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.908 [2024-11-02 14:48:03.009237] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:25.908 [2024-11-02 14:48:03.009247] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:25.908 [2024-11-02 14:48:03.009267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78256 len:8 PRP1 0x0 PRP2 0x0 00:32:25.908 [2024-11-02 14:48:03.009281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.908 [2024-11-02 14:48:03.009294] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:25.908 [2024-11-02 14:48:03.009305] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:25.908 [2024-11-02 14:48:03.009316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78264 len:8 PRP1 0x0 PRP2 0x0 00:32:25.908 [2024-11-02 14:48:03.009328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.908 [2024-11-02 14:48:03.009341] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:25.908 [2024-11-02 14:48:03.009351] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:25.908 [2024-11-02 14:48:03.009362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78272 len:8 PRP1 0x0 PRP2 0x0 00:32:25.908 [2024-11-02 14:48:03.009374] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.908 [2024-11-02 14:48:03.009386] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:25.908 [2024-11-02 14:48:03.009397] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:25.908 [2024-11-02 14:48:03.009408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78280 len:8 PRP1 0x0 PRP2 0x0 00:32:25.908 [2024-11-02 14:48:03.009420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.908 [2024-11-02 14:48:03.009433] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:25.908 [2024-11-02 14:48:03.009447] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:25.908 [2024-11-02 14:48:03.009458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78288 len:8 PRP1 0x0 PRP2 0x0 00:32:25.908 [2024-11-02 14:48:03.009470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.908 [2024-11-02 14:48:03.009482] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:25.908 [2024-11-02 14:48:03.009493] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:25.908 [2024-11-02 14:48:03.009503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78296 len:8 PRP1 0x0 PRP2 0x0 00:32:25.909 [2024-11-02 14:48:03.009515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.909 [2024-11-02 14:48:03.009528] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:25.909 [2024-11-02 14:48:03.009538] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:25.909 [2024-11-02 14:48:03.009549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78304 len:8 PRP1 0x0 PRP2 0x0 00:32:25.909 [2024-11-02 14:48:03.009566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.909 [2024-11-02 14:48:03.009579] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:25.909 [2024-11-02 14:48:03.009589] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:25.909 [2024-11-02 14:48:03.009600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78312 len:8 PRP1 0x0 PRP2 0x0 00:32:25.909 [2024-11-02 14:48:03.009612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.909 [2024-11-02 14:48:03.009631] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:25.909 [2024-11-02 14:48:03.009641] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:25.909 [2024-11-02 14:48:03.009652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78320 len:8 PRP1 0x0 PRP2 0x0 00:32:25.909 [2024-11-02 14:48:03.009664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.909 [2024-11-02 14:48:03.009677] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:25.909 [2024-11-02 14:48:03.009687] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:25.909 [2024-11-02 14:48:03.009698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78328 len:8 PRP1 0x0 PRP2 0x0 00:32:25.909 [2024-11-02 14:48:03.009710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.909 [2024-11-02 14:48:03.009723] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:25.909 [2024-11-02 14:48:03.009733] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:25.909 [2024-11-02 14:48:03.009744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78336 len:8 PRP1 0x0 PRP2 0x0 00:32:25.909 [2024-11-02 14:48:03.009756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.909 [2024-11-02 14:48:03.009768] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:25.909 [2024-11-02 14:48:03.009779] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:25.909 [2024-11-02 14:48:03.009790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78344 len:8 PRP1 0x0 PRP2 0x0 00:32:25.909 [2024-11-02 14:48:03.009802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.909 [2024-11-02 14:48:03.009818] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:25.909 [2024-11-02 14:48:03.009830] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:25.909 [2024-11-02 14:48:03.009841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78352 len:8 PRP1 0x0 PRP2 0x0 00:32:25.909 [2024-11-02 14:48:03.009854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.909 [2024-11-02 14:48:03.009867] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:25.909 [2024-11-02 14:48:03.009877] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:25.909 [2024-11-02 14:48:03.009888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78360 len:8 PRP1 0x0 PRP2 0x0 00:32:25.909 [2024-11-02 14:48:03.009901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.909 [2024-11-02 14:48:03.009914] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:25.909 [2024-11-02 14:48:03.009924] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:25.909 [2024-11-02 14:48:03.009935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78368 len:8 PRP1 0x0 PRP2 0x0 00:32:25.909 [2024-11-02 14:48:03.009947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:32:25.909 [2024-11-02 14:48:03.009960] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:25.909 [2024-11-02 14:48:03.009971] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:25.909 [2024-11-02 14:48:03.009982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78376 len:8 PRP1 0x0 PRP2 0x0 00:32:25.909 [2024-11-02 14:48:03.009994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.909 [2024-11-02 14:48:03.010006] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:25.909 [2024-11-02 14:48:03.010017] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:25.909 [2024-11-02 14:48:03.010028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78384 len:8 PRP1 0x0 PRP2 0x0 00:32:25.909 [2024-11-02 14:48:03.010040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.909 [2024-11-02 14:48:03.010053] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:25.909 [2024-11-02 14:48:03.010063] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:25.909 [2024-11-02 14:48:03.010074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78392 len:8 PRP1 0x0 PRP2 0x0 00:32:25.909 [2024-11-02 14:48:03.010086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.909 [2024-11-02 14:48:03.010099] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:25.909 [2024-11-02 14:48:03.010109] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:25.909 [2024-11-02 14:48:03.010120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78400 len:8 PRP1 0x0 PRP2 0x0 00:32:25.909 [2024-11-02 14:48:03.010132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.909 [2024-11-02 14:48:03.010145] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:25.909 [2024-11-02 14:48:03.010156] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:25.909 [2024-11-02 14:48:03.010167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78408 len:8 PRP1 0x0 PRP2 0x0 00:32:25.909 [2024-11-02 14:48:03.010183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.909 [2024-11-02 14:48:03.010196] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:25.909 [2024-11-02 14:48:03.010207] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:25.909 [2024-11-02 14:48:03.010218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78416 len:8 PRP1 0x0 PRP2 0x0 00:32:25.909 [2024-11-02 14:48:03.010230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.909 [2024-11-02 14:48:03.010253] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:25.909 [2024-11-02 14:48:03.010272] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:25.909 [2024-11-02 14:48:03.010284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78424 len:8 PRP1 0x0 PRP2 0x0 00:32:25.909 [2024-11-02 14:48:03.010296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.909 [2024-11-02 14:48:03.010309] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:25.909 [2024-11-02 14:48:03.010320] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:25.909 [2024-11-02 14:48:03.010331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78432 len:8 PRP1 0x0 PRP2 0x0 00:32:25.909 [2024-11-02 14:48:03.010343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.909 [2024-11-02 14:48:03.010356] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:25.909 [2024-11-02 14:48:03.010367] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:25.909 [2024-11-02 14:48:03.010378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78440 len:8 PRP1 0x0 PRP2 0x0 00:32:25.909 [2024-11-02 14:48:03.010390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.909 [2024-11-02 14:48:03.010402] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:25.909 [2024-11-02 14:48:03.010413] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:25.909 [2024-11-02 14:48:03.010424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78448 len:8 PRP1 0x0 PRP2 0x0 00:32:25.909 [2024-11-02 14:48:03.010436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.909 [2024-11-02 14:48:03.010449] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:25.909 [2024-11-02 14:48:03.010459] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:25.909 [2024-11-02 14:48:03.010470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78456 len:8 PRP1 0x0 PRP2 0x0 00:32:25.909 [2024-11-02 14:48:03.010482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.909 [2024-11-02 14:48:03.010540] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x70b560 was disconnected and freed. reset controller. 
00:32:25.909 [2024-11-02 14:48:03.010567] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:32:25.909 [2024-11-02 14:48:03.010601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:25.909 [2024-11-02 14:48:03.010619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.909 [2024-11-02 14:48:03.010634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:25.909 [2024-11-02 14:48:03.010652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.909 [2024-11-02 14:48:03.010666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:25.909 [2024-11-02 14:48:03.010678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.909 [2024-11-02 14:48:03.010691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:25.909 [2024-11-02 14:48:03.010704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.909 [2024-11-02 14:48:03.010716] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.910 [2024-11-02 14:48:03.010776] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6eaf90 (9): Bad file descriptor 00:32:25.910 [2024-11-02 14:48:03.014013] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.910 [2024-11-02 14:48:03.049917] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:32:25.910 8218.00 IOPS, 32.10 MiB/s [2024-11-02T13:48:17.965Z] 8278.00 IOPS, 32.34 MiB/s [2024-11-02T13:48:17.965Z] 8333.00 IOPS, 32.55 MiB/s [2024-11-02T13:48:17.965Z] [2024-11-02 14:48:06.719139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:73448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.910 [2024-11-02 14:48:06.719211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.910 [2024-11-02 14:48:06.719272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:73456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.910 [2024-11-02 14:48:06.719305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.910 [2024-11-02 14:48:06.719322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:73464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.910 [2024-11-02 14:48:06.719336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.910 [2024-11-02 14:48:06.719352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:73472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.910 [2024-11-02 14:48:06.719366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.910 [2024-11-02 14:48:06.719381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:73480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.910 [2024-11-02 14:48:06.719395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.910 [2024-11-02 14:48:06.719410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:73488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.910 [2024-11-02 14:48:06.719424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.910 [2024-11-02 14:48:06.719439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:73496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.910 [2024-11-02 14:48:06.719453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.910 [2024-11-02 14:48:06.719468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:73504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.910 [2024-11-02 14:48:06.719482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.910 [2024-11-02 14:48:06.719508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:73512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.910 [2024-11-02 14:48:06.719523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.910 [2024-11-02 14:48:06.719538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:73520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.910 [2024-11-02 14:48:06.719552] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.910 [2024-11-02 14:48:06.719574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:73528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.910 [2024-11-02 14:48:06.719587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.910 [2024-11-02 14:48:06.719617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:73536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.910 [2024-11-02 14:48:06.719631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.910 [2024-11-02 14:48:06.719646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:73544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.910 [2024-11-02 14:48:06.719658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.910 [2024-11-02 14:48:06.719673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:73552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.910 [2024-11-02 14:48:06.719686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.910 [2024-11-02 14:48:06.719701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:73560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.910 [2024-11-02 14:48:06.719714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.910 [2024-11-02 14:48:06.719728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:73568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.910 [2024-11-02 14:48:06.719741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.910 [2024-11-02 14:48:06.719756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:73576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.910 [2024-11-02 14:48:06.719768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.910 [2024-11-02 14:48:06.719783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:73584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.910 [2024-11-02 14:48:06.719796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.910 [2024-11-02 14:48:06.719811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:73592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.910 [2024-11-02 14:48:06.719825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.910 [2024-11-02 14:48:06.719839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:73600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.910 [2024-11-02 14:48:06.719852] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:25.910 [2024-11-02 14:48:06.719866 - 14:48:06.722595] [... repeated nvme_qpair.c 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion *NOTICE* pairs: every outstanding I/O on qid:1 (READ sqid:1 lba:73608-73968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0; WRITE sqid:1 lba:73992-74368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000) completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:32:25.913 [2024-11-02 14:48:06.722635 - 14:48:06.723308] [... repeated nvme_qpair.c 579:nvme_qpair_abort_queued_reqs *ERROR*: aborting queued i/o / 558:nvme_qpair_manual_complete_request *NOTICE*: Command completed manually: for queued WRITE lba:74376-74464 and READ lba:73976-73984 (cid:0, PRP1 0x0 PRP2 0x0), each ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:32:25.913 [2024-11-02 14:48:06.723363] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x70d6d0 was disconnected and freed. reset controller.
00:32:25.913 [2024-11-02 14:48:06.723381] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:32:25.913 [2024-11-02 14:48:06.723415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:25.913 [2024-11-02 14:48:06.723433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.913 [2024-11-02 14:48:06.723448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:25.913 [2024-11-02 14:48:06.723461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.913 [2024-11-02 14:48:06.723474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:25.913 [2024-11-02 14:48:06.723487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.913 [2024-11-02 14:48:06.723500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:25.913 [2024-11-02 14:48:06.723513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.913 [2024-11-02 14:48:06.723525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.913 [2024-11-02 14:48:06.723588] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6eaf90 (9): Bad file descriptor 00:32:25.913 [2024-11-02 14:48:06.726789] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.913 [2024-11-02 14:48:06.802289] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:32:25.913 8215.00 IOPS, 32.09 MiB/s [2024-11-02T13:48:17.968Z] 8242.33 IOPS, 32.20 MiB/s [2024-11-02T13:48:17.968Z] 8285.29 IOPS, 32.36 MiB/s [2024-11-02T13:48:17.968Z] 8311.75 IOPS, 32.47 MiB/s [2024-11-02T13:48:17.968Z] 8314.44 IOPS, 32.48 MiB/s [2024-11-02T13:48:17.968Z]
00:32:25.913 [2024-11-02 14:48:11.287722 - 14:48:11.290185] [... repeated nvme_qpair.c 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion *NOTICE* pairs on qid:1 (WRITE sqid:1 lba:6432-6464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000; READ sqid:1 lba:5448-6040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:32:25.915 [2024-11-02 14:48:11.290200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:6048 len:8
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.915 [2024-11-02 14:48:11.290213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.915 [2024-11-02 14:48:11.290228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.915 [2024-11-02 14:48:11.290241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.916 [2024-11-02 14:48:11.290262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:6064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.916 [2024-11-02 14:48:11.290294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.916 [2024-11-02 14:48:11.290310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:6072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.916 [2024-11-02 14:48:11.290323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.916 [2024-11-02 14:48:11.290338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:6080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.916 [2024-11-02 14:48:11.290352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.916 [2024-11-02 14:48:11.290367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:6088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.916 [2024-11-02 14:48:11.290380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.916 [2024-11-02 14:48:11.290395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:6096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.916 [2024-11-02 14:48:11.290408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.916 [2024-11-02 14:48:11.290423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:6104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.916 [2024-11-02 14:48:11.290438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.916 [2024-11-02 14:48:11.290453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:6112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.916 [2024-11-02 14:48:11.290466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.916 [2024-11-02 14:48:11.290481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:6120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.916 [2024-11-02 14:48:11.290494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.916 [2024-11-02 14:48:11.290510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:6128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.916 
[2024-11-02 14:48:11.290523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.916 [2024-11-02 14:48:11.290538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:6136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.916 [2024-11-02 14:48:11.290552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.916 [2024-11-02 14:48:11.290571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:6144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.916 [2024-11-02 14:48:11.290585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.916 [2024-11-02 14:48:11.290601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:6152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.916 [2024-11-02 14:48:11.290614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.916 [2024-11-02 14:48:11.290629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:6160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.916 [2024-11-02 14:48:11.290642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.916 [2024-11-02 14:48:11.290657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:6168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.916 [2024-11-02 14:48:11.290671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.916 [2024-11-02 14:48:11.290686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.916 [2024-11-02 14:48:11.290700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.916 [2024-11-02 14:48:11.290715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:6184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.916 [2024-11-02 14:48:11.290729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.916 [2024-11-02 14:48:11.290744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:6192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.916 [2024-11-02 14:48:11.290757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.916 [2024-11-02 14:48:11.290773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:6200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.916 [2024-11-02 14:48:11.290786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.916 [2024-11-02 14:48:11.290801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:6208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.916 [2024-11-02 14:48:11.290815] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.916 [2024-11-02 14:48:11.290830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:6216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.916 [2024-11-02 14:48:11.290843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.916 [2024-11-02 14:48:11.290857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:6224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.916 [2024-11-02 14:48:11.290870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.916 [2024-11-02 14:48:11.290885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:6232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.916 [2024-11-02 14:48:11.290899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.916 [2024-11-02 14:48:11.290913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:6240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.916 [2024-11-02 14:48:11.290931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.916 [2024-11-02 14:48:11.290946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:6248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.916 [2024-11-02 14:48:11.290960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.916 [2024-11-02 14:48:11.290975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:6256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.916 [2024-11-02 14:48:11.290988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.916 [2024-11-02 14:48:11.291002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:6264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.916 [2024-11-02 14:48:11.291016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.916 [2024-11-02 14:48:11.291031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:6272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.916 [2024-11-02 14:48:11.291044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.916 [2024-11-02 14:48:11.291059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.916 [2024-11-02 14:48:11.291072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.916 [2024-11-02 14:48:11.291087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:6288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.916 [2024-11-02 14:48:11.291100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.916 [2024-11-02 14:48:11.291115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:6296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.916 [2024-11-02 14:48:11.291129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.916 [2024-11-02 14:48:11.291144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:6304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.916 [2024-11-02 14:48:11.291157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.916 [2024-11-02 14:48:11.291172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:6312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.916 [2024-11-02 14:48:11.291186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.916 [2024-11-02 14:48:11.291202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:6320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.916 [2024-11-02 14:48:11.291216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.916 [2024-11-02 14:48:11.291231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:6328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.916 [2024-11-02 14:48:11.291244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.916 [2024-11-02 14:48:11.291266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.916 [2024-11-02 14:48:11.291282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.916 [2024-11-02 14:48:11.291302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:6344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.916 [2024-11-02 14:48:11.291317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.916 [2024-11-02 14:48:11.291332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.916 [2024-11-02 14:48:11.291345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.916 [2024-11-02 14:48:11.291361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:6360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.916 [2024-11-02 14:48:11.291374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.916 [2024-11-02 14:48:11.291389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.916 [2024-11-02 14:48:11.291403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:32:25.916 [2024-11-02 14:48:11.291418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:6376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.916 [2024-11-02 14:48:11.291432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.917 [2024-11-02 14:48:11.291447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:6384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.917 [2024-11-02 14:48:11.291460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.917 [2024-11-02 14:48:11.291476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.917 [2024-11-02 14:48:11.291489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.917 [2024-11-02 14:48:11.291505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:6400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.917 [2024-11-02 14:48:11.291519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.917 [2024-11-02 14:48:11.291534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:6408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.917 [2024-11-02 14:48:11.291547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.917 [2024-11-02 14:48:11.291562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:6416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.917 [2024-11-02 14:48:11.291576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.917 [2024-11-02 14:48:11.291591] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70e0f0 is same with the state(6) to be set 00:32:25.917 [2024-11-02 14:48:11.291608] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:25.917 [2024-11-02 14:48:11.291620] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:25.917 [2024-11-02 14:48:11.291631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6424 len:8 PRP1 0x0 PRP2 0x0 00:32:25.917 [2024-11-02 14:48:11.291644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.917 [2024-11-02 14:48:11.291708] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x70e0f0 was disconnected and freed. reset controller. 
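The condensed block above can be summarized directly from a saved copy of this console output; a minimal sketch, assuming the log was written to ./console.log (a hypothetical path for illustration):
    LOG=./console.log   # hypothetical path: wherever this console output was saved
    # count how many queued I/Os were completed as ABORTED - SQ DELETION
    echo "aborted completions: $(grep -c 'ABORTED - SQ DELETION' "$LOG")"
    # show the lowest and highest LBA touched by the aborted commands
    grep -o 'lba:[0-9]*' "$LOG" | cut -d: -f2 | sort -n | sed -n '1s/^/lowest aborted lba: /p; $s/^/highest aborted lba: /p'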
00:32:25.917 [2024-11-02 14:48:11.291726] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:32:25.917 [2024-11-02 14:48:11.291764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:25.917 [2024-11-02 14:48:11.291783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.917 [2024-11-02 14:48:11.291798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:25.917 [2024-11-02 14:48:11.291811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.917 [2024-11-02 14:48:11.291824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:25.917 [2024-11-02 14:48:11.291837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.917 [2024-11-02 14:48:11.291850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:25.917 [2024-11-02 14:48:11.291862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.917 [2024-11-02 14:48:11.291875] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.917 [2024-11-02 14:48:11.291915] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6eaf90 (9): Bad file descriptor 00:32:25.917 [2024-11-02 14:48:11.295171] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.917 [2024-11-02 14:48:11.484490] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
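After a failover and reset like the one logged above, the surviving controller can be checked over the bdevperf RPC socket with the same call this test uses later; a sketch, with the rpc.py path, socket path, and controller name NVMe0 taken from this run:
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SOCK=/var/tmp/bdevperf.sock
    # list attached NVMe-oF controllers and confirm NVMe0 is still present
    "$RPC" -s "$SOCK" bdev_nvme_get_controllers | grep -q NVMe0 && echo "NVMe0 still attached after failover"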
00:32:25.917 8160.30 IOPS, 31.88 MiB/s [2024-11-02T13:48:17.972Z] 8190.64 IOPS, 31.99 MiB/s [2024-11-02T13:48:17.972Z] 8206.17 IOPS, 32.06 MiB/s [2024-11-02T13:48:17.972Z] 8218.15 IOPS, 32.10 MiB/s [2024-11-02T13:48:17.972Z] 8231.86 IOPS, 32.16 MiB/s 00:32:25.917 Latency(us) 00:32:25.917 [2024-11-02T13:48:17.972Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:25.917 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:25.917 Verification LBA range: start 0x0 length 0x4000 00:32:25.917 NVMe0n1 : 15.00 8241.75 32.19 782.59 0.00 14156.82 831.34 15728.64 00:32:25.917 [2024-11-02T13:48:17.972Z] =================================================================================================================== 00:32:25.917 [2024-11-02T13:48:17.972Z] Total : 8241.75 32.19 782.59 0.00 14156.82 831.34 15728.64 00:32:25.917 Received shutdown signal, test time was about 15.000000 seconds 00:32:25.917 00:32:25.917 Latency(us) 00:32:25.917 [2024-11-02T13:48:17.972Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:25.917 [2024-11-02T13:48:17.972Z] =================================================================================================================== 00:32:25.917 [2024-11-02T13:48:17.972Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:25.917 14:48:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:32:25.917 14:48:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:32:25.917 14:48:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:32:25.917 14:48:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1495053 00:32:25.917 14:48:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:32:25.917 14:48:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1495053 /var/tmp/bdevperf.sock 00:32:25.917 14:48:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1495053 ']' 00:32:25.917 14:48:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:25.917 14:48:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:25.917 14:48:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:25.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
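The pass/fail check traced above (host/failover.sh@65-67) can be reproduced standalone; a sketch, assuming the grep target is the try.txt transcript that this run cats further below:
    TRY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
    # the previous bdevperf run must have performed exactly three successful controller resets
    count=$(grep -c 'Resetting controller successful' "$TRY")
    if (( count != 3 )); then
        echo "expected 3 successful controller resets, saw $count" >&2
        exit 1
    fi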
00:32:25.917 14:48:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:25.917 14:48:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:25.917 14:48:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:25.917 14:48:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:32:25.917 14:48:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:25.917 [2024-11-02 14:48:17.687921] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:25.917 14:48:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:26.175 [2024-11-02 14:48:17.956665] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:32:26.176 14:48:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:26.433 NVMe0n1 00:32:26.433 14:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:26.999 00:32:26.999 14:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:27.257 00:32:27.257 14:48:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:27.257 14:48:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:32:27.515 14:48:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:27.777 14:48:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:32:31.133 14:48:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:31.133 14:48:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:32:31.133 14:48:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1495729 00:32:31.133 14:48:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:31.133 14:48:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1495729 00:32:32.510 { 00:32:32.510 "results": [ 00:32:32.510 { 00:32:32.510 "job": "NVMe0n1", 00:32:32.510 "core_mask": "0x1", 00:32:32.510 "workload": "verify", 
00:32:32.510 "status": "finished", 00:32:32.510 "verify_range": { 00:32:32.510 "start": 0, 00:32:32.510 "length": 16384 00:32:32.510 }, 00:32:32.510 "queue_depth": 128, 00:32:32.510 "io_size": 4096, 00:32:32.510 "runtime": 1.007717, 00:32:32.510 "iops": 7363.17835265258, 00:32:32.510 "mibps": 28.76241544004914, 00:32:32.510 "io_failed": 0, 00:32:32.510 "io_timeout": 0, 00:32:32.510 "avg_latency_us": 17295.231399420987, 00:32:32.510 "min_latency_us": 3398.162962962963, 00:32:32.510 "max_latency_us": 16505.36296296296 00:32:32.510 } 00:32:32.510 ], 00:32:32.510 "core_count": 1 00:32:32.510 } 00:32:32.510 14:48:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:32.510 [2024-11-02 14:48:17.185121] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:32:32.510 [2024-11-02 14:48:17.185215] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1495053 ] 00:32:32.510 [2024-11-02 14:48:17.244806] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:32.510 [2024-11-02 14:48:17.328440] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:32:32.510 [2024-11-02 14:48:19.714903] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:32:32.510 [2024-11-02 14:48:19.714994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:32.510 [2024-11-02 14:48:19.715017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:32.510 [2024-11-02 14:48:19.715049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:32.510 [2024-11-02 14:48:19.715063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:32.510 [2024-11-02 14:48:19.715077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:32.510 [2024-11-02 14:48:19.715091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:32.510 [2024-11-02 14:48:19.715105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:32.510 [2024-11-02 14:48:19.715118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:32.510 [2024-11-02 14:48:19.715132] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:32.510 [2024-11-02 14:48:19.715176] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:32.510 [2024-11-02 14:48:19.715213] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42f90 (9): Bad file descriptor 00:32:32.510 [2024-11-02 14:48:19.723593] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:32:32.510 Running I/O for 1 seconds... 
00:32:32.510 7292.00 IOPS, 28.48 MiB/s 00:32:32.510 Latency(us) 00:32:32.510 [2024-11-02T13:48:24.565Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:32.510 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:32.510 Verification LBA range: start 0x0 length 0x4000 00:32:32.510 NVMe0n1 : 1.01 7363.18 28.76 0.00 0.00 17295.23 3398.16 16505.36 00:32:32.510 [2024-11-02T13:48:24.565Z] =================================================================================================================== 00:32:32.510 [2024-11-02T13:48:24.565Z] Total : 7363.18 28.76 0.00 0.00 17295.23 3398.16 16505.36 00:32:32.510 14:48:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:32.510 14:48:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:32:32.510 14:48:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:32.768 14:48:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:32.768 14:48:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:32:33.025 14:48:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:33.283 14:48:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:32:36.570 14:48:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:36.570 14:48:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:32:36.570 14:48:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1495053 00:32:36.570 14:48:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1495053 ']' 00:32:36.570 14:48:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1495053 00:32:36.570 14:48:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:32:36.570 14:48:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:36.570 14:48:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1495053 00:32:36.828 14:48:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:36.828 14:48:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:36.828 14:48:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1495053' 00:32:36.828 killing process with pid 1495053 00:32:36.828 14:48:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1495053 00:32:36.828 14:48:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1495053 00:32:36.828 14:48:28 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@110 -- # sync 00:32:36.828 14:48:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:37.086 14:48:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:32:37.086 14:48:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:37.086 14:48:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:32:37.086 14:48:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # nvmfcleanup 00:32:37.086 14:48:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:32:37.086 14:48:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:37.086 14:48:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:32:37.086 14:48:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:37.086 14:48:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:37.086 rmmod nvme_tcp 00:32:37.346 rmmod nvme_fabrics 00:32:37.346 rmmod nvme_keyring 00:32:37.346 14:48:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:37.346 14:48:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:32:37.346 14:48:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:32:37.346 14:48:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@513 -- # '[' -n 1492182 ']' 00:32:37.346 14:48:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # killprocess 1492182 00:32:37.346 14:48:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1492182 ']' 00:32:37.346 14:48:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1492182 00:32:37.346 14:48:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:32:37.346 14:48:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:37.346 14:48:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1492182 00:32:37.346 14:48:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:37.346 14:48:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:37.346 14:48:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1492182' 00:32:37.346 killing process with pid 1492182 00:32:37.346 14:48:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1492182 00:32:37.346 14:48:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1492182 00:32:37.606 14:48:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:32:37.606 14:48:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:32:37.606 14:48:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:32:37.606 14:48:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:32:37.607 14:48:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # iptables-save 00:32:37.607 14:48:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:32:37.607 14:48:29 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # iptables-restore 00:32:37.607 14:48:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:37.607 14:48:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:37.607 14:48:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:37.607 14:48:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:37.607 14:48:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:39.511 14:48:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:39.511 00:32:39.511 real 0m35.530s 00:32:39.511 user 2m5.749s 00:32:39.511 sys 0m5.843s 00:32:39.511 14:48:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:39.511 14:48:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:39.511 ************************************ 00:32:39.511 END TEST nvmf_failover 00:32:39.511 ************************************ 00:32:39.511 14:48:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:39.511 14:48:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:39.511 14:48:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:39.511 14:48:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.771 ************************************ 00:32:39.771 START TEST nvmf_host_discovery 00:32:39.771 ************************************ 00:32:39.771 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:39.771 * Looking for test storage... 
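For reference, the teardown traced above boils down to a short cleanup sequence; a sketch of the steps shown in the nvmftestfini trace (interface name cvl_0_1 is specific to this host, and the full _remove_spdk_ns helper is not reproduced here):
    # unload the NVMe-oF initiator modules used by the test
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # reload the firewall ruleset with any SPDK_NVMF rules filtered out
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # clear the IPv4 address from the secondary test interface
    ip -4 addr flush cvl_0_1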
00:32:39.771 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:39.771 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:32:39.771 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:32:39.771 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:32:39.771 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:32:39.771 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:39.771 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:39.771 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:39.771 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:32:39.771 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:32:39.771 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:32:39.771 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:32:39.771 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:32:39.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:39.772 --rc genhtml_branch_coverage=1 00:32:39.772 --rc genhtml_function_coverage=1 00:32:39.772 --rc genhtml_legend=1 00:32:39.772 --rc geninfo_all_blocks=1 00:32:39.772 --rc geninfo_unexecuted_blocks=1 00:32:39.772 00:32:39.772 ' 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:39.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:39.772 --rc genhtml_branch_coverage=1 00:32:39.772 --rc genhtml_function_coverage=1 00:32:39.772 --rc genhtml_legend=1 00:32:39.772 --rc geninfo_all_blocks=1 00:32:39.772 --rc geninfo_unexecuted_blocks=1 00:32:39.772 00:32:39.772 ' 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:39.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:39.772 --rc genhtml_branch_coverage=1 00:32:39.772 --rc genhtml_function_coverage=1 00:32:39.772 --rc genhtml_legend=1 00:32:39.772 --rc geninfo_all_blocks=1 00:32:39.772 --rc geninfo_unexecuted_blocks=1 00:32:39.772 00:32:39.772 ' 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:39.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:39.772 --rc genhtml_branch_coverage=1 00:32:39.772 --rc genhtml_function_coverage=1 00:32:39.772 --rc genhtml_legend=1 00:32:39.772 --rc geninfo_all_blocks=1 00:32:39.772 --rc geninfo_unexecuted_blocks=1 00:32:39.772 00:32:39.772 ' 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:32:39.772 14:48:31 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:39.772 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@472 -- # prepare_net_devs 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@434 -- # local -g is_hw=no 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@436 -- # remove_spdk_ns 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:32:39.772 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:32:39.773 14:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:42.308 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:42.308 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:32:42.308 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:42.308 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:42.308 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:42.308 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:42.308 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:42.308 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:32:42.308 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:42.308 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:32:42.308 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:32:42.308 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:32:42.308 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:32:42.308 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:32:42.308 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:32:42.308 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:42.308 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:42.308 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:42.308 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:42.308 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:42.309 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:42.309 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ up == up ]] 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:42.309 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ up == up ]] 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:42.309 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # is_hw=yes 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:42.309 14:48:33 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:42.309 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:42.309 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.348 ms 00:32:42.309 00:32:42.309 --- 10.0.0.2 ping statistics --- 00:32:42.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:42.309 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:42.309 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:42.309 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:32:42.309 00:32:42.309 --- 10.0.0.1 ping statistics --- 00:32:42.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:42.309 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # return 0 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@505 -- # nvmfpid=1498455 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@506 -- # waitforlisten 1498455 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 1498455 ']' 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:42.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:42.309 14:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:42.309 [2024-11-02 14:48:34.014347] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
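The trace above is the usual nvmftestinit/nvmf_tcp_init bring-up for a "phy" run: one port of the E810 NIC detected earlier (cvl_0_0) is moved into a private network namespace to act as the target, the peer port (cvl_0_1) stays in the default namespace as the initiator, an iptables rule opens TCP/4420 on the initiator-facing interface, and a ping in each direction confirms the data path before the target application is started inside the namespace. A minimal standalone sketch of that sequence, reusing the interface names, addresses and arguments from the trace (the nvmf_tgt path and backgrounding are illustrative):

    # target-side namespace: cvl_0_0 serves 10.0.0.2, the initiator keeps cvl_0_1 on 10.0.0.1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port and verify reachability in both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    modprobe nvme-tcp
    # launch the target inside the namespace with the same mask/flags as nvmfappstart above
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &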
00:32:42.310 [2024-11-02 14:48:34.014449] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:42.310 [2024-11-02 14:48:34.079458] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:42.310 [2024-11-02 14:48:34.169105] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:42.310 [2024-11-02 14:48:34.169168] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:42.310 [2024-11-02 14:48:34.169195] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:42.310 [2024-11-02 14:48:34.169207] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:42.310 [2024-11-02 14:48:34.169216] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:42.310 [2024-11-02 14:48:34.169253] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:32:42.310 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:42.310 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:32:42.310 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:32:42.310 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:42.310 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:42.310 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:42.310 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:42.310 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.310 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:42.310 [2024-11-02 14:48:34.320135] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:42.310 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.310 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:32:42.310 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.310 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:42.310 [2024-11-02 14:48:34.328390] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:32:42.310 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.310 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:32:42.310 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.310 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:42.310 null0 00:32:42.310 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.310 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:32:42.310 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.310 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:42.310 null1 00:32:42.310 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.310 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:32:42.310 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.310 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:42.310 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.310 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1498475 00:32:42.310 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:32:42.310 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1498475 /tmp/host.sock 00:32:42.310 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 1498475 ']' 00:32:42.310 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:32:42.310 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:42.310 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:32:42.310 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:32:42.310 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:42.310 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:42.568 [2024-11-02 14:48:34.403650] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
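Two SPDK applications are now involved: the target nvmf_tgt running inside cvl_0_0_ns_spdk (nvmfpid 1498455, serving 10.0.0.2) and a second nvmf_tgt acting as the discovery host, started with -m 0x1 -r /tmp/host.sock so its RPC server listens on /tmp/host.sock. The trace above already gave the target a TCP transport, a discovery listener on port 8009 and two 1000-block null bdevs; just below, the host is pointed at that discovery service. Roughly, the equivalent scripts/rpc.py invocations (the test's rpc_cmd wrapper forwards to the same RPCs; the rpc.py path is assumed here) are:

    # target side (default RPC socket): transport, discovery listener, backing null bdevs
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
    scripts/rpc.py bdev_null_create null0 1000 512
    scripts/rpc.py bdev_null_create null1 1000 512
    scripts/rpc.py bdev_wait_for_examine

    # host side (-s /tmp/host.sock): enable bdev_nvme logging and attach to the discovery service
    scripts/rpc.py -s /tmp/host.sock log_set_flag bdev_nvme
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test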
00:32:42.568 [2024-11-02 14:48:34.403730] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1498475 ] 00:32:42.568 [2024-11-02 14:48:34.464893] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:42.568 [2024-11-02 14:48:34.559664] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:32:42.827 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:42.827 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:32:42.827 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:42.827 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:32:42.827 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.827 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:42.827 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.827 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:32:42.827 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.827 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:42.827 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.827 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:32:42.827 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:32:42.827 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:42.827 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.827 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:42.827 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:42.827 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:42.827 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:42.827 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.827 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:32:42.827 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:32:42.827 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:42.827 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.827 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:42.827 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 
-- # sort 00:32:42.827 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:42.827 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:42.827 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.827 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:32:42.827 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:32:42.827 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.827 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:42.827 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.827 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:32:42.827 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:42.827 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.827 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:42.827 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:42.827 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:42.827 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:42.827 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.827 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:32:42.827 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:32:42.827 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:42.827 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.827 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:42.827 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:42.827 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:42.827 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:42.827 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.827 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:32:42.827 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:32:42.827 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.827 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:42.827 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.827 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:32:42.827 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd 
-s /tmp/host.sock bdev_nvme_get_controllers 00:32:42.827 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.827 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:42.827 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:42.827 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:42.827 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:42.827 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.085 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:32:43.085 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:32:43.085 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:43.085 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.085 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:43.085 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:43.085 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:43.085 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:43.086 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.086 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:32:43.086 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:43.086 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.086 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:43.086 [2024-11-02 14:48:34.941968] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:43.086 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.086 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:32:43.086 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:43.086 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.086 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:43.086 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:43.086 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:43.086 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:43.086 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.086 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:32:43.086 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:32:43.086 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:43.086 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.086 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:43.086 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:43.086 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:43.086 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:43.086 14:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.086 14:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:32:43.086 14:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:32:43.086 14:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:43.086 14:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:43.086 14:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:43.086 14:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:43.086 14:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:43.086 14:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:43.086 14:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:32:43.086 14:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:32:43.086 14:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.086 14:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:43.086 14:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:43.086 14:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.086 14:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:43.086 14:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:32:43.086 14:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:32:43.086 14:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:43.086 14:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:32:43.086 14:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.086 14:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:43.086 14:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.086 14:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:43.086 14:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:43.086 14:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:43.086 14:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:43.086 14:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:43.086 14:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:32:43.086 14:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:43.086 14:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.086 14:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:43.086 14:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:43.086 14:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:43.086 14:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:43.086 14:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.086 14:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:32:43.086 14:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:32:44.022 [2024-11-02 14:48:35.739194] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:44.022 [2024-11-02 14:48:35.739229] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:44.022 [2024-11-02 14:48:35.739265] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:44.022 [2024-11-02 14:48:35.866694] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:32:44.022 [2024-11-02 14:48:36.009671] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:44.022 [2024-11-02 14:48:36.009698] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 
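The remainder of the trace is the test's poll-and-verify pattern: waitforcondition (from common/autotest_common.sh) retries a shell condition up to 10 times with one-second sleeps, and the conditions are built from small helpers (get_subsystem_names, get_bdev_list, get_subsystem_paths, get_notification_count) that query the host's RPC socket and normalize the output with jq, sort and xargs. A condensed sketch of what those helpers evaluate, using the same RPC methods and jq filters seen in the trace (scripts/rpc.py stands in for the rpc_cmd wrapper):

    # controllers attached by the discovery service, e.g. "nvme0"
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    # bdevs exposed through those controllers, e.g. "nvme0n1 nvme0n2"
    scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    # listener ports (paths) currently seen for one controller, e.g. "4420 4421"
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    # number of bdev notifications newer than a given notify_id
    scripts/rpc.py -s /tmp/host.sock notify_get_notifications -i 0 | jq '. | length'

Each time the target gains a namespace or listener (null0 on 4420, then null1 and the 4421 listener, then removal of 4420), these helpers are polled until the host's view of controllers, bdevs and paths matches the expected set and the notification count has advanced.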
00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:44.281 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.541 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:44.541 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:44.541 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:32:44.541 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:32:44.541 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:44.541 14:48:36 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:44.541 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:44.541 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:44.541 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:44.541 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:32:44.541 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:32:44.541 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:44.541 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.541 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:44.541 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.541 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:32:44.541 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:44.541 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:32:44.541 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:44.541 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:32:44.541 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.541 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:44.541 [2024-11-02 14:48:36.394150] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:44.541 [2024-11-02 14:48:36.394791] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:44.541 [2024-11-02 14:48:36.394825] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:44.541 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.541 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:44.541 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:44.541 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:44.541 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:44.541 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:44.541 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:32:44.541 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_controllers 00:32:44.541 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:44.541 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.541 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:44.541 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:44.541 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:44.541 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.541 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:44.541 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:44.541 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:44.541 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:44.541 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:44.541 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:44.541 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:44.541 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:32:44.541 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:44.541 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:44.541 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.541 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:44.541 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:44.541 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:44.541 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.541 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:44.542 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:44.542 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:32:44.542 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:32:44.542 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:44.542 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:44.542 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:32:44.542 14:48:36 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:32:44.542 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:44.542 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:44.542 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.542 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:44.542 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:44.542 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:44.542 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.542 [2024-11-02 14:48:36.520738] bdev_nvme.c:7086:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:32:44.542 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:32:44.542 14:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:32:44.800 [2024-11-02 14:48:36.824357] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:44.800 [2024-11-02 14:48:36.824389] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:44.800 [2024-11-02 14:48:36.824399] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:45.738 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:45.738 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:32:45.738 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:32:45.738 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:45.738 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:45.738 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.738 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:45.738 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:45.738 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:45.738 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.738 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:32:45.738 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:45.738 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:32:45.738 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:45.738 14:48:37 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:45.738 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:45.738 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:45.738 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:45.738 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:45.738 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:32:45.738 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:45.738 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:45.738 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.738 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:45.738 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.738 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:45.738 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:45.738 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:32:45.738 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:45.738 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:45.738 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.738 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:45.738 [2024-11-02 14:48:37.622643] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:45.738 [2024-11-02 14:48:37.622700] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:45.738 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.738 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:45.738 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:45.738 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:45.738 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:45.738 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:45.738 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:32:45.738 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:45.738 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:45.738 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.738 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:45.738 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:45.738 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:45.738 [2024-11-02 14:48:37.631541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:45.738 [2024-11-02 14:48:37.631578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.738 [2024-11-02 14:48:37.631602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:45.738 [2024-11-02 14:48:37.631617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.738 [2024-11-02 14:48:37.631631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:45.738 [2024-11-02 14:48:37.631644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.738 [2024-11-02 14:48:37.631658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:45.738 [2024-11-02 14:48:37.631671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.738 [2024-11-02 14:48:37.631684] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe2a850 is same with the state(6) to be set 00:32:45.738 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.738 [2024-11-02 14:48:37.641528] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe2a850 (9): Bad file descriptor 00:32:45.738 [2024-11-02 14:48:37.651581] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:45.738 [2024-11-02 14:48:37.651896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.738 [2024-11-02 14:48:37.651926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe2a850 with addr=10.0.0.2, port=4420 00:32:45.738 [2024-11-02 14:48:37.651943] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe2a850 is same with the state(6) to be set 00:32:45.738 [2024-11-02 14:48:37.651967] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe2a850 (9): Bad file descriptor 00:32:45.738 [2024-11-02 14:48:37.652001] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:45.738 [2024-11-02 14:48:37.652019] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:45.738 [2024-11-02 14:48:37.652037] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:45.738 [2024-11-02 14:48:37.652058] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:45.738 [2024-11-02 14:48:37.661676] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:45.738 [2024-11-02 14:48:37.661903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.738 [2024-11-02 14:48:37.661932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe2a850 with addr=10.0.0.2, port=4420 00:32:45.738 [2024-11-02 14:48:37.661949] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe2a850 is same with the state(6) to be set 00:32:45.738 [2024-11-02 14:48:37.661971] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe2a850 (9): Bad file descriptor 00:32:45.738 [2024-11-02 14:48:37.661991] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:45.738 [2024-11-02 14:48:37.662004] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:45.738 [2024-11-02 14:48:37.662017] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:45.738 [2024-11-02 14:48:37.662036] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:45.738 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:45.738 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:45.738 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:45.738 [2024-11-02 14:48:37.671761] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:45.738 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:45.738 [2024-11-02 14:48:37.671997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.738 [2024-11-02 14:48:37.672027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe2a850 with addr=10.0.0.2, port=4420 00:32:45.738 [2024-11-02 14:48:37.672044] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe2a850 is same with the state(6) to be set 00:32:45.738 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:45.738 [2024-11-02 14:48:37.672066] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe2a850 (9): Bad file descriptor 00:32:45.739 [2024-11-02 14:48:37.672092] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:45.739 [2024-11-02 14:48:37.672106] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:45.739 [2024-11-02 14:48:37.672120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:45.739 [2024-11-02 14:48:37.672151] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
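The waits traced here all go through the suite's waitforcondition helper (common/autotest_common.sh lines 914-920 in this trace): it stores the condition string, then re-evaluates it up to max=10 times with a one-second sleep between attempts. A minimal sketch of that polling pattern, reconstructed from the traced lines rather than copied from the real helper:

    waitforcondition() {
        # Poll an arbitrary bash condition until it holds or the retries run out.
        local cond=$1
        local max=10
        while (( max-- )); do
            # eval so the condition may contain command substitutions like $(get_bdev_list)
            eval "$cond" && return 0
            sleep 1
        done
        return 1
    }

    # e.g. wait for both namespaces of the discovered subsystem to appear as bdevs:
    waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'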
00:32:45.739 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:45.739 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:45.739 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:32:45.739 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:45.739 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.739 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:45.739 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:45.739 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:45.739 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:45.739 [2024-11-02 14:48:37.681850] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:45.739 [2024-11-02 14:48:37.682078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.739 [2024-11-02 14:48:37.682108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe2a850 with addr=10.0.0.2, port=4420 00:32:45.739 [2024-11-02 14:48:37.682125] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe2a850 is same with the state(6) to be set 00:32:45.739 [2024-11-02 14:48:37.682148] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe2a850 (9): Bad file descriptor 00:32:45.739 [2024-11-02 14:48:37.682179] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:45.739 [2024-11-02 14:48:37.682196] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:45.739 [2024-11-02 14:48:37.682210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:45.739 [2024-11-02 14:48:37.682229] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
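The get_bdev_list and get_subsystem_paths calls in the trace (host/discovery.sh lines 55 and 63) boil down to one JSON-RPC against the host app's private socket plus a jq/sort/xargs pipeline that flattens the result into a single comparable string. Issued directly with SPDK's scripts/rpc.py instead of the suite's rpc_cmd wrapper, the equivalent one-liners would look roughly like this (socket path, RPC names and jq filters are the ones visible in the trace):

    # Names of all bdevs created by the discovery service, e.g. "nvme0n1 nvme0n2"
    ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs

    # Ports (trsvcid) of every active path of controller nvme0, e.g. "4420 4421"
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs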
00:32:45.739 [2024-11-02 14:48:37.691939] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:45.739 [2024-11-02 14:48:37.692152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.739 [2024-11-02 14:48:37.692180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe2a850 with addr=10.0.0.2, port=4420 00:32:45.739 [2024-11-02 14:48:37.692202] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe2a850 is same with the state(6) to be set 00:32:45.739 [2024-11-02 14:48:37.692224] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe2a850 (9): Bad file descriptor 00:32:45.739 [2024-11-02 14:48:37.692265] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:45.739 [2024-11-02 14:48:37.692283] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:45.739 [2024-11-02 14:48:37.692297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:45.739 [2024-11-02 14:48:37.692317] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:45.739 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.739 [2024-11-02 14:48:37.702024] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:45.739 [2024-11-02 14:48:37.702222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.739 [2024-11-02 14:48:37.702250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe2a850 with addr=10.0.0.2, port=4420 00:32:45.739 [2024-11-02 14:48:37.702277] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe2a850 is same with the state(6) to be set 00:32:45.739 [2024-11-02 14:48:37.702300] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe2a850 (9): Bad file descriptor 00:32:45.739 [2024-11-02 14:48:37.702332] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:45.739 [2024-11-02 14:48:37.702350] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:45.739 [2024-11-02 14:48:37.702363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:45.739 [2024-11-02 14:48:37.702383] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
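The is_notification_count_eq checks in this trace rely on a small get_notification_count helper (host/discovery.sh lines 74-75) that asks the host app for all notifications newer than the last seen id and counts them with jq. The trace only shows its side effects (notification_count=..., notify_id=...), so the following is a hypothetical rendering of that pattern, not the script's exact code:

    # Count notifications newer than $notify_id and advance the cursor.
    get_notification_count() {
        notification_count=$(./scripts/rpc.py -s /tmp/host.sock \
            notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$(( notify_id + notification_count ))
    }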
00:32:45.739 [2024-11-02 14:48:37.709401] bdev_nvme.c:6949:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:32:45.739 [2024-11-02 14:48:37.709430] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:45.739 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:45.739 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:45.739 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:32:45.739 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:32:45.739 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:45.739 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:45.739 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:32:45.739 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:32:45.739 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:45.739 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.739 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:45.739 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:45.739 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:45.739 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:45.739 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.739 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:32:45.739 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:45.739 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:32:45.739 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:45.739 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:45.739 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:45.739 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:45.739 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:45.739 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:45.739 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- common/autotest_common.sh@917 -- # get_notification_count 00:32:45.739 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:45.739 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:45.739 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.739 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:45.739 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:46.000 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:46.000 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:46.000 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:32:46.000 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:46.000 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:32:46.000 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:46.000 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:46.000 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:46.000 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:32:46.000 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:32:46.000 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:46.000 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:46.000 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:32:46.000 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:32:46.000 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:46.000 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:46.000 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:46.000 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:46.000 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:46.000 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:46.000 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:46.000 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:32:46.000 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:46.000 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:32:46.000 14:48:37 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:32:46.000 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:46.000 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:46.000 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:32:46.000 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:32:46.000 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:46.000 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:46.000 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:46.000 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:46.000 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:46.000 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:46.000 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:46.000 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:32:46.000 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:46.000 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:32:46.000 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:32:46.000 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:46.000 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:46.000 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:46.000 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:46.000 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:46.000 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:32:46.000 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:46.000 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:46.000 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:46.000 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:46.000 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:46.000 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:32:46.000 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:32:46.000 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:32:46.000 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:46.000 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:46.000 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:46.000 14:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:47.376 [2024-11-02 14:48:38.997144] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:47.376 [2024-11-02 14:48:38.997170] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:47.376 [2024-11-02 14:48:38.997196] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:47.376 [2024-11-02 14:48:39.125657] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:32:47.376 [2024-11-02 14:48:39.232454] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:47.376 [2024-11-02 14:48:39.232487] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:47.376 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.376 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:47.376 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:32:47.376 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:47.376 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:47.376 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:47.376 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:47.376 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:47.376 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:47.377 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:32:47.377 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:47.377 request: 00:32:47.377 { 00:32:47.377 "name": "nvme", 00:32:47.377 "trtype": "tcp", 00:32:47.377 "traddr": "10.0.0.2", 00:32:47.377 "adrfam": "ipv4", 00:32:47.377 "trsvcid": "8009", 00:32:47.377 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:47.377 "wait_for_attach": true, 00:32:47.377 "method": "bdev_nvme_start_discovery", 00:32:47.377 "req_id": 1 00:32:47.377 } 00:32:47.377 Got JSON-RPC error response 00:32:47.377 response: 00:32:47.377 { 00:32:47.377 "code": -17, 00:32:47.377 "message": "File exists" 00:32:47.377 } 00:32:47.377 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:47.377 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:32:47.377 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:47.377 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:47.377 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:47.377 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:32:47.377 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:47.377 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.377 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:47.377 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:47.377 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:47.377 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:47.377 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.377 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:32:47.377 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:32:47.377 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:47.377 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:47.377 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.377 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:47.377 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:47.377 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:47.377 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.377 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:47.377 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:47.377 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local 
es=0 00:32:47.377 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:47.377 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:47.377 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:47.377 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:47.377 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:47.377 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:47.377 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.377 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:47.377 request: 00:32:47.377 { 00:32:47.377 "name": "nvme_second", 00:32:47.377 "trtype": "tcp", 00:32:47.377 "traddr": "10.0.0.2", 00:32:47.377 "adrfam": "ipv4", 00:32:47.377 "trsvcid": "8009", 00:32:47.377 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:47.377 "wait_for_attach": true, 00:32:47.377 "method": "bdev_nvme_start_discovery", 00:32:47.377 "req_id": 1 00:32:47.377 } 00:32:47.377 Got JSON-RPC error response 00:32:47.377 response: 00:32:47.377 { 00:32:47.377 "code": -17, 00:32:47.377 "message": "File exists" 00:32:47.377 } 00:32:47.377 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:47.377 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:32:47.377 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:47.377 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:47.377 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:47.377 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:32:47.377 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:47.377 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.377 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:47.377 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:47.377 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:47.377 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:47.377 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.377 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:32:47.377 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:32:47.377 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:47.377 14:48:39 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:47.377 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.377 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:47.377 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:47.377 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:47.377 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.377 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:47.377 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:47.377 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:32:47.377 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:47.377 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:47.377 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:47.377 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:47.377 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:47.377 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:47.377 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.377 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:48.755 [2024-11-02 14:48:40.428216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.755 [2024-11-02 14:48:40.428321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe64670 with addr=10.0.0.2, port=8010 00:32:48.755 [2024-11-02 14:48:40.428358] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:48.755 [2024-11-02 14:48:40.428374] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:48.755 [2024-11-02 14:48:40.428388] bdev_nvme.c:7224:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:32:49.690 [2024-11-02 14:48:41.430712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.690 [2024-11-02 14:48:41.430794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe64670 with addr=10.0.0.2, port=8010 00:32:49.690 [2024-11-02 14:48:41.430831] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:49.690 [2024-11-02 14:48:41.430848] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:49.690 [2024-11-02 14:48:41.430863] bdev_nvme.c:7224:discovery_poller: *ERROR*: 
Discovery[10.0.0.2:8010] could not start discovery connect 00:32:50.627 [2024-11-02 14:48:42.432810] bdev_nvme.c:7205:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:32:50.627 request: 00:32:50.627 { 00:32:50.627 "name": "nvme_second", 00:32:50.627 "trtype": "tcp", 00:32:50.627 "traddr": "10.0.0.2", 00:32:50.627 "adrfam": "ipv4", 00:32:50.627 "trsvcid": "8010", 00:32:50.627 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:50.627 "wait_for_attach": false, 00:32:50.627 "attach_timeout_ms": 3000, 00:32:50.627 "method": "bdev_nvme_start_discovery", 00:32:50.627 "req_id": 1 00:32:50.627 } 00:32:50.627 Got JSON-RPC error response 00:32:50.627 response: 00:32:50.627 { 00:32:50.627 "code": -110, 00:32:50.627 "message": "Connection timed out" 00:32:50.627 } 00:32:50.627 14:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:50.627 14:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:32:50.627 14:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:50.627 14:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:50.627 14:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:50.627 14:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:32:50.627 14:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:50.627 14:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:50.627 14:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:50.627 14:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:50.627 14:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:50.627 14:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:50.627 14:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:50.627 14:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:32:50.627 14:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:32:50.627 14:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1498475 00:32:50.627 14:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:32:50.627 14:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # nvmfcleanup 00:32:50.627 14:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:32:50.627 14:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:50.627 14:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:32:50.627 14:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:50.627 14:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:50.627 rmmod nvme_tcp 00:32:50.627 rmmod nvme_fabrics 00:32:50.627 rmmod nvme_keyring 00:32:50.627 14:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:50.627 14:48:42 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:32:50.627 14:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:32:50.627 14:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@513 -- # '[' -n 1498455 ']' 00:32:50.627 14:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@514 -- # killprocess 1498455 00:32:50.627 14:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 1498455 ']' 00:32:50.627 14:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 1498455 00:32:50.627 14:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:32:50.627 14:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:50.627 14:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1498455 00:32:50.627 14:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:50.627 14:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:50.627 14:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1498455' 00:32:50.627 killing process with pid 1498455 00:32:50.627 14:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 1498455 00:32:50.627 14:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 1498455 00:32:50.886 14:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:32:50.886 14:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:32:50.886 14:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:32:50.886 14:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:32:50.886 14:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # iptables-save 00:32:50.886 14:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:32:50.886 14:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # iptables-restore 00:32:50.886 14:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:50.886 14:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:50.886 14:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:50.886 14:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:50.886 14:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:53.419 14:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:53.419 00:32:53.419 real 0m13.294s 00:32:53.419 user 0m19.074s 00:32:53.419 sys 0m2.875s 00:32:53.419 14:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:53.419 14:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:53.419 ************************************ 00:32:53.419 END TEST nvmf_host_discovery 00:32:53.419 ************************************ 
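Before the teardown above, the test's negative paths exercised bdev_nvme_start_discovery against an already-running discovery service (JSON-RPC error -17, "File exists") and against the unreachable port 8010 with a 3000 ms attach timeout (error -110, "Connection timed out"). For reference, the same calls issued directly through SPDK's scripts/rpc.py, which the suite's rpc_cmd wrapper ultimately drives, would look like this; every flag below is copied from the trace:

    # Second discovery service on the same traddr/trsvcid -> -17 "File exists"
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test -w

    # Nothing listens on 8010; with a 3 s attach timeout -> -110 "Connection timed out"
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 \
        -q nqn.2021-12.io.spdk:test -T 3000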
00:32:53.419 14:48:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:32:53.419 14:48:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:53.419 14:48:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:53.419 14:48:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.419 ************************************ 00:32:53.419 START TEST nvmf_host_multipath_status 00:32:53.419 ************************************ 00:32:53.419 14:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:32:53.419 * Looking for test storage... 00:32:53.419 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:53.419 14:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:32:53.419 14:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lcov --version 00:32:53.419 14:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:32:53.419 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:32:53.419 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:53.419 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:53.419 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:53.419 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:32:53.419 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:32:53.419 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:32:53.419 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:32:53.419 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:32:53.419 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:32:53.419 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:32:53.419 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:53.419 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:32:53.419 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:32:53.419 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:53.419 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:53.419 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:32:53.419 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:32:53.419 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:53.419 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:32:53.419 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:32:53.419 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:32:53.419 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:32:53.419 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:53.419 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:32:53.419 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:32:53.419 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:53.419 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:53.419 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:32:53.419 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:53.419 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:32:53.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:53.419 --rc genhtml_branch_coverage=1 00:32:53.419 --rc genhtml_function_coverage=1 00:32:53.419 --rc genhtml_legend=1 00:32:53.419 --rc geninfo_all_blocks=1 00:32:53.419 --rc geninfo_unexecuted_blocks=1 00:32:53.419 00:32:53.419 ' 00:32:53.419 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:53.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:53.419 --rc genhtml_branch_coverage=1 00:32:53.419 --rc genhtml_function_coverage=1 00:32:53.419 --rc genhtml_legend=1 00:32:53.419 --rc geninfo_all_blocks=1 00:32:53.419 --rc geninfo_unexecuted_blocks=1 00:32:53.419 00:32:53.419 ' 00:32:53.419 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:53.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:53.419 --rc genhtml_branch_coverage=1 00:32:53.419 --rc genhtml_function_coverage=1 00:32:53.419 --rc genhtml_legend=1 00:32:53.419 --rc geninfo_all_blocks=1 00:32:53.419 --rc geninfo_unexecuted_blocks=1 00:32:53.419 00:32:53.419 ' 00:32:53.419 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:53.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:53.419 --rc genhtml_branch_coverage=1 00:32:53.419 --rc genhtml_function_coverage=1 00:32:53.419 --rc genhtml_legend=1 00:32:53.419 --rc geninfo_all_blocks=1 00:32:53.419 --rc geninfo_unexecuted_blocks=1 00:32:53.419 00:32:53.419 ' 00:32:53.419 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
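The lines above step through the lt/cmp_versions helpers from scripts/common.sh, which split both version strings on '.', '-' and ':' and compare them component by component so the lcov 1.15-vs-2.x decision comes out right. A minimal sketch of that comparison pattern, assuming plain numeric components and ignoring the extra operators the real helper supports:

    # Return success if $1 is a strictly lower version than $2 (numeric components only).
    version_lt() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local i len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( i = 0; i < len; i++ )); do
            (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0
            (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
        done
        return 1   # equal, so not less-than
    }

    version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "older lcov output format"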
00:32:53.420 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:32:53.420 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:53.420 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:53.420 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:53.420 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:53.420 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:53.420 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:53.420 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:53.420 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:53.420 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:53.420 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:53.420 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:53.420 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:53.420 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:53.420 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:53.420 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:53.420 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:53.420 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:53.420 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:32:53.420 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:53.420 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:53.420 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:53.420 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:53.420 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:53.420 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:53.420 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:32:53.420 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:53.420 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:32:53.420 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:53.420 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:53.420 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:53.420 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:53.420 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:53.420 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:53.420 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:53.420 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:53.420 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:53.420 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:53.420 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:32:53.420 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:32:53.420 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:53.420 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:32:53.420 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:53.420 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:32:53.420 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:32:53.420 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:32:53.420 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:53.420 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@472 -- # prepare_net_devs 00:32:53.420 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@434 -- # local -g is_hw=no 00:32:53.420 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@436 -- # remove_spdk_ns 00:32:53.420 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:53.420 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:53.420 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:53.420 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:32:53.420 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:32:53.420 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:32:53.420 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:32:55.326 14:48:47 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:55.326 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:32:55.326 14:48:47 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:55.326 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ up == up ]] 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:55.326 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ up == up ]] 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:55.326 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # is_hw=yes 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:55.326 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:55.327 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:55.327 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:55.327 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 
-i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:55.327 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:55.327 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:55.327 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.275 ms 00:32:55.327 00:32:55.327 --- 10.0.0.2 ping statistics --- 00:32:55.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:55.327 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:32:55.327 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:55.327 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:55.327 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:32:55.327 00:32:55.327 --- 10.0.0.1 ping statistics --- 00:32:55.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:55.327 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:32:55.327 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:55.327 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # return 0 00:32:55.327 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:32:55.327 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:55.327 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:32:55.327 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:32:55.327 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:55.327 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:32:55.327 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:32:55.327 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:32:55.327 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:32:55.327 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:55.327 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:55.327 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@505 -- # nvmfpid=1501515 00:32:55.327 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@506 -- # waitforlisten 1501515 00:32:55.327 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 1501515 ']' 00:32:55.327 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:32:55.327 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:55.327 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:55.327 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
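
[Editor's note] For readers reconstructing the test topology outside the harness: the trace above shows nvmftestinit moving one e810 port (cvl_0_0) into the cvl_0_0_ns_spdk network namespace as the target side, leaving cvl_0_1 as the initiator side, opening TCP/4420 in iptables, and verifying reachability with ping before nvmf_tgt is launched inside the namespace. The sketch below only regroups commands already visible in the trace; the interface names and 10.0.0.0/24 addresses are the ones from this log, so substitute your own NICs when reproducing it.

    # Minimal sketch, mirroring the ip/iptables commands traced above.
    ip netns add cvl_0_0_ns_spdk                      # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator
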
00:32:55.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:55.327 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:55.327 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:55.327 [2024-11-02 14:48:47.321500] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:32:55.327 [2024-11-02 14:48:47.321610] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:55.585 [2024-11-02 14:48:47.393642] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:55.585 [2024-11-02 14:48:47.488250] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:55.585 [2024-11-02 14:48:47.488313] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:55.585 [2024-11-02 14:48:47.488327] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:55.585 [2024-11-02 14:48:47.488339] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:55.585 [2024-11-02 14:48:47.488350] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:55.585 [2024-11-02 14:48:47.488410] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:32:55.585 [2024-11-02 14:48:47.488415] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:32:55.585 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:55.585 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:32:55.585 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:32:55.585 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:55.585 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:55.585 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:55.585 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1501515 00:32:55.585 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:55.843 [2024-11-02 14:48:47.868789] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:55.843 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:32:56.409 Malloc0 00:32:56.409 14:48:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:32:56.409 14:48:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:56.977 14:48:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:56.977 [2024-11-02 14:48:48.973014] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:56.977 14:48:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:57.234 [2024-11-02 14:48:49.233737] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:57.234 14:48:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1501795 00:32:57.234 14:48:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:32:57.234 14:48:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:32:57.235 14:48:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1501795 /var/tmp/bdevperf.sock 00:32:57.235 14:48:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 1501795 ']' 00:32:57.235 14:48:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:57.235 14:48:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:57.235 14:48:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:57.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
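
[Editor's note] A condensed recap of the RPC sequence traced so far, so the repeated ANA/status checks that follow are easier to read. Every call, flag, path, and NQN below is copied from this log; only the grouping and comments are added, and in the harness the target-side calls are actually issued through `ip netns exec cvl_0_0_ns_spdk`. The target gets a TCP transport, a Malloc0 namespace and listeners on 4420 and 4421; bdevperf attaches the same subsystem once per portal, the second time with -x multipath, so both paths back one Nvme0n1 bdev. The loop below then flips listener ANA states and reads the per-path current/connected/accessible flags back via bdev_nvme_get_io_paths piped into jq.

    # Sketch of the setup and one status probe, per the calls traced above.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1

    # Target side (run inside the cvl_0_0_ns_spdk namespace in the harness)
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem $NQN -a -s SPDK00000000000001 -r -m 2
    $RPC nvmf_subsystem_add_ns $NQN Malloc0
    $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4421

    # Initiator side: attach both portals under one controller name so the
    # second call adds a multipath path instead of a second bdev.
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN -l -1 -o 10
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
        -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $NQN -x multipath -l -1 -o 10

    # ANA flip + read-back, as exercised repeatedly in the trace below
    # (later the policy is switched with bdev_nvme_set_multipath_policy
    # -b Nvme0n1 -p active_active).
    $RPC nvmf_subsystem_listener_set_ana_state $NQN -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
        | jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
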
00:32:57.235 14:48:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:57.235 14:48:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:57.803 14:48:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:57.803 14:48:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:32:57.803 14:48:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:32:57.803 14:48:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:32:58.370 Nvme0n1 00:32:58.370 14:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:32:58.939 Nvme0n1 00:32:58.939 14:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:32:58.939 14:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:33:00.842 14:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:33:00.842 14:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:33:01.100 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:01.669 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:33:02.603 14:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:33:02.603 14:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:02.603 14:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:02.603 14:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:02.863 14:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:02.863 14:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:02.863 14:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:02.863 14:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:03.120 14:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:03.120 14:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:03.120 14:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:03.120 14:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:03.378 14:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:03.378 14:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:03.378 14:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:03.378 14:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:03.635 14:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:03.635 14:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:03.635 14:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:03.635 14:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:03.892 14:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:03.892 14:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:03.892 14:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:03.892 14:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:04.158 14:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:04.158 14:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:33:04.158 14:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:04.470 14:48:56 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:04.732 14:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:33:05.669 14:48:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:33:05.669 14:48:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:05.669 14:48:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:05.669 14:48:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:05.927 14:48:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:05.927 14:48:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:05.927 14:48:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:05.927 14:48:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:06.185 14:48:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:06.185 14:48:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:06.185 14:48:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:06.185 14:48:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:06.751 14:48:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:06.751 14:48:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:06.751 14:48:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:06.751 14:48:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:06.751 14:48:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:06.751 14:48:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:06.751 14:48:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:06.751 14:48:58 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:07.317 14:48:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:07.317 14:48:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:07.317 14:48:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:07.317 14:48:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:07.317 14:48:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:07.317 14:48:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:33:07.317 14:48:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:07.575 14:48:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:33:08.142 14:48:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:33:09.078 14:49:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:33:09.078 14:49:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:09.078 14:49:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:09.078 14:49:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:09.336 14:49:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:09.336 14:49:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:09.336 14:49:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:09.336 14:49:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:09.594 14:49:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:09.594 14:49:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:09.594 14:49:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:09.594 14:49:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:09.852 14:49:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:09.852 14:49:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:09.852 14:49:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:09.852 14:49:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:10.110 14:49:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:10.110 14:49:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:10.110 14:49:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:10.110 14:49:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:10.367 14:49:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:10.368 14:49:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:10.368 14:49:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:10.368 14:49:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:10.625 14:49:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:10.625 14:49:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:33:10.625 14:49:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:10.884 14:49:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:11.142 14:49:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:33:12.522 14:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:33:12.522 14:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:12.522 14:49:04 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:12.522 14:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:12.522 14:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:12.522 14:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:12.522 14:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:12.522 14:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:12.780 14:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:12.780 14:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:12.780 14:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:12.780 14:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:13.038 14:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:13.038 14:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:13.038 14:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:13.038 14:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:13.296 14:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:13.296 14:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:13.296 14:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:13.296 14:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:13.554 14:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:13.554 14:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:13.554 14:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:13.554 14:49:05 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:13.812 14:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:13.812 14:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:33:13.812 14:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:33:14.071 14:49:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:14.330 14:49:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:33:15.706 14:49:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:33:15.706 14:49:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:15.706 14:49:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:15.706 14:49:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:15.707 14:49:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:15.707 14:49:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:15.707 14:49:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:15.707 14:49:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:15.964 14:49:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:15.964 14:49:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:15.965 14:49:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:15.965 14:49:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:16.222 14:49:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:16.222 14:49:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:16.222 14:49:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:16.222 14:49:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:16.480 14:49:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:16.480 14:49:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:16.480 14:49:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:16.480 14:49:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:16.738 14:49:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:16.738 14:49:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:16.738 14:49:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:16.738 14:49:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:16.996 14:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:16.997 14:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:33:16.997 14:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:33:17.254 14:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:17.514 14:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:33:18.894 14:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:33:18.894 14:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:18.894 14:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:18.894 14:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:18.894 14:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:18.894 14:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:18.894 14:49:10 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:18.894 14:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:19.152 14:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:19.152 14:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:19.152 14:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:19.152 14:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:19.410 14:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:19.410 14:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:19.410 14:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:19.410 14:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:19.668 14:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:19.668 14:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:19.668 14:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:19.668 14:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:19.926 14:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:19.926 14:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:19.926 14:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:19.926 14:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:20.183 14:49:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:20.183 14:49:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:33:20.748 14:49:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:33:20.748 14:49:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:33:20.748 14:49:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:21.006 14:49:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:33:22.380 14:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:33:22.380 14:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:22.380 14:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:22.380 14:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:22.380 14:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:22.380 14:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:22.380 14:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:22.380 14:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:22.638 14:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:22.638 14:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:22.638 14:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:22.638 14:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:22.896 14:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:22.896 14:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:22.896 14:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:22.896 14:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:23.154 14:49:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:23.154 14:49:15 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:23.154 14:49:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:23.154 14:49:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:23.721 14:49:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:23.721 14:49:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:23.721 14:49:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:23.721 14:49:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:23.721 14:49:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:23.721 14:49:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:33:23.721 14:49:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:24.287 14:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:24.287 14:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:33:25.661 14:49:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:33:25.662 14:49:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:25.662 14:49:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:25.662 14:49:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:25.662 14:49:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:25.662 14:49:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:25.662 14:49:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:25.662 14:49:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:25.920 14:49:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:25.920 14:49:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:25.920 14:49:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:25.920 14:49:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:26.178 14:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:26.178 14:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:26.178 14:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:26.178 14:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:26.437 14:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:26.437 14:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:26.437 14:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:26.437 14:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:26.695 14:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:26.695 14:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:26.695 14:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:26.695 14:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:26.953 14:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:26.953 14:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:33:26.953 14:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:27.211 14:49:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:33:27.470 14:49:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
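The block above is one iteration of the pattern the whole test repeats: flip the ANA state of the two listeners, sleep for a second (presumably to give the initiator time to process the ANA change), then assert the current/connected/accessible flags that bdev_nvme_get_io_paths reports for each trsvcid. A minimal bash sketch of those helpers, reconstructed from the multipath_status.sh@59-64 markers in the trace (the variable name rpc and the exact function bodies are guesses, not the script itself):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # Set the ANA state of the 4420 listener to $1 and of the 4421 listener to $2
  set_ANA_state() {
      $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n "$1"
      $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n "$2"
  }

  # port_status <trsvcid> <field> <expected>, e.g. port_status 4420 current false
  port_status() {
      local value
      value=$($rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
              jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
      [[ $value == "$3" ]]
  }

When poking at a live setup by hand, a single filter such as jq -r '.poll_groups[].io_paths[] | "\(.transport.trsvcid) current=\(.current) connected=\(.connected) accessible=\(.accessible)"' prints the trsvcid and all three flags for every path in one call instead of one rpc.py round trip per field.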
00:33:28.844 14:49:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:33:28.844 14:49:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:28.844 14:49:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:28.844 14:49:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:28.844 14:49:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:28.844 14:49:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:28.844 14:49:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:28.844 14:49:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:29.102 14:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:29.102 14:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:29.102 14:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:29.102 14:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:29.360 14:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:29.360 14:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:29.360 14:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:29.360 14:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:29.623 14:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:29.623 14:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:29.623 14:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:29.623 14:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:29.882 14:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:29.882 14:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:29.882 14:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:29.882 14:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:30.449 14:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:30.449 14:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:33:30.449 14:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:30.449 14:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:31.016 14:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:33:31.951 14:49:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:33:31.951 14:49:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:31.951 14:49:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:31.951 14:49:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:32.209 14:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:32.209 14:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:32.209 14:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:32.209 14:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:32.467 14:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:32.467 14:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:32.467 14:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:32.467 14:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:32.726 14:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:33:32.726 14:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:32.726 14:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:32.726 14:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:32.984 14:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:32.984 14:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:32.984 14:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:32.984 14:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:33.242 14:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:33.242 14:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:33.242 14:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:33.242 14:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:33.501 14:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:33.501 14:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1501795 00:33:33.501 14:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 1501795 ']' 00:33:33.501 14:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 1501795 00:33:33.501 14:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:33:33.501 14:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:33.501 14:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1501795 00:33:33.501 14:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:33:33.501 14:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:33:33.501 14:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1501795' 00:33:33.501 killing process with pid 1501795 00:33:33.501 14:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 1501795 00:33:33.501 14:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 1501795 00:33:33.501 { 00:33:33.501 "results": [ 00:33:33.501 { 00:33:33.501 "job": "Nvme0n1", 
00:33:33.501 "core_mask": "0x4", 00:33:33.501 "workload": "verify", 00:33:33.501 "status": "terminated", 00:33:33.501 "verify_range": { 00:33:33.501 "start": 0, 00:33:33.501 "length": 16384 00:33:33.501 }, 00:33:33.501 "queue_depth": 128, 00:33:33.501 "io_size": 4096, 00:33:33.501 "runtime": 34.465129, 00:33:33.501 "iops": 7761.729253936638, 00:33:33.501 "mibps": 30.319254898189993, 00:33:33.501 "io_failed": 0, 00:33:33.501 "io_timeout": 0, 00:33:33.501 "avg_latency_us": 16461.79023387652, 00:33:33.501 "min_latency_us": 221.4874074074074, 00:33:33.501 "max_latency_us": 4101097.2444444443 00:33:33.501 } 00:33:33.501 ], 00:33:33.501 "core_count": 1 00:33:33.501 } 00:33:33.786 14:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1501795 00:33:33.787 14:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:33.787 [2024-11-02 14:48:49.300043] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:33:33.787 [2024-11-02 14:48:49.300131] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1501795 ] 00:33:33.787 [2024-11-02 14:48:49.358842] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:33.787 [2024-11-02 14:48:49.445671] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:33:33.787 [2024-11-02 14:48:50.756980] bdev_nvme.c:5605:nvme_bdev_ctrlr_create: *WARNING*: multipath_config: deprecated feature bdev_nvme_attach_controller.multipath configuration mismatch to be removed in v25.01 00:33:33.787 Running I/O for 90 seconds... 
00:33:33.787 8292.00 IOPS, 32.39 MiB/s [2024-11-02T13:49:25.842Z] 8419.50 IOPS, 32.89 MiB/s [2024-11-02T13:49:25.842Z] 8410.33 IOPS, 32.85 MiB/s [2024-11-02T13:49:25.842Z] 8405.75 IOPS, 32.83 MiB/s [2024-11-02T13:49:25.842Z] 8375.40 IOPS, 32.72 MiB/s [2024-11-02T13:49:25.842Z] 8221.17 IOPS, 32.11 MiB/s [2024-11-02T13:49:25.842Z] 8186.57 IOPS, 31.98 MiB/s [2024-11-02T13:49:25.842Z] 8210.50 IOPS, 32.07 MiB/s [2024-11-02T13:49:25.842Z] 8214.67 IOPS, 32.09 MiB/s [2024-11-02T13:49:25.842Z] 8207.00 IOPS, 32.06 MiB/s [2024-11-02T13:49:25.842Z] 8225.91 IOPS, 32.13 MiB/s [2024-11-02T13:49:25.842Z] 8239.67 IOPS, 32.19 MiB/s [2024-11-02T13:49:25.842Z] 8255.46 IOPS, 32.25 MiB/s [2024-11-02T13:49:25.842Z] 8263.29 IOPS, 32.28 MiB/s [2024-11-02T13:49:25.842Z] 8264.47 IOPS, 32.28 MiB/s [2024-11-02T13:49:25.842Z] [2024-11-02 14:49:06.083207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:80424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.787 [2024-11-02 14:49:06.083289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:33.787 [2024-11-02 14:49:06.083342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:80488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.787 [2024-11-02 14:49:06.083362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:33.787 [2024-11-02 14:49:06.083387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:80496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.787 [2024-11-02 14:49:06.083404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:33.787 [2024-11-02 14:49:06.083427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:80504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.787 [2024-11-02 14:49:06.083444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:33.787 [2024-11-02 14:49:06.083467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:80512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.787 [2024-11-02 14:49:06.083483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:33.787 [2024-11-02 14:49:06.083506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:80520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.787 [2024-11-02 14:49:06.083522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:33.787 [2024-11-02 14:49:06.083545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:80528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.787 [2024-11-02 14:49:06.083561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:33.787 [2024-11-02 14:49:06.083583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:80536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.787 [2024-11-02 14:49:06.083599] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:33.787 [2024-11-02 14:49:06.083622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:80544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.787 [2024-11-02 14:49:06.083665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:33.787 [2024-11-02 14:49:06.083690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.787 [2024-11-02 14:49:06.083706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:33.787 [2024-11-02 14:49:06.083727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.787 [2024-11-02 14:49:06.083743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:33.787 [2024-11-02 14:49:06.083765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:80568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.787 [2024-11-02 14:49:06.083780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:33.787 [2024-11-02 14:49:06.083802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:80576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.787 [2024-11-02 14:49:06.083817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:33.787 [2024-11-02 14:49:06.083839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:80584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.787 [2024-11-02 14:49:06.083855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:33.787 [2024-11-02 14:49:06.083876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:80592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.787 [2024-11-02 14:49:06.083892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:33.787 [2024-11-02 14:49:06.083914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:80600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.787 [2024-11-02 14:49:06.083930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:33.787 [2024-11-02 14:49:06.084871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:80608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.787 [2024-11-02 14:49:06.084896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:33.787 [2024-11-02 14:49:06.084924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:80616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:33.787 [2024-11-02 14:49:06.084941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:33.787 [2024-11-02 14:49:06.084965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:80624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.787 [2024-11-02 14:49:06.084981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:33.787 [2024-11-02 14:49:06.085003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:80632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.787 [2024-11-02 14:49:06.085019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:33.787 [2024-11-02 14:49:06.085042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:80640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.787 [2024-11-02 14:49:06.085058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:33.787 [2024-11-02 14:49:06.085086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:80648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.787 [2024-11-02 14:49:06.085103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:33.787 [2024-11-02 14:49:06.085125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:80656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.787 [2024-11-02 14:49:06.085141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:33.787 [2024-11-02 14:49:06.085163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:80664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.787 [2024-11-02 14:49:06.085179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:33.787 [2024-11-02 14:49:06.085201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:80672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.787 [2024-11-02 14:49:06.085217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:33.787 [2024-11-02 14:49:06.085263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:80680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.787 [2024-11-02 14:49:06.085281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:33.787 [2024-11-02 14:49:06.085320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:80688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.787 [2024-11-02 14:49:06.085337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:33.787 [2024-11-02 14:49:06.085358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 
lba:80696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.787 [2024-11-02 14:49:06.085374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:33.787 [2024-11-02 14:49:06.085396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:80704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.787 [2024-11-02 14:49:06.085412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:33.787 [2024-11-02 14:49:06.085444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:80712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.787 [2024-11-02 14:49:06.085463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:33.787 [2024-11-02 14:49:06.085487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:80720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.787 [2024-11-02 14:49:06.085503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:33.787 [2024-11-02 14:49:06.085525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.787 [2024-11-02 14:49:06.085541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:33.787 [2024-11-02 14:49:06.085563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:80736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.788 [2024-11-02 14:49:06.085579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:33.788 [2024-11-02 14:49:06.085605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:80744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.788 [2024-11-02 14:49:06.085622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:33.788 [2024-11-02 14:49:06.085644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:80752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.788 [2024-11-02 14:49:06.085660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:33.788 [2024-11-02 14:49:06.085682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:80760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.788 [2024-11-02 14:49:06.085697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:33.788 [2024-11-02 14:49:06.085719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:80768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.788 [2024-11-02 14:49:06.085735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:33.788 [2024-11-02 14:49:06.085758] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:80432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.788 [2024-11-02 14:49:06.085773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:33.788 [2024-11-02 14:49:06.085795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:80440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.788 [2024-11-02 14:49:06.085811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:33.788 [2024-11-02 14:49:06.085833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:80448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.788 [2024-11-02 14:49:06.085849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:33.788 [2024-11-02 14:49:06.085871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:80456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.788 [2024-11-02 14:49:06.085886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:33.788 [2024-11-02 14:49:06.085908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:80464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.788 [2024-11-02 14:49:06.085925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:33.788 [2024-11-02 14:49:06.085946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:80472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.788 [2024-11-02 14:49:06.085962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:33.788 [2024-11-02 14:49:06.085984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:80480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.788 [2024-11-02 14:49:06.085999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:33.788 [2024-11-02 14:49:06.086022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:80776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.788 [2024-11-02 14:49:06.086038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:33.788 [2024-11-02 14:49:06.086059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:80784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.788 [2024-11-02 14:49:06.086079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:33.788 [2024-11-02 14:49:06.086101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:80792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.788 [2024-11-02 14:49:06.086118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0064 p:0 m:0 
dnr:0 00:33:33.788 [2024-11-02 14:49:06.086140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:80800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.788 [2024-11-02 14:49:06.086155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:33.788 [2024-11-02 14:49:06.086177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.788 [2024-11-02 14:49:06.086193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:33.788 [2024-11-02 14:49:06.086214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.788 [2024-11-02 14:49:06.086230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:33.788 [2024-11-02 14:49:06.086253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:80824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.788 [2024-11-02 14:49:06.086278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:33.788 [2024-11-02 14:49:06.086669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:80832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.788 [2024-11-02 14:49:06.086692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:33.788 [2024-11-02 14:49:06.086720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:80840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.788 [2024-11-02 14:49:06.086737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:33.788 [2024-11-02 14:49:06.086760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:80848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.788 [2024-11-02 14:49:06.086776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:33.788 [2024-11-02 14:49:06.086798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:80856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.788 [2024-11-02 14:49:06.086814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:33.788 [2024-11-02 14:49:06.086836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:80864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.788 [2024-11-02 14:49:06.086852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:33.788 [2024-11-02 14:49:06.086874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:80872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.788 [2024-11-02 14:49:06.086889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:33.788 [2024-11-02 14:49:06.086912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:80880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.788 [2024-11-02 14:49:06.086932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:33.788 [2024-11-02 14:49:06.086955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:80888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.788 [2024-11-02 14:49:06.086971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:33.788 [2024-11-02 14:49:06.086993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:80896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.788 [2024-11-02 14:49:06.087008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:33.788 [2024-11-02 14:49:06.087031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:80904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.788 [2024-11-02 14:49:06.087047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:33.788 [2024-11-02 14:49:06.087069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:80912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.788 [2024-11-02 14:49:06.087085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:33.788 [2024-11-02 14:49:06.087107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:80920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.788 [2024-11-02 14:49:06.087123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:33.788 [2024-11-02 14:49:06.087145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.788 [2024-11-02 14:49:06.087160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:33.788 [2024-11-02 14:49:06.087182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:80936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.788 [2024-11-02 14:49:06.087198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:33.788 [2024-11-02 14:49:06.087220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:80944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.788 [2024-11-02 14:49:06.087236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:33.788 [2024-11-02 14:49:06.087267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:80952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.788 [2024-11-02 14:49:06.087285] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:33.788 [2024-11-02 14:49:06.087308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:80960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.788 [2024-11-02 14:49:06.087324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:33.788 [2024-11-02 14:49:06.087345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.788 [2024-11-02 14:49:06.087361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:33.788 [2024-11-02 14:49:06.087383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:80976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.788 [2024-11-02 14:49:06.087398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:33.788 [2024-11-02 14:49:06.087425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:80984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.788 [2024-11-02 14:49:06.087442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:33.788 [2024-11-02 14:49:06.087463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:80992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.789 [2024-11-02 14:49:06.087479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:33.789 [2024-11-02 14:49:06.087501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:81000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.789 [2024-11-02 14:49:06.087517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:33.789 [2024-11-02 14:49:06.087539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:81008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.789 [2024-11-02 14:49:06.087555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:33.789 [2024-11-02 14:49:06.087577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:81016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.789 [2024-11-02 14:49:06.087593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.789 [2024-11-02 14:49:06.087616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:81024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.789 [2024-11-02 14:49:06.087633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.789 [2024-11-02 14:49:06.087655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:81032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:33.789 [2024-11-02 14:49:06.087670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:33:33.789-00:33:33.794 [2024-11-02 14:49:06.087692-14:49:06.097778] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: repeated *NOTICE* pairs: WRITE commands (sqid:1, lba 80488-81440, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ commands (sqid:1, lba 80424-80480, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0
00:33:33.794 [2024-11-02 14:49:06.097800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.794 [2024-11-02 14:49:06.097816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:33.794 [2024-11-02 14:49:06.097853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:80704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.794 [2024-11-02 14:49:06.097869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:33.794 [2024-11-02 14:49:06.097891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:80712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.794 [2024-11-02 14:49:06.097906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:33.794 [2024-11-02 14:49:06.097927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:80720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.794 [2024-11-02 14:49:06.097942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:33.794 [2024-11-02 14:49:06.097963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:80728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.794 [2024-11-02 14:49:06.097978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:33.794 [2024-11-02 14:49:06.097999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.794 [2024-11-02 14:49:06.098014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:33.794 [2024-11-02 14:49:06.098042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:80744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.794 [2024-11-02 14:49:06.098058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:33.794 [2024-11-02 14:49:06.098080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:80752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.794 [2024-11-02 14:49:06.098100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:33.794 [2024-11-02 14:49:06.098859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:80760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.794 [2024-11-02 14:49:06.098883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:33.794 [2024-11-02 14:49:06.098911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:80768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.794 [2024-11-02 14:49:06.098929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:26 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:33.794 [2024-11-02 14:49:06.098951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:80432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.794 [2024-11-02 14:49:06.098967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:33.794 [2024-11-02 14:49:06.098989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:80440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.794 [2024-11-02 14:49:06.099005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:33.794 [2024-11-02 14:49:06.099028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:80448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.794 [2024-11-02 14:49:06.099044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:33.794 [2024-11-02 14:49:06.099066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:80456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.794 [2024-11-02 14:49:06.099087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:33.794 [2024-11-02 14:49:06.099112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:80464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.794 [2024-11-02 14:49:06.099129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:33.794 [2024-11-02 14:49:06.099151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:80472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.794 [2024-11-02 14:49:06.099167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:33.794 [2024-11-02 14:49:06.099189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:80480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.794 [2024-11-02 14:49:06.099205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:33.795 [2024-11-02 14:49:06.099228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:80776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.795 [2024-11-02 14:49:06.099244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:33.795 [2024-11-02 14:49:06.099276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:80784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.795 [2024-11-02 14:49:06.099294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:33.795 [2024-11-02 14:49:06.099317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:80792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.795 [2024-11-02 14:49:06.099339] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:33.795 [2024-11-02 14:49:06.099362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.795 [2024-11-02 14:49:06.099378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:33.795 [2024-11-02 14:49:06.099400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:80808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.795 [2024-11-02 14:49:06.099416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:33.795 [2024-11-02 14:49:06.099438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:80816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.795 [2024-11-02 14:49:06.099455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:33.795 [2024-11-02 14:49:06.099477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:80824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.795 [2024-11-02 14:49:06.099492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:33.795 [2024-11-02 14:49:06.099514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:80832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.795 [2024-11-02 14:49:06.099530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:33.795 [2024-11-02 14:49:06.099552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.795 [2024-11-02 14:49:06.099567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:33.795 [2024-11-02 14:49:06.099589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:80848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.795 [2024-11-02 14:49:06.099605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:33.795 [2024-11-02 14:49:06.099627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:80856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.795 [2024-11-02 14:49:06.099643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:33.795 [2024-11-02 14:49:06.099665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:80864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.795 [2024-11-02 14:49:06.099681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:33.795 [2024-11-02 14:49:06.099703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:80872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:33.795 [2024-11-02 14:49:06.099718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:33.795 [2024-11-02 14:49:06.099740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:80880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.795 [2024-11-02 14:49:06.099757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:33.795 [2024-11-02 14:49:06.099778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:80888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.795 [2024-11-02 14:49:06.099794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:33.795 [2024-11-02 14:49:06.099820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:80896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.795 [2024-11-02 14:49:06.099836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:33.795 [2024-11-02 14:49:06.099859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:80904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.795 [2024-11-02 14:49:06.099875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:33.795 [2024-11-02 14:49:06.099896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:80912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.795 [2024-11-02 14:49:06.099912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:33.795 [2024-11-02 14:49:06.099934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:80920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.795 [2024-11-02 14:49:06.099950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:33.795 [2024-11-02 14:49:06.099971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:80928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.795 [2024-11-02 14:49:06.099987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:33.795 [2024-11-02 14:49:06.100009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:80936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.795 [2024-11-02 14:49:06.100024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:33.795 [2024-11-02 14:49:06.100046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:80944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.795 [2024-11-02 14:49:06.100062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:33.795 [2024-11-02 14:49:06.100084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 
lba:80952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.795 [2024-11-02 14:49:06.100100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:33.795 [2024-11-02 14:49:06.100122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:80960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.795 [2024-11-02 14:49:06.100138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:33.795 [2024-11-02 14:49:06.100159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:80968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.795 [2024-11-02 14:49:06.100175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:33.795 [2024-11-02 14:49:06.100197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:80976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.795 [2024-11-02 14:49:06.100212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:33.795 [2024-11-02 14:49:06.100234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:80984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.795 [2024-11-02 14:49:06.100250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:33.795 [2024-11-02 14:49:06.100286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:80992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.795 [2024-11-02 14:49:06.100303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:33.795 [2024-11-02 14:49:06.100324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:81000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.795 [2024-11-02 14:49:06.100340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:33.795 [2024-11-02 14:49:06.100362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:81008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.795 [2024-11-02 14:49:06.100377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:33.795 [2024-11-02 14:49:06.100399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:81016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.795 [2024-11-02 14:49:06.100415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.795 [2024-11-02 14:49:06.100436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:81024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.795 [2024-11-02 14:49:06.100452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.795 [2024-11-02 14:49:06.100474] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:81032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.795 [2024-11-02 14:49:06.100490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:33.795 [2024-11-02 14:49:06.100512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:81040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.795 [2024-11-02 14:49:06.100527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:33.795 [2024-11-02 14:49:06.100549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:81048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.795 [2024-11-02 14:49:06.100565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:33.796 [2024-11-02 14:49:06.100586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:81056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.796 [2024-11-02 14:49:06.100601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:33.796 [2024-11-02 14:49:06.100623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:81064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.796 [2024-11-02 14:49:06.100639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:33.796 [2024-11-02 14:49:06.100661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:81072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.796 [2024-11-02 14:49:06.100676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:33.796 [2024-11-02 14:49:06.100698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.796 [2024-11-02 14:49:06.100714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:33.796 [2024-11-02 14:49:06.100735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:81088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.796 [2024-11-02 14:49:06.100755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:33.796 [2024-11-02 14:49:06.100778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:81096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.796 [2024-11-02 14:49:06.100794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:33.796 [2024-11-02 14:49:06.100818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:81104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.796 [2024-11-02 14:49:06.100837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000b p:0 m:0 dnr:0 
00:33:33.796 [2024-11-02 14:49:06.100861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:81112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.796 [2024-11-02 14:49:06.100877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:33.796 [2024-11-02 14:49:06.100899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:81120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.796 [2024-11-02 14:49:06.100915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:33.796 [2024-11-02 14:49:06.100937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:81128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.796 [2024-11-02 14:49:06.100953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:33.796 [2024-11-02 14:49:06.100975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:81136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.796 [2024-11-02 14:49:06.100991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:33.796 [2024-11-02 14:49:06.101012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:81144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.796 [2024-11-02 14:49:06.101029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:33.796 [2024-11-02 14:49:06.101051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:81152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.796 [2024-11-02 14:49:06.101067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:33.796 [2024-11-02 14:49:06.101089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:81160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.796 [2024-11-02 14:49:06.101104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:33.796 [2024-11-02 14:49:06.101126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:81168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.796 [2024-11-02 14:49:06.101142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:33.796 [2024-11-02 14:49:06.101164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:81176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.796 [2024-11-02 14:49:06.101179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:33.796 [2024-11-02 14:49:06.101201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:81184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.796 [2024-11-02 14:49:06.101221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:99 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:33.796 [2024-11-02 14:49:06.101244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:81192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.796 [2024-11-02 14:49:06.101268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:33.796 [2024-11-02 14:49:06.101983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:81200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.796 [2024-11-02 14:49:06.102006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:33.796 [2024-11-02 14:49:06.102033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:81208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.796 [2024-11-02 14:49:06.102051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:33.796 [2024-11-02 14:49:06.102074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:81216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.796 [2024-11-02 14:49:06.102090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:33.796 [2024-11-02 14:49:06.102113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:81224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.796 [2024-11-02 14:49:06.102129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:33.796 [2024-11-02 14:49:06.102150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:81232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.796 [2024-11-02 14:49:06.102167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:33.796 [2024-11-02 14:49:06.102189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:81240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.796 [2024-11-02 14:49:06.102204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:33.796 [2024-11-02 14:49:06.102226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:81248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.796 [2024-11-02 14:49:06.102241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:33.796 [2024-11-02 14:49:06.102272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:81256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.796 [2024-11-02 14:49:06.102290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:33.796 [2024-11-02 14:49:06.102313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:81264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.796 [2024-11-02 14:49:06.102329] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:33.796 [2024-11-02 14:49:06.102350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:81272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.796 [2024-11-02 14:49:06.102366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:33.796 [2024-11-02 14:49:06.102389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:81280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.796 [2024-11-02 14:49:06.102405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:33.796 [2024-11-02 14:49:06.102432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:81288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.796 [2024-11-02 14:49:06.102449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:33.796 [2024-11-02 14:49:06.102471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.796 [2024-11-02 14:49:06.102487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:33.796 [2024-11-02 14:49:06.102508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:81304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.796 [2024-11-02 14:49:06.102524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:33.796 [2024-11-02 14:49:06.102545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:81312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.796 [2024-11-02 14:49:06.102560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:33.796 [2024-11-02 14:49:06.102583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:81320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.796 [2024-11-02 14:49:06.102598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:33.796 [2024-11-02 14:49:06.102621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:81328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.796 [2024-11-02 14:49:06.102636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:33.796 [2024-11-02 14:49:06.102658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:81336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.796 [2024-11-02 14:49:06.102674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:33.796 [2024-11-02 14:49:06.102711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:81344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:33.796 [2024-11-02 14:49:06.102727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:33.796 [2024-11-02 14:49:06.102750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:81352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.796 [2024-11-02 14:49:06.102779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:33.796 [2024-11-02 14:49:06.102804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.796 [2024-11-02 14:49:06.102820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:33.796 [2024-11-02 14:49:06.102843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:81368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.797 [2024-11-02 14:49:06.102860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:33.797 [2024-11-02 14:49:06.102883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.797 [2024-11-02 14:49:06.102900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:33.797 [2024-11-02 14:49:06.102927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:81384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.797 [2024-11-02 14:49:06.102943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:33.797 [2024-11-02 14:49:06.102965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:81392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.797 [2024-11-02 14:49:06.102981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:33.797 [2024-11-02 14:49:06.103002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:81400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.797 [2024-11-02 14:49:06.103018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:33.797 [2024-11-02 14:49:06.103040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:81408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.797 [2024-11-02 14:49:06.103056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:33.797 [2024-11-02 14:49:06.103078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:81416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.797 [2024-11-02 14:49:06.103093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:33.797 [2024-11-02 14:49:06.103115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81424 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.797 [2024-11-02 14:49:06.103131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:33.797 [2024-11-02 14:49:06.103152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:81432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.797 [2024-11-02 14:49:06.103168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:33.797 [2024-11-02 14:49:06.103189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:81440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.797 [2024-11-02 14:49:06.103205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:33.797 [2024-11-02 14:49:06.103228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:80424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.797 [2024-11-02 14:49:06.103244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:33.797 [2024-11-02 14:49:06.103274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:80488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.797 [2024-11-02 14:49:06.103292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:33.797 [2024-11-02 14:49:06.103314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:80496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.797 [2024-11-02 14:49:06.103330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:33.797 [2024-11-02 14:49:06.103352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:80504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.797 [2024-11-02 14:49:06.103368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:33.797 [2024-11-02 14:49:06.103391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:80512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.797 [2024-11-02 14:49:06.103412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:33.797 [2024-11-02 14:49:06.103435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:80520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.797 [2024-11-02 14:49:06.103452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:33.797 [2024-11-02 14:49:06.103474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:80528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.797 [2024-11-02 14:49:06.103491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:33.797 [2024-11-02 14:49:06.103513] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:80536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.797 [2024-11-02 14:49:06.103530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:33.797 [2024-11-02 14:49:06.103552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:80544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.797 [2024-11-02 14:49:06.103568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:33.797 [2024-11-02 14:49:06.103590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:80552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.797 [2024-11-02 14:49:06.103621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:33.797 [2024-11-02 14:49:06.103644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:80560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.797 [2024-11-02 14:49:06.103660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:33.797 [2024-11-02 14:49:06.103701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:80568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.797 [2024-11-02 14:49:06.103717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:33.797 [2024-11-02 14:49:06.103739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:80576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.797 [2024-11-02 14:49:06.103755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:33.797 [2024-11-02 14:49:06.103777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:80584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.797 [2024-11-02 14:49:06.103794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:33.797 [2024-11-02 14:49:06.103815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.797 [2024-11-02 14:49:06.103831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:33.797 [2024-11-02 14:49:06.103853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.797 [2024-11-02 14:49:06.103869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:33.797 [2024-11-02 14:49:06.103891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:80608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.797 [2024-11-02 14:49:06.103911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:33.797 [2024-11-02 14:49:06.103951] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:80616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.797 [2024-11-02 14:49:06.103967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:33.797 [2024-11-02 14:49:06.104006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:80624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.797 [2024-11-02 14:49:06.104026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:33.797 [2024-11-02 14:49:06.104050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:80632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.797 [2024-11-02 14:49:06.104066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:33.797 [2024-11-02 14:49:06.104089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:80640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.797 [2024-11-02 14:49:06.104106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:33.797 [2024-11-02 14:49:06.104130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:80648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.797 [2024-11-02 14:49:06.104146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:33.797 [2024-11-02 14:49:06.104168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:80656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.797 [2024-11-02 14:49:06.104184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:33.797 [2024-11-02 14:49:06.104206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:80664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.797 [2024-11-02 14:49:06.104222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:33.797 [2024-11-02 14:49:06.104245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:80672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.797 [2024-11-02 14:49:06.104270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:33.797 [2024-11-02 14:49:06.104294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:80680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.797 [2024-11-02 14:49:06.104310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:33.797 [2024-11-02 14:49:06.104332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:80688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.797 [2024-11-02 14:49:06.104348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0050 p:0 
m:0 dnr:0
00:33:33.797 [2024-11-02 14:49:06.104372 - 14:49:06.115599] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ/WRITE sqid:1 nsid:1 lba:80424-81440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, each completion ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0, repeated across all outstanding cids
00:33:33.803 [2024-11-02 14:49:06.115622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:81264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.803 [2024-11-02 14:49:06.115654] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:33.803 [2024-11-02 14:49:06.115677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:81272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.803 [2024-11-02 14:49:06.115692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:33.803 [2024-11-02 14:49:06.115731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.803 [2024-11-02 14:49:06.115748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:33.803 [2024-11-02 14:49:06.115772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:81288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.803 [2024-11-02 14:49:06.115788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:33.803 [2024-11-02 14:49:06.115814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:81296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.803 [2024-11-02 14:49:06.115831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:33.803 [2024-11-02 14:49:06.115854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:81304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.803 [2024-11-02 14:49:06.115870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:33.803 [2024-11-02 14:49:06.115892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:81312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.803 [2024-11-02 14:49:06.115908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:33.803 [2024-11-02 14:49:06.115930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:81320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.803 [2024-11-02 14:49:06.115947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:33.803 [2024-11-02 14:49:06.115969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:81328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.803 [2024-11-02 14:49:06.115985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:33.803 [2024-11-02 14:49:06.116007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:81336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.803 [2024-11-02 14:49:06.116024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:33.803 [2024-11-02 14:49:06.116046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.803 
[2024-11-02 14:49:06.116063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:33.803 [2024-11-02 14:49:06.116085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:81352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.803 [2024-11-02 14:49:06.116102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:33.803 [2024-11-02 14:49:06.116124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.803 [2024-11-02 14:49:06.116141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:33.803 [2024-11-02 14:49:06.116164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:81368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.803 [2024-11-02 14:49:06.116179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:33.803 [2024-11-02 14:49:06.116203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:81376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.803 [2024-11-02 14:49:06.116219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:33.803 [2024-11-02 14:49:06.116241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:81384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.803 [2024-11-02 14:49:06.116265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:33.803 [2024-11-02 14:49:06.116292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:81392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.803 [2024-11-02 14:49:06.116313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:33.803 [2024-11-02 14:49:06.116337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:81400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.803 [2024-11-02 14:49:06.116353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:33.803 [2024-11-02 14:49:06.116376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.803 [2024-11-02 14:49:06.116392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:33.803 [2024-11-02 14:49:06.116414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:81416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.803 [2024-11-02 14:49:06.116430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:33.803 [2024-11-02 14:49:06.116452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:81424 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.803 [2024-11-02 14:49:06.116469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:33.803 [2024-11-02 14:49:06.116491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:81432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.803 [2024-11-02 14:49:06.116507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:33.803 [2024-11-02 14:49:06.116529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:81440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.804 [2024-11-02 14:49:06.116561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:33.804 [2024-11-02 14:49:06.116583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:80424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.804 [2024-11-02 14:49:06.116614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:33.804 [2024-11-02 14:49:06.116644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:80488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.804 [2024-11-02 14:49:06.116661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:33.804 [2024-11-02 14:49:06.116683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:80496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.804 [2024-11-02 14:49:06.116699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:33.804 [2024-11-02 14:49:06.116721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:80504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.804 [2024-11-02 14:49:06.116737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:33.804 [2024-11-02 14:49:06.116759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:80512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.804 [2024-11-02 14:49:06.116775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:33.804 [2024-11-02 14:49:06.116797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:80520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.804 [2024-11-02 14:49:06.116819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:33.804 [2024-11-02 14:49:06.116843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:80528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.804 [2024-11-02 14:49:06.116859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:33.804 [2024-11-02 14:49:06.116897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:77 nsid:1 lba:80536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.804 [2024-11-02 14:49:06.116912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:33.804 [2024-11-02 14:49:06.116950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:80544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.804 [2024-11-02 14:49:06.116966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:33.804 [2024-11-02 14:49:06.116989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:80552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.804 [2024-11-02 14:49:06.117005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:33.804 [2024-11-02 14:49:06.117027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:80560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.804 [2024-11-02 14:49:06.117043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:33.804 [2024-11-02 14:49:06.117065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:80568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.804 [2024-11-02 14:49:06.117081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:33.804 [2024-11-02 14:49:06.117103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.804 [2024-11-02 14:49:06.117118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:33.804 [2024-11-02 14:49:06.117156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.804 [2024-11-02 14:49:06.117171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:33.804 [2024-11-02 14:49:06.117208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:80592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.804 [2024-11-02 14:49:06.117224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:33.804 [2024-11-02 14:49:06.117247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:80600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.804 [2024-11-02 14:49:06.117278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:33.804 [2024-11-02 14:49:06.117304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:80608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.804 [2024-11-02 14:49:06.117321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:33.804 [2024-11-02 14:49:06.117344] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:80616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.804 [2024-11-02 14:49:06.117360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:33.804 [2024-11-02 14:49:06.117386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:80624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.804 [2024-11-02 14:49:06.117403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:33.804 [2024-11-02 14:49:06.117425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:80632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.804 [2024-11-02 14:49:06.117441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:33.804 [2024-11-02 14:49:06.117462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:80640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.804 [2024-11-02 14:49:06.117478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:33.804 [2024-11-02 14:49:06.117500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:80648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.804 [2024-11-02 14:49:06.117516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:33.804 [2024-11-02 14:49:06.117553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:80656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.804 [2024-11-02 14:49:06.117569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:33.804 [2024-11-02 14:49:06.117591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:80664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.804 [2024-11-02 14:49:06.117622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:33.804 [2024-11-02 14:49:06.117643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:80672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.804 [2024-11-02 14:49:06.117659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:33.804 [2024-11-02 14:49:06.117679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:80680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.804 [2024-11-02 14:49:06.117694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:33.804 [2024-11-02 14:49:06.117714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:80688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.804 [2024-11-02 14:49:06.117729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0050 
p:0 m:0 dnr:0 00:33:33.804 [2024-11-02 14:49:06.117755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:80696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.804 [2024-11-02 14:49:06.117771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:33.804 [2024-11-02 14:49:06.117791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:80704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.804 [2024-11-02 14:49:06.117806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:33.804 [2024-11-02 14:49:06.117827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:80712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.804 [2024-11-02 14:49:06.117842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:33.804 [2024-11-02 14:49:06.117866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.804 [2024-11-02 14:49:06.117881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:33.804 [2024-11-02 14:49:06.117902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:80728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.804 [2024-11-02 14:49:06.117917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:33.804 [2024-11-02 14:49:06.118677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:80736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.804 [2024-11-02 14:49:06.118701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:33.804 [2024-11-02 14:49:06.118728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:80744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.804 [2024-11-02 14:49:06.118746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:33.804 [2024-11-02 14:49:06.118768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.804 [2024-11-02 14:49:06.118785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:33.804 [2024-11-02 14:49:06.118807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:80760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.804 [2024-11-02 14:49:06.118823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:33.804 [2024-11-02 14:49:06.118845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:80768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.804 [2024-11-02 14:49:06.118861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:33.804 [2024-11-02 14:49:06.118891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:80432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.804 [2024-11-02 14:49:06.118908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:33.804 [2024-11-02 14:49:06.118930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:80440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.805 [2024-11-02 14:49:06.118946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:33.805 [2024-11-02 14:49:06.118968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:80448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.805 [2024-11-02 14:49:06.118984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:33.805 [2024-11-02 14:49:06.119006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:80456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.805 [2024-11-02 14:49:06.119022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:33.805 [2024-11-02 14:49:06.119043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:80464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.805 [2024-11-02 14:49:06.119059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:33.805 [2024-11-02 14:49:06.119082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:80472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.805 [2024-11-02 14:49:06.119102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:33.805 [2024-11-02 14:49:06.119126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:80480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.805 [2024-11-02 14:49:06.119142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:33.805 [2024-11-02 14:49:06.119164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:80776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.805 [2024-11-02 14:49:06.119180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:33.805 [2024-11-02 14:49:06.119202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:80784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.805 [2024-11-02 14:49:06.119217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:33.805 [2024-11-02 14:49:06.119239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:80792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.805 [2024-11-02 14:49:06.119262] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:33.805 [2024-11-02 14:49:06.119287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:80800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.805 [2024-11-02 14:49:06.119304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:33.805 [2024-11-02 14:49:06.119326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:80808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.805 [2024-11-02 14:49:06.119342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:33.805 [2024-11-02 14:49:06.119364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:80816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.805 [2024-11-02 14:49:06.119379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:33.805 [2024-11-02 14:49:06.119401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:80824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.805 [2024-11-02 14:49:06.119417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:33.805 [2024-11-02 14:49:06.119439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:80832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.805 [2024-11-02 14:49:06.119454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:33.805 [2024-11-02 14:49:06.119475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:80840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.805 [2024-11-02 14:49:06.119491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:33.805 [2024-11-02 14:49:06.119513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:80848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.805 [2024-11-02 14:49:06.119528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:33.805 [2024-11-02 14:49:06.119566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:80856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.805 [2024-11-02 14:49:06.119585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:33.805 [2024-11-02 14:49:06.119623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:80864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.805 [2024-11-02 14:49:06.119638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:33.805 [2024-11-02 14:49:06.119660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:80872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:33.805 [2024-11-02 14:49:06.119690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:33.805 [2024-11-02 14:49:06.119714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:80880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.805 [2024-11-02 14:49:06.119730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:33.805 [2024-11-02 14:49:06.119752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:80888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.805 [2024-11-02 14:49:06.119767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:33.805 [2024-11-02 14:49:06.119789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:80896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.805 [2024-11-02 14:49:06.119805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:33.805 [2024-11-02 14:49:06.119827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:80904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.805 [2024-11-02 14:49:06.119843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:33.805 [2024-11-02 14:49:06.119865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:80912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.805 [2024-11-02 14:49:06.119881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:33.805 [2024-11-02 14:49:06.119902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.805 [2024-11-02 14:49:06.119918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:33.805 [2024-11-02 14:49:06.119941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:80928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.805 [2024-11-02 14:49:06.119956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:33.805 [2024-11-02 14:49:06.119993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:80936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.805 [2024-11-02 14:49:06.120009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:33.805 [2024-11-02 14:49:06.120031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:80944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.805 [2024-11-02 14:49:06.120063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:33.805 [2024-11-02 14:49:06.120086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 
lba:80952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.805 [2024-11-02 14:49:06.120102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:33.805 [2024-11-02 14:49:06.120128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:80960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.805 [2024-11-02 14:49:06.120145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:33.805 [2024-11-02 14:49:06.120167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:80968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.805 [2024-11-02 14:49:06.120183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:33.805 [2024-11-02 14:49:06.120205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:80976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.805 [2024-11-02 14:49:06.120220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:33.805 [2024-11-02 14:49:06.120241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:80984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.805 [2024-11-02 14:49:06.120265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:33.805 [2024-11-02 14:49:06.120289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:80992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.805 [2024-11-02 14:49:06.120305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:33.805 [2024-11-02 14:49:06.120327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:81000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.805 [2024-11-02 14:49:06.120342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:33.805 [2024-11-02 14:49:06.120364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.805 [2024-11-02 14:49:06.120380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:33.805 [2024-11-02 14:49:06.120402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:81016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.805 [2024-11-02 14:49:06.120417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.805 [2024-11-02 14:49:06.120439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:81024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.805 [2024-11-02 14:49:06.120455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.805 [2024-11-02 14:49:06.120477] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:81032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.805 [2024-11-02 14:49:06.120493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:33.805 [2024-11-02 14:49:06.120515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:81040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.805 [2024-11-02 14:49:06.120530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:33.806 [2024-11-02 14:49:06.120552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:81048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.806 [2024-11-02 14:49:06.120582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:33.806 [2024-11-02 14:49:06.120610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:81056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.806 [2024-11-02 14:49:06.120627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:33.806 [2024-11-02 14:49:06.120665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:81064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.806 [2024-11-02 14:49:06.120681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:33.806 [2024-11-02 14:49:06.120710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:81072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.806 [2024-11-02 14:49:06.120732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:33.806 [2024-11-02 14:49:06.120757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:81080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.806 [2024-11-02 14:49:06.120773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:33.806 [2024-11-02 14:49:06.120795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:81088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.806 [2024-11-02 14:49:06.120811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:33.806 [2024-11-02 14:49:06.120833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:81096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.806 [2024-11-02 14:49:06.120849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:33.806 [2024-11-02 14:49:06.120871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:81104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.806 [2024-11-02 14:49:06.120887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000b p:0 m:0 dnr:0 
00:33:33.806 [2024-11-02 14:49:06.120909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:81112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.806 [2024-11-02 14:49:06.120926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:33.806 [2024-11-02 14:49:06.120948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:81120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.806 [2024-11-02 14:49:06.120964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:33.806 [2024-11-02 14:49:06.120985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:81128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.806 [2024-11-02 14:49:06.121001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:33.806 [2024-11-02 14:49:06.121039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:81136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.806 [2024-11-02 14:49:06.121056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:33.806 [2024-11-02 14:49:06.121078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:81144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.806 [2024-11-02 14:49:06.121108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:33.806 [2024-11-02 14:49:06.121130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:81152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.806 [2024-11-02 14:49:06.121149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:33.806 [2024-11-02 14:49:06.121171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:81160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.806 [2024-11-02 14:49:06.121186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:33.806 [2024-11-02 14:49:06.121207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:81168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.806 [2024-11-02 14:49:06.121223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:33.806 [2024-11-02 14:49:06.121951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:81176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.806 [2024-11-02 14:49:06.121975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:33.806 [2024-11-02 14:49:06.122003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:81184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.806 [2024-11-02 14:49:06.122020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:49 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:33.806 [2024-11-02 14:49:06.122043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:81192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.806 [2024-11-02 14:49:06.122059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:33.806 [2024-11-02 14:49:06.122083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:81200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.806 [2024-11-02 14:49:06.122099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:33.806 [2024-11-02 14:49:06.122121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:81208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.806 [2024-11-02 14:49:06.122137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:33.806 [2024-11-02 14:49:06.122159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:81216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.806 [2024-11-02 14:49:06.122175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:33.806 [2024-11-02 14:49:06.122197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:81224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.806 [2024-11-02 14:49:06.122213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:33.806 [2024-11-02 14:49:06.122234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:81232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.806 [2024-11-02 14:49:06.122250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:33.806 [2024-11-02 14:49:06.122282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:81240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.806 [2024-11-02 14:49:06.122299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:33.806 [2024-11-02 14:49:06.122324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:81248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.806 [2024-11-02 14:49:06.122345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:33.806 [2024-11-02 14:49:06.122368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:81256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.806 [2024-11-02 14:49:06.122384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:33.806 [2024-11-02 14:49:06.122406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:81264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.806 [2024-11-02 14:49:06.122422] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:33:33.806 [2024-11-02 14:49:06.122444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:81272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:33.806 [2024-11-02 14:49:06.122461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:33:33.806 [2024-11-02 14:49:06.122483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:81280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:33.806 [2024-11-02 14:49:06.122499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... several hundred further nvme_io_qpair_print_command / spdk_nvme_print_completion pairs elided: WRITE and READ commands on qid:1 (lba 80424-81440, len:8) all completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), logged between 14:49:06.122 and 14:49:06.133 ...]
00:33:33.812 [2024-11-02 14:49:06.133153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:33:33.812 [2024-11-02 14:49:06.133180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:80872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:33.812 [2024-11-02 14:49:06.133212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:33.812 [2024-11-02 14:49:06.133237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:80880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.812 [2024-11-02 14:49:06.133253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:33.812 [2024-11-02 14:49:06.133284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:80888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.812 [2024-11-02 14:49:06.133301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:33.812 [2024-11-02 14:49:06.133322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:80896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.812 [2024-11-02 14:49:06.133338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:33.812 [2024-11-02 14:49:06.133360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.812 [2024-11-02 14:49:06.133376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:33.812 [2024-11-02 14:49:06.133398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:80912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.812 [2024-11-02 14:49:06.133413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:33.812 [2024-11-02 14:49:06.133435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:80920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.812 [2024-11-02 14:49:06.133451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:33.812 [2024-11-02 14:49:06.133473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:80928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.812 [2024-11-02 14:49:06.133489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:33.812 [2024-11-02 14:49:06.133511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:80936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.812 [2024-11-02 14:49:06.133526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:33.812 [2024-11-02 14:49:06.133549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:80944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.812 [2024-11-02 14:49:06.133565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:33.812 [2024-11-02 14:49:06.133587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 
lba:80952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.812 [2024-11-02 14:49:06.133602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:33.812 [2024-11-02 14:49:06.133624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:80960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.812 [2024-11-02 14:49:06.133640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:33.812 [2024-11-02 14:49:06.133666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:80968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.812 [2024-11-02 14:49:06.133682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:33.812 [2024-11-02 14:49:06.133704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:80976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.812 [2024-11-02 14:49:06.133720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:33.812 [2024-11-02 14:49:06.133742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:80984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.812 [2024-11-02 14:49:06.133772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:33.812 [2024-11-02 14:49:06.133795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:80992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.812 [2024-11-02 14:49:06.133810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:33.812 [2024-11-02 14:49:06.133848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:81000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.812 [2024-11-02 14:49:06.133865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:33.812 [2024-11-02 14:49:06.133887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:81008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.812 [2024-11-02 14:49:06.133902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:33.812 [2024-11-02 14:49:06.133924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:81016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.812 [2024-11-02 14:49:06.133940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.812 [2024-11-02 14:49:06.133961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:81024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.812 [2024-11-02 14:49:06.133977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.812 [2024-11-02 14:49:06.133998] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:81032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.812 [2024-11-02 14:49:06.134014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:33.812 [2024-11-02 14:49:06.134036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:81040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.812 [2024-11-02 14:49:06.134051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:33.812 [2024-11-02 14:49:06.134087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:81048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.812 [2024-11-02 14:49:06.134103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:33.812 [2024-11-02 14:49:06.134125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:81056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.812 [2024-11-02 14:49:06.134156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:33.812 [2024-11-02 14:49:06.134184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:81064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.812 [2024-11-02 14:49:06.134208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:33.812 [2024-11-02 14:49:06.134231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:81072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.812 [2024-11-02 14:49:06.134247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:33.812 [2024-11-02 14:49:06.134277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:81080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.812 [2024-11-02 14:49:06.134294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:33.812 [2024-11-02 14:49:06.134317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:81088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.812 [2024-11-02 14:49:06.134341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:33.812 [2024-11-02 14:49:06.134364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:81096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.812 [2024-11-02 14:49:06.134380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:33.812 [2024-11-02 14:49:06.134403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:81104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.812 [2024-11-02 14:49:06.134418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000b p:0 m:0 dnr:0 
00:33:33.812 [2024-11-02 14:49:06.134441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:81112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.812 [2024-11-02 14:49:06.134457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:33.812 [2024-11-02 14:49:06.134478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:81120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.812 [2024-11-02 14:49:06.134508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:33.813 [2024-11-02 14:49:06.134530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:81128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.813 [2024-11-02 14:49:06.134545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:33.813 [2024-11-02 14:49:06.134566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:81136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.813 [2024-11-02 14:49:06.134581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:33.813 [2024-11-02 14:49:06.134601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:81144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.813 [2024-11-02 14:49:06.134616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:33.813 [2024-11-02 14:49:06.134637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:81152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.813 [2024-11-02 14:49:06.134652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:33.813 [2024-11-02 14:49:06.135335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:81160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.813 [2024-11-02 14:49:06.135362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:33.813 [2024-11-02 14:49:06.135390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:81168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.813 [2024-11-02 14:49:06.135408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:33.813 [2024-11-02 14:49:06.135431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:81176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.813 [2024-11-02 14:49:06.135447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:33.813 [2024-11-02 14:49:06.135469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:81184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.813 [2024-11-02 14:49:06.135484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:123 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:33.813 [2024-11-02 14:49:06.135507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:81192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.813 [2024-11-02 14:49:06.135523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:33.813 [2024-11-02 14:49:06.135545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:81200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.813 [2024-11-02 14:49:06.135560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:33.813 [2024-11-02 14:49:06.135582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:81208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.813 [2024-11-02 14:49:06.135598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:33.813 [2024-11-02 14:49:06.135620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:81216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.813 [2024-11-02 14:49:06.135636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:33.813 [2024-11-02 14:49:06.135658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:81224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.813 [2024-11-02 14:49:06.135674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:33.813 [2024-11-02 14:49:06.135712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:81232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.813 [2024-11-02 14:49:06.135727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:33.813 [2024-11-02 14:49:06.135749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:81240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.813 [2024-11-02 14:49:06.135764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:33.813 [2024-11-02 14:49:06.135803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:81248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.813 [2024-11-02 14:49:06.135819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:33.813 [2024-11-02 14:49:06.135841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:81256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.813 [2024-11-02 14:49:06.135857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:33.813 [2024-11-02 14:49:06.135883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:81264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.813 [2024-11-02 14:49:06.135899] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:33.813 [2024-11-02 14:49:06.135921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:81272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.813 [2024-11-02 14:49:06.135943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:33.813 [2024-11-02 14:49:06.135966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:81280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.813 [2024-11-02 14:49:06.135982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:33.813 [2024-11-02 14:49:06.136004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:81288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.813 [2024-11-02 14:49:06.136019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:33.813 [2024-11-02 14:49:06.136041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:81296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.813 [2024-11-02 14:49:06.136057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:33.813 [2024-11-02 14:49:06.136079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:81304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.813 [2024-11-02 14:49:06.136095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:33.813 [2024-11-02 14:49:06.136116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.813 [2024-11-02 14:49:06.136132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:33.813 [2024-11-02 14:49:06.136154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:81320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.813 [2024-11-02 14:49:06.136170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:33.813 [2024-11-02 14:49:06.136197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:81328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.813 [2024-11-02 14:49:06.136213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:33.813 [2024-11-02 14:49:06.136235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:81336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.813 [2024-11-02 14:49:06.136251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:33.813 [2024-11-02 14:49:06.136284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:81344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:33.813 [2024-11-02 14:49:06.136301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:33.813 [2024-11-02 14:49:06.136323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:81352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.813 [2024-11-02 14:49:06.136339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:33.813 [2024-11-02 14:49:06.136369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:81360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.813 [2024-11-02 14:49:06.136386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:33.813 [2024-11-02 14:49:06.136408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:81368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.813 [2024-11-02 14:49:06.136424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:33.813 [2024-11-02 14:49:06.136446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:81376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.813 [2024-11-02 14:49:06.136462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:33.813 [2024-11-02 14:49:06.136484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:81384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.813 [2024-11-02 14:49:06.136499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:33.813 [2024-11-02 14:49:06.136521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:81392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.813 [2024-11-02 14:49:06.136537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:33.813 [2024-11-02 14:49:06.136559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:81400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.813 [2024-11-02 14:49:06.136576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:33.813 [2024-11-02 14:49:06.136598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:81408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.813 [2024-11-02 14:49:06.136614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:33.813 [2024-11-02 14:49:06.136635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:81416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.813 [2024-11-02 14:49:06.136651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:33.813 [2024-11-02 14:49:06.136673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:81424 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.813 [2024-11-02 14:49:06.136705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:33.813 [2024-11-02 14:49:06.136727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:81432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.813 [2024-11-02 14:49:06.136743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:33.814 [2024-11-02 14:49:06.136781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:81440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.814 [2024-11-02 14:49:06.136798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:33.814 [2024-11-02 14:49:06.136820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:80424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.814 [2024-11-02 14:49:06.136836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:33.814 [2024-11-02 14:49:06.136859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:80488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.814 [2024-11-02 14:49:06.136879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:33.814 [2024-11-02 14:49:06.136902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:80496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.814 [2024-11-02 14:49:06.136917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:33.814 [2024-11-02 14:49:06.136939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:80504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.814 [2024-11-02 14:49:06.136954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:33.814 [2024-11-02 14:49:06.136976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:80512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.814 [2024-11-02 14:49:06.136992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:33.814 [2024-11-02 14:49:06.137030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:80520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.814 [2024-11-02 14:49:06.137045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:33.814 [2024-11-02 14:49:06.137082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:80528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.814 [2024-11-02 14:49:06.137098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:33.814 [2024-11-02 14:49:06.137121] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:80536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.814 [2024-11-02 14:49:06.137137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:33.814 [2024-11-02 14:49:06.137159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:80544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.814 [2024-11-02 14:49:06.137174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:33.814 [2024-11-02 14:49:06.137196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:80552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.814 [2024-11-02 14:49:06.137211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:33.814 [2024-11-02 14:49:06.137233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:80560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.814 [2024-11-02 14:49:06.137272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:33.814 [2024-11-02 14:49:06.137298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:80568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.814 [2024-11-02 14:49:06.137328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:33.814 [2024-11-02 14:49:06.137352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:80576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.814 [2024-11-02 14:49:06.137367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:33.814 [2024-11-02 14:49:06.137397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:80584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.814 [2024-11-02 14:49:06.137418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:33.814 [2024-11-02 14:49:06.137441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:80592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.814 [2024-11-02 14:49:06.137456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:33.814 [2024-11-02 14:49:06.137478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:80600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.814 [2024-11-02 14:49:06.137493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:33.814 [2024-11-02 14:49:06.145569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:80608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.814 [2024-11-02 14:49:06.145599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:33.814 [2024-11-02 14:49:06.145628] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:80616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.814 [2024-11-02 14:49:06.145644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:33.814 [2024-11-02 14:49:06.145680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:80624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.814 [2024-11-02 14:49:06.145696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:33.814 [2024-11-02 14:49:06.145718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:80632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.814 [2024-11-02 14:49:06.145733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:33.814 [2024-11-02 14:49:06.145755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:80640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.814 [2024-11-02 14:49:06.145770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:33.814 [2024-11-02 14:49:06.145792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.814 [2024-11-02 14:49:06.145807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:33.814 [2024-11-02 14:49:06.145828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:80656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.814 [2024-11-02 14:49:06.145844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:33.814 [2024-11-02 14:49:06.145865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:80664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.814 [2024-11-02 14:49:06.145880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:33.814 [2024-11-02 14:49:06.145901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:80672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.814 [2024-11-02 14:49:06.145917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:33.814 [2024-11-02 14:49:06.145939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:80680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.814 [2024-11-02 14:49:06.145956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:33.814 [2024-11-02 14:49:06.145984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.814 [2024-11-02 14:49:06.146000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0050 p:0 m:0 
dnr:0 00:33:33.814 [2024-11-02 14:49:06.146022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:80696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.814 [2024-11-02 14:49:06.146038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:33.814 [2024-11-02 14:49:06.146060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:80704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.814 [2024-11-02 14:49:06.146075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:33.814 [2024-11-02 14:49:06.146367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:80712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.814 [2024-11-02 14:49:06.146392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:33.814 [2024-11-02 14:49:06.146446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:80720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.814 [2024-11-02 14:49:06.146467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:33.814 [2024-11-02 14:49:06.146495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:80728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.814 [2024-11-02 14:49:06.146512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:33.814 [2024-11-02 14:49:06.146539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:80736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.814 [2024-11-02 14:49:06.146555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:33.814 [2024-11-02 14:49:06.146581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:80744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.814 [2024-11-02 14:49:06.146597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:33.814 [2024-11-02 14:49:06.146623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:80752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.814 [2024-11-02 14:49:06.146639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:33.814 [2024-11-02 14:49:06.146664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:80760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.814 [2024-11-02 14:49:06.146681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:33.814 [2024-11-02 14:49:06.146707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:80768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.814 [2024-11-02 14:49:06.146723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:33.814 [2024-11-02 14:49:06.146750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:80432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.815 [2024-11-02 14:49:06.146766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:33.815 [2024-11-02 14:49:06.146816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:80440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.815 [2024-11-02 14:49:06.146834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:33.815 [2024-11-02 14:49:06.146860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:80448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.815 [2024-11-02 14:49:06.146877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:33.815 [2024-11-02 14:49:06.146902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:80456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.815 [2024-11-02 14:49:06.146918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:33.815 [2024-11-02 14:49:06.146943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:80464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.815 [2024-11-02 14:49:06.146959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:33.815 [2024-11-02 14:49:06.146985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:80472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.815 [2024-11-02 14:49:06.147000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:33.815 [2024-11-02 14:49:06.147025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:80480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.815 [2024-11-02 14:49:06.147041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:33.815 [2024-11-02 14:49:06.147067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:80776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.815 [2024-11-02 14:49:06.147083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:33.815 [2024-11-02 14:49:06.147108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:80784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.815 [2024-11-02 14:49:06.147124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:33.815 [2024-11-02 14:49:06.147149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.815 [2024-11-02 14:49:06.147165] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:33.815 [2024-11-02 14:49:06.147190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:80800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.815 [2024-11-02 14:49:06.147205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:33.815 [2024-11-02 14:49:06.147231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:80808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.815 [2024-11-02 14:49:06.147246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:33.815 [2024-11-02 14:49:06.147299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:80816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.815 [2024-11-02 14:49:06.147319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:33.815 [2024-11-02 14:49:06.147361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:80824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.815 [2024-11-02 14:49:06.147382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:33.815 [2024-11-02 14:49:06.147409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:80832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.815 [2024-11-02 14:49:06.147425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:33.815 [2024-11-02 14:49:06.147451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:80840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.815 [2024-11-02 14:49:06.147467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:33.815 [2024-11-02 14:49:06.147492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:80848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.815 [2024-11-02 14:49:06.147508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:33.815 [2024-11-02 14:49:06.147534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:80856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.815 [2024-11-02 14:49:06.147553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:33.815 [2024-11-02 14:49:06.147595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:80864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.815 [2024-11-02 14:49:06.147610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:33.815 [2024-11-02 14:49:06.147650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:80872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:33.815 [2024-11-02 14:49:06.147667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:33.815 [2024-11-02 14:49:06.147692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:80880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.815 [2024-11-02 14:49:06.147709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:33.815 [2024-11-02 14:49:06.147734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:80888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.815 [2024-11-02 14:49:06.147750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:33.815 [2024-11-02 14:49:06.147776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:80896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.815 [2024-11-02 14:49:06.147792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:33.815 [2024-11-02 14:49:06.147817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.815 [2024-11-02 14:49:06.147833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:33.815 [2024-11-02 14:49:06.147858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:80912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.815 [2024-11-02 14:49:06.147874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:33.815 [2024-11-02 14:49:06.147899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:80920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.815 [2024-11-02 14:49:06.147918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:33.815 [2024-11-02 14:49:06.147945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:80928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.815 [2024-11-02 14:49:06.147961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:33.815 [2024-11-02 14:49:06.147986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:80936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.815 [2024-11-02 14:49:06.148002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:33.815 [2024-11-02 14:49:06.148029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:80944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.815 [2024-11-02 14:49:06.148045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:33.815 [2024-11-02 14:49:06.148070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 
lba:80952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.815 [2024-11-02 14:49:06.148086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:33.815 [2024-11-02 14:49:06.148111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:80960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.815 [2024-11-02 14:49:06.148127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:33.815 [2024-11-02 14:49:06.148152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:80968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.815 [2024-11-02 14:49:06.148167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:33.815 [2024-11-02 14:49:06.148192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:80976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.815 [2024-11-02 14:49:06.148208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:33.815 [2024-11-02 14:49:06.148233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:80984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.815 [2024-11-02 14:49:06.148274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:33.815 [2024-11-02 14:49:06.148304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:80992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.815 [2024-11-02 14:49:06.148322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:33.815 [2024-11-02 14:49:06.148348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:81000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.815 [2024-11-02 14:49:06.148364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:33.815 [2024-11-02 14:49:06.148390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:81008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.815 [2024-11-02 14:49:06.148407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:33.815 [2024-11-02 14:49:06.148433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:81016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.815 [2024-11-02 14:49:06.148450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.815 [2024-11-02 14:49:06.148480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:81024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.815 [2024-11-02 14:49:06.148496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.815 [2024-11-02 14:49:06.148522] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.815 [2024-11-02 14:49:06.148538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:33.816 [2024-11-02 14:49:06.148580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:81040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.816 [2024-11-02 14:49:06.148596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:33.816 [2024-11-02 14:49:06.148621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:81048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.816 [2024-11-02 14:49:06.148637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:33.816 [2024-11-02 14:49:06.148662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:81056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.816 [2024-11-02 14:49:06.148678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:33.816 [2024-11-02 14:49:06.148703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:81064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.816 [2024-11-02 14:49:06.148719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:33.816 [2024-11-02 14:49:06.148744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:81072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.816 [2024-11-02 14:49:06.148761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:33.816 [2024-11-02 14:49:06.148786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:81080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.816 [2024-11-02 14:49:06.148802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:33.816 [2024-11-02 14:49:06.148827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:81088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.816 [2024-11-02 14:49:06.148842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:33.816 [2024-11-02 14:49:06.148869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:81096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.816 [2024-11-02 14:49:06.148884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:33.816 [2024-11-02 14:49:06.148909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:81104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.816 [2024-11-02 14:49:06.148925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000b p:0 m:0 dnr:0 
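The NOTICE pairs above and below are emitted by nvme_qpair.c while the active path reports the ANA state ASYMMETRIC ACCESS INACCESSIBLE: nvme_io_qpair_print_command prints each queued WRITE/READ and spdk_nvme_print_completion prints its failed completion. The "(03/02)" field is the NVMe status code type / status code pair (SCT 0x3 = path-related status, SC 0x2 = asymmetric access inaccessible). A minimal decode helper, shown only as an illustration and not part of the SPDK test scripts, could look like this:

    decode_nvme_status() {
        # Decode the "(SCT/SC)" pair printed by spdk_nvme_print_completion,
        # e.g. "(03/02)" in the lines above. Hypothetical helper, for illustration only.
        local sct=$1 sc=$2
        case "$sct" in
            00) echo "generic command status, sc=0x$sc" ;;
            01) echo "command specific status, sc=0x$sc" ;;
            02) echo "media and data integrity error, sc=0x$sc" ;;
            03) case "$sc" in                       # path-related status
                    01) echo "asymmetric access persistent loss" ;;
                    02) echo "asymmetric access inaccessible" ;;
                    03) echo "asymmetric access transition" ;;
                    *)  echo "path-related status, sc=0x$sc" ;;
                esac ;;
            *)  echo "unknown status code type 0x$sct" ;;
        esac
    }
    decode_nvme_status 03 02   # -> asymmetric access inaccessible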
00:33:33.816 [2024-11-02 14:49:06.148950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:81112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.816 [2024-11-02 14:49:06.148967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:33.816 [2024-11-02 14:49:06.148997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:81120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.816 [2024-11-02 14:49:06.149014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:33.816 [2024-11-02 14:49:06.149038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:81128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.816 [2024-11-02 14:49:06.149053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:33.816 [2024-11-02 14:49:06.149079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:81136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.816 [2024-11-02 14:49:06.149094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:33.816 [2024-11-02 14:49:06.149119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:81144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.816 [2024-11-02 14:49:06.149135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:33.816 [2024-11-02 14:49:06.149330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:81152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.816 [2024-11-02 14:49:06.149352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:33.816 7796.31 IOPS, 30.45 MiB/s [2024-11-02T13:49:25.871Z] 7337.71 IOPS, 28.66 MiB/s [2024-11-02T13:49:25.871Z] 6930.06 IOPS, 27.07 MiB/s [2024-11-02T13:49:25.871Z] 6565.32 IOPS, 25.65 MiB/s [2024-11-02T13:49:25.871Z] 6589.70 IOPS, 25.74 MiB/s [2024-11-02T13:49:25.871Z] 6664.71 IOPS, 26.03 MiB/s [2024-11-02T13:49:25.871Z] 6761.64 IOPS, 26.41 MiB/s [2024-11-02T13:49:25.871Z] 6938.96 IOPS, 27.11 MiB/s [2024-11-02T13:49:25.871Z] 7102.79 IOPS, 27.75 MiB/s [2024-11-02T13:49:25.871Z] 7246.80 IOPS, 28.31 MiB/s [2024-11-02T13:49:25.871Z] 7291.00 IOPS, 28.48 MiB/s [2024-11-02T13:49:25.871Z] 7326.52 IOPS, 28.62 MiB/s [2024-11-02T13:49:25.871Z] 7358.54 IOPS, 28.74 MiB/s [2024-11-02T13:49:25.871Z] 7431.17 IOPS, 29.03 MiB/s [2024-11-02T13:49:25.871Z] 7539.53 IOPS, 29.45 MiB/s [2024-11-02T13:49:25.871Z] 7646.84 IOPS, 29.87 MiB/s [2024-11-02T13:49:25.871Z] [2024-11-02 14:49:22.760148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:123896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.816 [2024-11-02 14:49:22.760228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:33.816 [2024-11-02 14:49:22.760303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:123928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:33.816 [2024-11-02 14:49:22.760326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:33.816 [2024-11-02 14:49:22.760352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:124456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.816 [2024-11-02 14:49:22.760369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:33.816 [2024-11-02 14:49:22.760392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:124472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.816 [2024-11-02 14:49:22.760409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:33.816 [2024-11-02 14:49:22.760432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:124488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.816 [2024-11-02 14:49:22.760448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:33.816 [2024-11-02 14:49:22.760470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:124504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.816 [2024-11-02 14:49:22.760496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:33.816 [2024-11-02 14:49:22.760520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:124520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.816 [2024-11-02 14:49:22.760538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:33.816 [2024-11-02 14:49:22.760560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:124536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.816 [2024-11-02 14:49:22.760576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:33.816 [2024-11-02 14:49:22.760598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:124552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.816 [2024-11-02 14:49:22.760615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:33.816 [2024-11-02 14:49:22.760637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:124568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.816 [2024-11-02 14:49:22.760653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:33.816 [2024-11-02 14:49:22.760675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:124584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.816 [2024-11-02 14:49:22.760691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:33.816 [2024-11-02 14:49:22.760713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 
nsid:1 lba:124600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.816 [2024-11-02 14:49:22.760730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:33.816 [2024-11-02 14:49:22.760752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:124360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.816 [2024-11-02 14:49:22.760769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:33.816 [2024-11-02 14:49:22.760791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:124392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.816 [2024-11-02 14:49:22.760806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:33.816 [2024-11-02 14:49:22.760829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:124424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.816 [2024-11-02 14:49:22.760845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:33.816 [2024-11-02 14:49:22.760866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:124608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.816 [2024-11-02 14:49:22.760883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:33.816 [2024-11-02 14:49:22.760905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:124624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.816 [2024-11-02 14:49:22.760922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:33.817 [2024-11-02 14:49:22.760947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:124640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.817 [2024-11-02 14:49:22.760966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:33.817 [2024-11-02 14:49:22.760997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:124656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.817 [2024-11-02 14:49:22.761016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:33.817 [2024-11-02 14:49:22.761042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:124672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.817 [2024-11-02 14:49:22.761061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:33.817 [2024-11-02 14:49:22.761083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:124688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.817 [2024-11-02 14:49:22.761099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:33.817 [2024-11-02 14:49:22.761122] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:124704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.817 [2024-11-02 14:49:22.761138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:33.817 [2024-11-02 14:49:22.761709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:124720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.817 [2024-11-02 14:49:22.761735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:33.817 [2024-11-02 14:49:22.761762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:124736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.817 [2024-11-02 14:49:22.761780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:33.817 [2024-11-02 14:49:22.761803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:124752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.817 [2024-11-02 14:49:22.761819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:33.817 [2024-11-02 14:49:22.761841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:124768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.817 [2024-11-02 14:49:22.761856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:33.817 [2024-11-02 14:49:22.761879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:124784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.817 [2024-11-02 14:49:22.761895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:33.817 [2024-11-02 14:49:22.761918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:124800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.817 [2024-11-02 14:49:22.761934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:33.817 [2024-11-02 14:49:22.761971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:124816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.817 [2024-11-02 14:49:22.761986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:33.817 [2024-11-02 14:49:22.762009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:124832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.817 [2024-11-02 14:49:22.762024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:33.817 [2024-11-02 14:49:22.762068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:123960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.817 [2024-11-02 14:49:22.762085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002b p:0 m:0 
dnr:0 00:33:33.817 [2024-11-02 14:49:22.762107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:123992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.817 [2024-11-02 14:49:22.762123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:33.817 [2024-11-02 14:49:22.762146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:124032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.817 [2024-11-02 14:49:22.762162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:33.817 [2024-11-02 14:49:22.762184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:124064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.817 [2024-11-02 14:49:22.762200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:33.817 [2024-11-02 14:49:22.762222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:124096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.817 [2024-11-02 14:49:22.762238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:33.817 [2024-11-02 14:49:22.762268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:124128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.817 [2024-11-02 14:49:22.762286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:33.817 [2024-11-02 14:49:22.762309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:124160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.817 [2024-11-02 14:49:22.762325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:33.817 [2024-11-02 14:49:22.762616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:124184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.817 [2024-11-02 14:49:22.762639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:33.817 [2024-11-02 14:49:22.762666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:124848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.817 [2024-11-02 14:49:22.762684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:33.817 [2024-11-02 14:49:22.762707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:124864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.817 [2024-11-02 14:49:22.762723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:33.817 [2024-11-02 14:49:22.762745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:124880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.817 [2024-11-02 14:49:22.762761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:33.817 [2024-11-02 14:49:22.762783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:124896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.817 [2024-11-02 14:49:22.762799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:33.817 [2024-11-02 14:49:22.762821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:124912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.817 [2024-11-02 14:49:22.762843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:33.817 [2024-11-02 14:49:22.762882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:124232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.817 [2024-11-02 14:49:22.762898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:33.817 7726.09 IOPS, 30.18 MiB/s [2024-11-02T13:49:25.872Z] 7740.33 IOPS, 30.24 MiB/s [2024-11-02T13:49:25.872Z] 7758.00 IOPS, 30.30 MiB/s [2024-11-02T13:49:25.872Z] Received shutdown signal, test time was about 34.465865 seconds 00:33:33.817 00:33:33.817 Latency(us) 00:33:33.817 [2024-11-02T13:49:25.872Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:33.817 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:33:33.817 Verification LBA range: start 0x0 length 0x4000 00:33:33.817 Nvme0n1 : 34.47 7761.73 30.32 0.00 0.00 16461.79 221.49 4101097.24 00:33:33.817 [2024-11-02T13:49:25.872Z] =================================================================================================================== 00:33:33.817 [2024-11-02T13:49:25.872Z] Total : 7761.73 30.32 0.00 0.00 16461.79 221.49 4101097.24 00:33:33.817 [2024-11-02 14:49:25.493997] app.c:1032:log_deprecation_hits: *WARNING*: multipath_config: deprecation 'bdev_nvme_attach_controller.multipath configuration mismatch' scheduled for removal in v25.01 hit 1 times 00:33:33.817 14:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:34.111 14:49:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:33:34.111 14:49:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:34.111 14:49:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:33:34.111 14:49:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # nvmfcleanup 00:33:34.111 14:49:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:33:34.111 14:49:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:34.111 14:49:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:33:34.111 14:49:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:34.111 14:49:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v 
-r nvme-tcp 00:33:34.111 rmmod nvme_tcp 00:33:34.111 rmmod nvme_fabrics 00:33:34.111 rmmod nvme_keyring 00:33:34.111 14:49:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:34.111 14:49:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:33:34.111 14:49:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:33:34.111 14:49:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@513 -- # '[' -n 1501515 ']' 00:33:34.111 14:49:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@514 -- # killprocess 1501515 00:33:34.111 14:49:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 1501515 ']' 00:33:34.111 14:49:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 1501515 00:33:34.111 14:49:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:33:34.111 14:49:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:34.111 14:49:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1501515 00:33:34.111 14:49:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:34.111 14:49:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:34.111 14:49:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1501515' 00:33:34.111 killing process with pid 1501515 00:33:34.111 14:49:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 1501515 00:33:34.111 14:49:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 1501515 00:33:34.372 14:49:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:33:34.372 14:49:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:33:34.372 14:49:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:33:34.372 14:49:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:33:34.372 14:49:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # iptables-save 00:33:34.372 14:49:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:33:34.372 14:49:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # iptables-restore 00:33:34.372 14:49:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:34.372 14:49:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:34.372 14:49:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:34.372 14:49:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:34.372 14:49:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:36.909 14:49:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:36.909 00:33:36.909 
real 0m43.488s 00:33:36.909 user 2m5.511s 00:33:36.909 sys 0m13.714s 00:33:36.909 14:49:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:36.909 14:49:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:36.909 ************************************ 00:33:36.909 END TEST nvmf_host_multipath_status 00:33:36.909 ************************************ 00:33:36.909 14:49:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:33:36.909 14:49:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:36.909 14:49:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:36.909 14:49:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:36.909 ************************************ 00:33:36.909 START TEST nvmf_discovery_remove_ifc 00:33:36.909 ************************************ 00:33:36.909 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:33:36.909 * Looking for test storage... 00:33:36.909 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:36.909 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:33:36.909 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lcov --version 00:33:36.909 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:33:36.909 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:33:36.909 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:36.909 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:36.909 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:36.909 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:33:36.909 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:33:36.909 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:33:36.909 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:33:36.909 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:33:36.909 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:33:36.909 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:33:36.909 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:36.909 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:33:36.909 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:33:36.909 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:36.909 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:36.909 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:33:36.909 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:33:36.909 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:36.909 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:33:36.909 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:33:36.909 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:33:36.909 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:33:36.909 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:36.909 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:33:36.909 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:33:36.909 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:36.909 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:36.910 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:33:36.910 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:36.910 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:33:36.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:36.910 --rc genhtml_branch_coverage=1 00:33:36.910 --rc genhtml_function_coverage=1 00:33:36.910 --rc genhtml_legend=1 00:33:36.910 --rc geninfo_all_blocks=1 00:33:36.910 --rc geninfo_unexecuted_blocks=1 00:33:36.910 00:33:36.910 ' 00:33:36.910 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:33:36.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:36.910 --rc genhtml_branch_coverage=1 00:33:36.910 --rc genhtml_function_coverage=1 00:33:36.910 --rc genhtml_legend=1 00:33:36.910 --rc geninfo_all_blocks=1 00:33:36.910 --rc geninfo_unexecuted_blocks=1 00:33:36.910 00:33:36.910 ' 00:33:36.910 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:33:36.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:36.910 --rc genhtml_branch_coverage=1 00:33:36.910 --rc genhtml_function_coverage=1 00:33:36.910 --rc genhtml_legend=1 00:33:36.910 --rc geninfo_all_blocks=1 00:33:36.910 --rc geninfo_unexecuted_blocks=1 00:33:36.910 00:33:36.910 ' 00:33:36.910 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:33:36.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:36.910 --rc genhtml_branch_coverage=1 00:33:36.910 --rc genhtml_function_coverage=1 00:33:36.910 --rc genhtml_legend=1 00:33:36.910 --rc geninfo_all_blocks=1 00:33:36.910 --rc geninfo_unexecuted_blocks=1 00:33:36.910 00:33:36.910 ' 00:33:36.910 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:36.910 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:33:36.910 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:36.910 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:36.910 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:36.910 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:36.910 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:36.910 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:36.910 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:36.910 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:36.910 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:36.910 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:36.910 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:36.910 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:36.910 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:36.910 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:36.910 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:36.910 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:36.910 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:36.910 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:33:36.910 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:36.910 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:36.910 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:36.910 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:36.910 14:49:28 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:36.910 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:36.910 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:33:36.910 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:36.910 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:33:36.910 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:36.910 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:36.910 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:36.910 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:36.910 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:36.910 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:36.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:36.910 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:36.910 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:36.910 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:36.910 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 
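The "[: : integer expression expected" message above appears benign in this run: nvmf/common.sh line 33 compares an empty string with -eq ('[' '' -eq 1 ']'), the test simply falls through, and the trace continues. The same block of trace shows common.sh deriving the host identity with nvme gen-hostnqn. A condensed sketch of that derivation (assumes nvme-cli is installed; this mirrors the effect of the traced lines, not the literal common.sh implementation):

    NVME_HOSTNQN=$(nvme gen-hostnqn)          # e.g. nqn.2014-08.org.nvmexpress:uuid:5b23e107-...
    NVME_HOSTID=${NVME_HOSTNQN##*:uuid:}      # keep only the UUID portion
    echo "hostnqn=$NVME_HOSTNQN hostid=$NVME_HOSTID"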
00:33:36.910 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:33:36.910 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:33:36.910 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:33:36.910 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:33:36.910 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:33:36.910 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:33:36.910 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:33:36.910 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:36.910 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@472 -- # prepare_net_devs 00:33:36.910 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@434 -- # local -g is_hw=no 00:33:36.910 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@436 -- # remove_spdk_ns 00:33:36.910 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:36.910 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:36.910 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:36.910 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:33:36.910 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:33:36.910 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:33:36.910 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:38.814 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:38.814 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:33:38.814 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:38.814 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:38.814 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:38.814 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:38.814 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:38.814 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:33:38.814 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:38.814 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:33:38.814 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:33:38.814 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@321 -- # x722=() 00:33:38.814 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:33:38.814 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:33:38.814 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:33:38.814 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:38.814 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:38.814 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:38.814 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:38.814 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:38.814 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:38.814 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:38.814 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:38.814 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:38.814 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:38.814 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:38.814 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:33:38.814 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:33:38.814 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:33:38.814 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:33:38.814 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:33:38.814 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:33:38.814 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:33:38.814 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:38.814 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:38.814 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:33:38.814 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@364 -- # for pci in 
"${pci_devs[@]}" 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:38.815 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ up == up ]] 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:38.815 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ up == up ]] 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:38.815 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:33:38.815 14:49:30 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # is_hw=yes 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:38.815 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:38.815 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.286 ms 00:33:38.815 00:33:38.815 --- 10.0.0.2 ping statistics --- 00:33:38.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:38.815 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:38.815 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:38.815 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:33:38.815 00:33:38.815 --- 10.0.0.1 ping statistics --- 00:33:38.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:38.815 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # return 0 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@505 -- # nvmfpid=1508255 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@506 -- # waitforlisten 1508255 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 1508255 ']' 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:38.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
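Up to this point nvmf_tcp_init has isolated the target NIC in its own network namespace and verified connectivity in both directions, and nvmfappstart then launches nvmf_tgt inside that namespace and waits for its RPC socket. Condensed into a standalone sketch (interface names, addresses and the nvmf_tgt invocation are taken from this run; the socket-polling loop is a simplified stand-in for waitforlisten, not the literal nvmf/common.sh code):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target NIC moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                      # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target -> initiator
    # Start the target in the namespace and wait for /var/tmp/spdk.sock to appear.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done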
00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:38.815 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:38.815 [2024-11-02 14:49:30.777604] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:33:38.815 [2024-11-02 14:49:30.777695] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:38.815 [2024-11-02 14:49:30.848493] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:39.074 [2024-11-02 14:49:30.938381] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:39.074 [2024-11-02 14:49:30.938451] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:39.074 [2024-11-02 14:49:30.938468] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:39.074 [2024-11-02 14:49:30.938482] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:39.074 [2024-11-02 14:49:30.938493] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:39.074 [2024-11-02 14:49:30.938524] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:33:39.074 14:49:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:39.074 14:49:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:33:39.074 14:49:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:33:39.074 14:49:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:39.074 14:49:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:39.074 14:49:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:39.074 14:49:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:33:39.074 14:49:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.074 14:49:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:39.074 [2024-11-02 14:49:31.083277] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:39.074 [2024-11-02 14:49:31.091489] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:33:39.074 null0 00:33:39.074 [2024-11-02 14:49:31.123402] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:39.334 14:49:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.334 14:49:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1508274 00:33:39.334 14:49:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:33:39.334 14:49:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@60 -- # waitforlisten 1508274 /tmp/host.sock 00:33:39.334 14:49:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 1508274 ']' 00:33:39.334 14:49:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:33:39.334 14:49:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:39.334 14:49:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:33:39.334 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:33:39.334 14:49:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:39.334 14:49:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:39.334 [2024-11-02 14:49:31.191375] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:33:39.334 [2024-11-02 14:49:31.191444] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1508274 ] 00:33:39.334 [2024-11-02 14:49:31.253325] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:39.334 [2024-11-02 14:49:31.345012] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:33:39.592 14:49:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:39.592 14:49:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:33:39.592 14:49:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:39.592 14:49:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:33:39.592 14:49:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.592 14:49:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:39.592 14:49:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.592 14:49:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:33:39.592 14:49:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.593 14:49:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:39.593 14:49:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.593 14:49:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:33:39.593 14:49:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.593 14:49:31 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:40.527 [2024-11-02 14:49:32.541414] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:40.527 [2024-11-02 14:49:32.541455] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:40.527 [2024-11-02 14:49:32.541479] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:40.786 [2024-11-02 14:49:32.667903] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:33:41.044 [2024-11-02 14:49:32.854139] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:33:41.044 [2024-11-02 14:49:32.854232] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:33:41.044 [2024-11-02 14:49:32.854306] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:33:41.044 [2024-11-02 14:49:32.854329] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:41.044 [2024-11-02 14:49:32.854374] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:41.044 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:41.044 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:33:41.044 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:41.044 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:41.044 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:41.044 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.044 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:41.044 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:41.044 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:41.044 [2024-11-02 14:49:32.860114] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xebe250 was disconnected and freed. delete nvme_qpair. 
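The block above starts a second nvmf_tgt in the root namespace and drives it purely as an NVMe-oF host. rpc_cmd in the trace is effectively a wrapper around scripts/rpc.py aimed at the app's -r socket, so the @58-@69 sequence amounts to roughly the following direct invocations; this is a sketch run from an SPDK checkout, with every flag copied from the trace, and the discovery service on 10.0.0.2:8009 being the target started inside the namespace earlier:

  ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
  ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
  ./scripts/rpc.py -s /tmp/host.sock framework_start_init
  ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
      -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
      --fast-io-fail-timeout-sec 1 --wait-for-attach
  ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'   # expect nvme0n1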
00:33:41.044 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:41.044 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:33:41.044 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:33:41.044 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:33:41.044 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:33:41.044 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:41.044 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:41.044 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:41.044 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.044 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:41.044 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:41.044 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:41.044 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:41.044 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:41.044 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:41.979 14:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:41.979 14:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:41.979 14:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.979 14:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:41.979 14:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:41.979 14:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:41.979 14:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:41.979 14:49:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:41.979 14:49:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:41.979 14:49:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:43.354 14:49:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:43.354 14:49:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:43.354 14:49:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:43.354 14:49:35 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.354 14:49:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:43.354 14:49:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:43.354 14:49:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:43.354 14:49:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.354 14:49:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:43.354 14:49:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:44.288 14:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:44.288 14:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:44.288 14:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:44.288 14:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:44.288 14:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:44.288 14:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:44.288 14:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:44.288 14:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:44.288 14:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:44.288 14:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:45.223 14:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:45.223 14:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:45.223 14:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:45.223 14:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.223 14:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:45.223 14:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:45.223 14:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:45.223 14:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.223 14:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:45.223 14:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:46.158 14:49:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:46.158 14:49:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:46.158 14:49:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:46.158 14:49:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.158 14:49:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:46.158 14:49:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:46.158 14:49:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:46.158 14:49:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.158 14:49:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:46.158 14:49:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:46.416 [2024-11-02 14:49:38.295118] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:33:46.416 [2024-11-02 14:49:38.295196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:46.416 [2024-11-02 14:49:38.295222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.416 [2024-11-02 14:49:38.295244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:46.416 [2024-11-02 14:49:38.295268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.416 [2024-11-02 14:49:38.295286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:46.416 [2024-11-02 14:49:38.295314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.416 [2024-11-02 14:49:38.295328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:46.416 [2024-11-02 14:49:38.295340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.416 [2024-11-02 14:49:38.295353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:46.416 [2024-11-02 14:49:38.295366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.416 [2024-11-02 14:49:38.295379] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe9ab00 is same with the state(6) to be set 00:33:46.416 [2024-11-02 14:49:38.305147] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe9ab00 (9): Bad file descriptor 00:33:46.416 [2024-11-02 14:49:38.315203] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:47.351 14:49:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:47.351 14:49:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
00:33:47.351 14:49:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.351 14:49:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:47.351 14:49:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:47.351 14:49:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:47.351 14:49:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:47.351 [2024-11-02 14:49:39.327291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:33:47.351 [2024-11-02 14:49:39.327348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe9ab00 with addr=10.0.0.2, port=4420 00:33:47.351 [2024-11-02 14:49:39.327375] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe9ab00 is same with the state(6) to be set 00:33:47.351 [2024-11-02 14:49:39.327428] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe9ab00 (9): Bad file descriptor 00:33:47.351 [2024-11-02 14:49:39.327882] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:47.351 [2024-11-02 14:49:39.327930] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:47.351 [2024-11-02 14:49:39.327950] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:47.351 [2024-11-02 14:49:39.327968] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:47.351 [2024-11-02 14:49:39.327999] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:47.351 [2024-11-02 14:49:39.328017] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:47.351 14:49:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.351 14:49:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:47.351 14:49:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:48.286 [2024-11-02 14:49:40.330544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:48.286 [2024-11-02 14:49:40.330624] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:48.286 [2024-11-02 14:49:40.330642] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:48.286 [2024-11-02 14:49:40.330661] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:33:48.286 [2024-11-02 14:49:40.330699] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
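The repeated @29/@33/@34 groups above and below are a one-second poll of the host's bdev list. A reconstruction of those helpers from the trace follows; the real versions live in test/nvmf/host/discovery_remove_ifc.sh and may differ in details such as an overall timeout:

  # Poll the host app's bdev list until it matches the expected contents.
  get_bdev_list() {
      ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  wait_for_bdev() {
      local expected=$1
      while [[ "$(get_bdev_list)" != "$expected" ]]; do
          sleep 1                        # matches the 1 s poll interval in the trace
      done
  }
  wait_for_bdev nvme0n1                  # after the initial discovery attach
  wait_for_bdev ''                       # after the target-side interface is removed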
00:33:48.286 [2024-11-02 14:49:40.330741] bdev_nvme.c:6913:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:33:48.286 [2024-11-02 14:49:40.330809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:48.286 [2024-11-02 14:49:40.330833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.286 [2024-11-02 14:49:40.330856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:48.286 [2024-11-02 14:49:40.330871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.286 [2024-11-02 14:49:40.330887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:48.286 [2024-11-02 14:49:40.330902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.286 [2024-11-02 14:49:40.330917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:48.286 [2024-11-02 14:49:40.330932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.286 [2024-11-02 14:49:40.330947] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:48.286 [2024-11-02 14:49:40.330962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.286 [2024-11-02 14:49:40.330977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
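The connect() timeouts, the failed controller re-initialization and the removal of the discovery entry above are the intended result of the @75/@76 commands issued earlier, and the @82/@83 lines that follow undo them so discovery can re-attach. As a standalone sequence, with names and addresses taken from this run:

  ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0   # yank the target address
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down              # and the link
  # nvme0n1 disappears once --ctrlr-loss-timeout-sec 2 expires
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # restore connectivity
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  # discovery re-attaches the subsystem as a new controller (nvme1 -> nvme1n1)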
00:33:48.286 [2024-11-02 14:49:40.331099] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe8a210 (9): Bad file descriptor 00:33:48.286 [2024-11-02 14:49:40.332121] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:33:48.286 [2024-11-02 14:49:40.332158] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:33:48.544 14:49:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:48.544 14:49:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:48.544 14:49:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:48.544 14:49:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:48.544 14:49:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:48.544 14:49:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:48.544 14:49:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:48.544 14:49:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.544 14:49:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:33:48.544 14:49:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:48.544 14:49:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:48.544 14:49:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:33:48.544 14:49:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:48.544 14:49:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:48.544 14:49:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:48.544 14:49:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:48.544 14:49:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:48.544 14:49:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:48.544 14:49:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:48.544 14:49:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.544 14:49:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:48.544 14:49:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:49.478 14:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:49.478 14:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:49.478 14:49:41 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.478 14:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:49.478 14:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:49.478 14:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:49.478 14:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:49.478 14:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.478 14:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:49.478 14:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:50.412 [2024-11-02 14:49:42.387021] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:50.412 [2024-11-02 14:49:42.387046] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:50.412 [2024-11-02 14:49:42.387079] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:50.670 [2024-11-02 14:49:42.515524] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:33:50.670 14:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:50.670 14:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:50.670 14:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:50.671 14:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:50.671 14:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:50.671 14:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:50.671 14:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:50.671 14:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:50.671 14:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:50.671 14:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:50.929 [2024-11-02 14:49:42.740230] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:33:50.929 [2024-11-02 14:49:42.740307] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:33:50.929 [2024-11-02 14:49:42.740341] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:33:50.929 [2024-11-02 14:49:42.740363] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:33:50.929 [2024-11-02 14:49:42.740376] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:50.929 [2024-11-02 14:49:42.746169] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xe96420 was disconnected and freed. 
delete nvme_qpair. 00:33:51.873 14:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:51.873 14:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:51.873 14:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:51.873 14:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.873 14:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:51.873 14:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:51.873 14:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:51.873 14:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.873 14:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:33:51.873 14:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:33:51.873 14:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1508274 00:33:51.873 14:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 1508274 ']' 00:33:51.873 14:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 1508274 00:33:51.873 14:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:33:51.873 14:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:51.873 14:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1508274 00:33:51.873 14:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:51.873 14:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:51.873 14:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1508274' 00:33:51.873 killing process with pid 1508274 00:33:51.873 14:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 1508274 00:33:51.873 14:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 1508274 00:33:51.873 14:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:33:51.873 14:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # nvmfcleanup 00:33:51.873 14:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:33:51.873 14:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:51.873 14:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:33:51.873 14:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:51.873 14:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:51.873 rmmod nvme_tcp 00:33:51.873 rmmod nvme_fabrics 00:33:51.873 rmmod nvme_keyring 
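Once nvme1n1 shows up the test clears its trap and tears the fixture down. The killprocess / nvmfcleanup / iptr / remove_spdk_ns output around this point boils down to roughly the commands below; the PIDs are specific to this run, and the final namespace removal is an assumption, since _remove_spdk_ns runs with its output redirected away:

  kill 1508274 && wait 1508274                 # host-side nvmf_tgt ($hostpid)
  modprobe -v -r nvme-tcp                      # drops nvme_tcp / nvme_fabrics / nvme_keyring
  modprobe -v -r nvme-fabrics
  kill 1508255 && wait 1508255                 # target-side nvmf_tgt in the namespace
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # remove the port-4420 ACCEPT rule
  ip netns delete cvl_0_0_ns_spdk              # assumed equivalent of remove_spdk_ns
  ip -4 addr flush cvl_0_1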
00:33:51.873 14:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:51.873 14:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:33:51.873 14:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:33:51.873 14:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@513 -- # '[' -n 1508255 ']' 00:33:51.873 14:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@514 -- # killprocess 1508255 00:33:51.873 14:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 1508255 ']' 00:33:51.873 14:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 1508255 00:33:51.873 14:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:33:51.873 14:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:51.873 14:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1508255 00:33:52.133 14:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:52.133 14:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:52.133 14:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1508255' 00:33:52.133 killing process with pid 1508255 00:33:52.133 14:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 1508255 00:33:52.133 14:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 1508255 00:33:52.133 14:49:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:33:52.133 14:49:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:33:52.133 14:49:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:33:52.133 14:49:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:33:52.133 14:49:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # iptables-save 00:33:52.393 14:49:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:33:52.393 14:49:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # iptables-restore 00:33:52.393 14:49:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:52.393 14:49:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:52.393 14:49:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:52.393 14:49:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:52.393 14:49:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:54.297 14:49:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:54.297 00:33:54.297 real 0m17.776s 00:33:54.297 user 0m25.904s 00:33:54.297 sys 0m2.949s 00:33:54.297 14:49:46 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:54.297 14:49:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:54.297 ************************************ 00:33:54.297 END TEST nvmf_discovery_remove_ifc 00:33:54.297 ************************************ 00:33:54.297 14:49:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:33:54.297 14:49:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:54.297 14:49:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:54.297 14:49:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.297 ************************************ 00:33:54.297 START TEST nvmf_identify_kernel_target 00:33:54.297 ************************************ 00:33:54.297 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:33:54.297 * Looking for test storage... 00:33:54.556 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:54.556 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:33:54.556 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lcov --version 00:33:54.556 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:33:54.556 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:33:54.556 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:54.556 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:54.556 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:54.556 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:33:54.556 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:33:54.556 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:33:54.556 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:33:54.556 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:33:54.556 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:33:54.556 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:33:54.556 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:54.556 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:33:54.556 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:33:54.557 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:54.557 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:54.557 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:33:54.557 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:33:54.557 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:54.557 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:33:54.557 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:33:54.557 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:33:54.557 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:33:54.557 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:54.557 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:33:54.557 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:33:54.557 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:54.557 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:54.557 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:33:54.557 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:54.557 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:33:54.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:54.557 --rc genhtml_branch_coverage=1 00:33:54.557 --rc genhtml_function_coverage=1 00:33:54.557 --rc genhtml_legend=1 00:33:54.557 --rc geninfo_all_blocks=1 00:33:54.557 --rc geninfo_unexecuted_blocks=1 00:33:54.557 00:33:54.557 ' 00:33:54.557 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:33:54.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:54.557 --rc genhtml_branch_coverage=1 00:33:54.557 --rc genhtml_function_coverage=1 00:33:54.557 --rc genhtml_legend=1 00:33:54.557 --rc geninfo_all_blocks=1 00:33:54.557 --rc geninfo_unexecuted_blocks=1 00:33:54.557 00:33:54.557 ' 00:33:54.557 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:33:54.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:54.557 --rc genhtml_branch_coverage=1 00:33:54.557 --rc genhtml_function_coverage=1 00:33:54.557 --rc genhtml_legend=1 00:33:54.557 --rc geninfo_all_blocks=1 00:33:54.557 --rc geninfo_unexecuted_blocks=1 00:33:54.557 00:33:54.557 ' 00:33:54.557 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:33:54.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:54.557 --rc genhtml_branch_coverage=1 00:33:54.557 --rc genhtml_function_coverage=1 00:33:54.557 --rc genhtml_legend=1 00:33:54.557 --rc geninfo_all_blocks=1 00:33:54.557 --rc geninfo_unexecuted_blocks=1 00:33:54.557 00:33:54.557 ' 00:33:54.557 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:54.557 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:33:54.557 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:54.557 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:54.557 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:54.557 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:54.557 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:54.557 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:54.557 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:54.557 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:54.557 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:54.557 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:54.557 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:54.557 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:54.557 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:54.557 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:54.557 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:54.557 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:54.557 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:54.557 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:33:54.557 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:54.557 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:54.557 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:54.557 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:54.557 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:54.557 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:54.557 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:33:54.557 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:54.557 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:33:54.557 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:54.557 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:54.557 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:54.557 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:54.557 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:54.557 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:33:54.557 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:54.557 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:54.557 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:54.557 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:54.557 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:33:54.557 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:33:54.557 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:54.557 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:33:54.557 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:33:54.557 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:33:54.557 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:54.557 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:54.557 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:54.557 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:33:54.557 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:33:54.557 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:33:54.557 14:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:33:56.464 14:49:48 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:56.464 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@365 -- # 
echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:56.464 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:56.464 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:56.464 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # (( 2 == 0 )) 
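The "Found net devices under ..." messages above come from a sysfs glob over each detected PCI function; a condensed sketch of that lookup, using the PCI address reported in this run:

    pci=0000:0a:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)         # one entry per netdev backed by this function
    pci_net_devs=("${pci_net_devs[@]##*/}")                  # keep only the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"  # e.g. cvl_0_0 in the output above
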
00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # is_hw=yes 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:56.464 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:56.465 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:56.465 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:56.465 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:56.465 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:56.465 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:56.465 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:56.465 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:56.465 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping 
-c 1 10.0.0.2 00:33:56.465 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:56.465 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.316 ms 00:33:56.465 00:33:56.465 --- 10.0.0.2 ping statistics --- 00:33:56.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:56.465 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:33:56.465 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:56.724 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:56.724 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:33:56.724 00:33:56.724 --- 10.0.0.1 ping statistics --- 00:33:56.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:56.724 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:33:56.724 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:56.724 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # return 0 00:33:56.724 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:33:56.724 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:56.724 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:33:56.724 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:33:56.724 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:56.724 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:33:56.724 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:33:56.724 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:33:56.724 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:33:56.724 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@765 -- # local ip 00:33:56.724 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@766 -- # ip_candidates=() 00:33:56.724 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@766 -- # local -A ip_candidates 00:33:56.724 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:56.724 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:56.724 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:33:56.724 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:56.724 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:33:56.724 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:33:56.724 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:33:56.724 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:33:56.724 14:49:48 
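nvmf_tcp_init's network setup, condensed from the commands traced above. The interface names (cvl_0_0/cvl_0_1) and 10.0.0.x addresses are the ones this run detected and will differ on other hosts:

    ip netns add cvl_0_0_ns_spdk                        # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target NIC into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                  # root namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> root namespace
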
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:33:56.724 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:33:56.724 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:33:56.724 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:56.724 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:56.724 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:56.724 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # local block nvme 00:33:56.724 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # [[ ! -e /sys/module/nvmet ]] 00:33:56.724 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@666 -- # modprobe nvmet 00:33:56.724 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:56.724 14:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:57.660 Waiting for block devices as requested 00:33:57.660 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:33:57.919 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:57.919 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:58.177 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:58.177 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:58.177 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:58.436 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:58.436 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:58.436 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:58.436 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:58.695 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:58.695 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:58.695 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:58.695 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:58.955 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:58.955 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:58.955 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:59.213 14:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:33:59.213 14:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:59.213 14:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:33:59.213 14:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:33:59.213 14:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:59.213 14:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:33:59.213 14:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:33:59.213 
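Before exporting a local NVMe disk through the kernel target, the script rejects zoned devices and devices that already carry a partition table (the "No valid GPT data, bailing" and empty blkid PTTYPE just below mean nvme0n1 is free to use). A sketch of the zoned check, with the device name taken from this run:

    device=nvme0n1
    if [ -e "/sys/block/$device/queue/zoned" ] &&
       [ "$(cat "/sys/block/$device/queue/zoned")" != none ]; then
        echo "$device is zoned, skipping"               # the trace shows 'none', so the device is kept
    fi
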
14:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:33:59.213 14:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:59.213 No valid GPT data, bailing 00:33:59.213 14:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:59.213 14:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:33:59.213 14:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:33:59.213 14:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:33:59.213 14:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # [[ -b /dev/nvme0n1 ]] 00:33:59.213 14:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:59.213 14:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@683 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:59.213 14:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:59.213 14:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:33:59.213 14:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # echo 1 00:33:59.213 14:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@692 -- # echo /dev/nvme0n1 00:33:59.214 14:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo 1 00:33:59.214 14:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 10.0.0.1 00:33:59.214 14:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo tcp 00:33:59.214 14:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 4420 00:33:59.214 14:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # echo ipv4 00:33:59.214 14:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:59.214 14:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:33:59.214 00:33:59.214 Discovery Log Number of Records 2, Generation counter 2 00:33:59.214 =====Discovery Log Entry 0====== 00:33:59.214 trtype: tcp 00:33:59.214 adrfam: ipv4 00:33:59.214 subtype: current discovery subsystem 00:33:59.214 treq: not specified, sq flow control disable supported 00:33:59.214 portid: 1 00:33:59.214 trsvcid: 4420 00:33:59.214 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:59.214 traddr: 10.0.0.1 00:33:59.214 eflags: none 00:33:59.214 sectype: none 00:33:59.214 =====Discovery Log Entry 1====== 00:33:59.214 trtype: tcp 00:33:59.214 adrfam: ipv4 00:33:59.214 subtype: nvme subsystem 00:33:59.214 treq: not specified, sq flow control disable supported 00:33:59.214 portid: 1 00:33:59.214 trsvcid: 4420 00:33:59.214 subnqn: nqn.2016-06.io.spdk:testnqn 00:33:59.214 traddr: 
10.0.0.1 00:33:59.214 eflags: none 00:33:59.214 sectype: none 00:33:59.214 14:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:33:59.214 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:33:59.474 ===================================================== 00:33:59.474 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:33:59.474 ===================================================== 00:33:59.474 Controller Capabilities/Features 00:33:59.474 ================================ 00:33:59.474 Vendor ID: 0000 00:33:59.474 Subsystem Vendor ID: 0000 00:33:59.474 Serial Number: 25d3a5d10df1c1997581 00:33:59.474 Model Number: Linux 00:33:59.474 Firmware Version: 6.8.9-20 00:33:59.474 Recommended Arb Burst: 0 00:33:59.474 IEEE OUI Identifier: 00 00 00 00:33:59.474 Multi-path I/O 00:33:59.474 May have multiple subsystem ports: No 00:33:59.474 May have multiple controllers: No 00:33:59.474 Associated with SR-IOV VF: No 00:33:59.474 Max Data Transfer Size: Unlimited 00:33:59.474 Max Number of Namespaces: 0 00:33:59.474 Max Number of I/O Queues: 1024 00:33:59.474 NVMe Specification Version (VS): 1.3 00:33:59.474 NVMe Specification Version (Identify): 1.3 00:33:59.474 Maximum Queue Entries: 1024 00:33:59.474 Contiguous Queues Required: No 00:33:59.474 Arbitration Mechanisms Supported 00:33:59.474 Weighted Round Robin: Not Supported 00:33:59.474 Vendor Specific: Not Supported 00:33:59.474 Reset Timeout: 7500 ms 00:33:59.474 Doorbell Stride: 4 bytes 00:33:59.474 NVM Subsystem Reset: Not Supported 00:33:59.474 Command Sets Supported 00:33:59.474 NVM Command Set: Supported 00:33:59.474 Boot Partition: Not Supported 00:33:59.474 Memory Page Size Minimum: 4096 bytes 00:33:59.474 Memory Page Size Maximum: 4096 bytes 00:33:59.474 Persistent Memory Region: Not Supported 00:33:59.474 Optional Asynchronous Events Supported 00:33:59.474 Namespace Attribute Notices: Not Supported 00:33:59.474 Firmware Activation Notices: Not Supported 00:33:59.474 ANA Change Notices: Not Supported 00:33:59.474 PLE Aggregate Log Change Notices: Not Supported 00:33:59.474 LBA Status Info Alert Notices: Not Supported 00:33:59.474 EGE Aggregate Log Change Notices: Not Supported 00:33:59.474 Normal NVM Subsystem Shutdown event: Not Supported 00:33:59.474 Zone Descriptor Change Notices: Not Supported 00:33:59.474 Discovery Log Change Notices: Supported 00:33:59.474 Controller Attributes 00:33:59.474 128-bit Host Identifier: Not Supported 00:33:59.474 Non-Operational Permissive Mode: Not Supported 00:33:59.474 NVM Sets: Not Supported 00:33:59.474 Read Recovery Levels: Not Supported 00:33:59.474 Endurance Groups: Not Supported 00:33:59.474 Predictable Latency Mode: Not Supported 00:33:59.474 Traffic Based Keep ALive: Not Supported 00:33:59.474 Namespace Granularity: Not Supported 00:33:59.474 SQ Associations: Not Supported 00:33:59.474 UUID List: Not Supported 00:33:59.474 Multi-Domain Subsystem: Not Supported 00:33:59.474 Fixed Capacity Management: Not Supported 00:33:59.474 Variable Capacity Management: Not Supported 00:33:59.474 Delete Endurance Group: Not Supported 00:33:59.475 Delete NVM Set: Not Supported 00:33:59.475 Extended LBA Formats Supported: Not Supported 00:33:59.475 Flexible Data Placement Supported: Not Supported 00:33:59.475 00:33:59.475 Controller Memory Buffer Support 00:33:59.475 ================================ 
00:33:59.475 Supported: No 00:33:59.475 00:33:59.475 Persistent Memory Region Support 00:33:59.475 ================================ 00:33:59.475 Supported: No 00:33:59.475 00:33:59.475 Admin Command Set Attributes 00:33:59.475 ============================ 00:33:59.475 Security Send/Receive: Not Supported 00:33:59.475 Format NVM: Not Supported 00:33:59.475 Firmware Activate/Download: Not Supported 00:33:59.475 Namespace Management: Not Supported 00:33:59.475 Device Self-Test: Not Supported 00:33:59.475 Directives: Not Supported 00:33:59.475 NVMe-MI: Not Supported 00:33:59.475 Virtualization Management: Not Supported 00:33:59.475 Doorbell Buffer Config: Not Supported 00:33:59.475 Get LBA Status Capability: Not Supported 00:33:59.475 Command & Feature Lockdown Capability: Not Supported 00:33:59.475 Abort Command Limit: 1 00:33:59.475 Async Event Request Limit: 1 00:33:59.475 Number of Firmware Slots: N/A 00:33:59.475 Firmware Slot 1 Read-Only: N/A 00:33:59.475 Firmware Activation Without Reset: N/A 00:33:59.475 Multiple Update Detection Support: N/A 00:33:59.475 Firmware Update Granularity: No Information Provided 00:33:59.475 Per-Namespace SMART Log: No 00:33:59.475 Asymmetric Namespace Access Log Page: Not Supported 00:33:59.475 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:33:59.475 Command Effects Log Page: Not Supported 00:33:59.475 Get Log Page Extended Data: Supported 00:33:59.475 Telemetry Log Pages: Not Supported 00:33:59.475 Persistent Event Log Pages: Not Supported 00:33:59.475 Supported Log Pages Log Page: May Support 00:33:59.475 Commands Supported & Effects Log Page: Not Supported 00:33:59.475 Feature Identifiers & Effects Log Page:May Support 00:33:59.475 NVMe-MI Commands & Effects Log Page: May Support 00:33:59.475 Data Area 4 for Telemetry Log: Not Supported 00:33:59.475 Error Log Page Entries Supported: 1 00:33:59.475 Keep Alive: Not Supported 00:33:59.475 00:33:59.475 NVM Command Set Attributes 00:33:59.475 ========================== 00:33:59.475 Submission Queue Entry Size 00:33:59.475 Max: 1 00:33:59.475 Min: 1 00:33:59.475 Completion Queue Entry Size 00:33:59.475 Max: 1 00:33:59.475 Min: 1 00:33:59.475 Number of Namespaces: 0 00:33:59.475 Compare Command: Not Supported 00:33:59.475 Write Uncorrectable Command: Not Supported 00:33:59.475 Dataset Management Command: Not Supported 00:33:59.475 Write Zeroes Command: Not Supported 00:33:59.475 Set Features Save Field: Not Supported 00:33:59.475 Reservations: Not Supported 00:33:59.475 Timestamp: Not Supported 00:33:59.475 Copy: Not Supported 00:33:59.475 Volatile Write Cache: Not Present 00:33:59.475 Atomic Write Unit (Normal): 1 00:33:59.475 Atomic Write Unit (PFail): 1 00:33:59.475 Atomic Compare & Write Unit: 1 00:33:59.475 Fused Compare & Write: Not Supported 00:33:59.475 Scatter-Gather List 00:33:59.475 SGL Command Set: Supported 00:33:59.475 SGL Keyed: Not Supported 00:33:59.475 SGL Bit Bucket Descriptor: Not Supported 00:33:59.475 SGL Metadata Pointer: Not Supported 00:33:59.475 Oversized SGL: Not Supported 00:33:59.475 SGL Metadata Address: Not Supported 00:33:59.475 SGL Offset: Supported 00:33:59.475 Transport SGL Data Block: Not Supported 00:33:59.475 Replay Protected Memory Block: Not Supported 00:33:59.475 00:33:59.475 Firmware Slot Information 00:33:59.475 ========================= 00:33:59.475 Active slot: 0 00:33:59.475 00:33:59.475 00:33:59.475 Error Log 00:33:59.475 ========= 00:33:59.475 00:33:59.475 Active Namespaces 00:33:59.475 ================= 00:33:59.475 Discovery Log Page 00:33:59.475 
================== 00:33:59.475 Generation Counter: 2 00:33:59.475 Number of Records: 2 00:33:59.475 Record Format: 0 00:33:59.475 00:33:59.475 Discovery Log Entry 0 00:33:59.475 ---------------------- 00:33:59.475 Transport Type: 3 (TCP) 00:33:59.475 Address Family: 1 (IPv4) 00:33:59.475 Subsystem Type: 3 (Current Discovery Subsystem) 00:33:59.475 Entry Flags: 00:33:59.475 Duplicate Returned Information: 0 00:33:59.475 Explicit Persistent Connection Support for Discovery: 0 00:33:59.475 Transport Requirements: 00:33:59.475 Secure Channel: Not Specified 00:33:59.475 Port ID: 1 (0x0001) 00:33:59.475 Controller ID: 65535 (0xffff) 00:33:59.475 Admin Max SQ Size: 32 00:33:59.475 Transport Service Identifier: 4420 00:33:59.475 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:33:59.475 Transport Address: 10.0.0.1 00:33:59.475 Discovery Log Entry 1 00:33:59.475 ---------------------- 00:33:59.475 Transport Type: 3 (TCP) 00:33:59.475 Address Family: 1 (IPv4) 00:33:59.475 Subsystem Type: 2 (NVM Subsystem) 00:33:59.475 Entry Flags: 00:33:59.475 Duplicate Returned Information: 0 00:33:59.475 Explicit Persistent Connection Support for Discovery: 0 00:33:59.475 Transport Requirements: 00:33:59.475 Secure Channel: Not Specified 00:33:59.475 Port ID: 1 (0x0001) 00:33:59.475 Controller ID: 65535 (0xffff) 00:33:59.475 Admin Max SQ Size: 32 00:33:59.475 Transport Service Identifier: 4420 00:33:59.475 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:33:59.475 Transport Address: 10.0.0.1 00:33:59.475 14:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:59.475 get_feature(0x01) failed 00:33:59.475 get_feature(0x02) failed 00:33:59.475 get_feature(0x04) failed 00:33:59.475 ===================================================== 00:33:59.475 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:59.475 ===================================================== 00:33:59.475 Controller Capabilities/Features 00:33:59.475 ================================ 00:33:59.475 Vendor ID: 0000 00:33:59.475 Subsystem Vendor ID: 0000 00:33:59.475 Serial Number: 397119996b91393917d5 00:33:59.475 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:33:59.475 Firmware Version: 6.8.9-20 00:33:59.475 Recommended Arb Burst: 6 00:33:59.475 IEEE OUI Identifier: 00 00 00 00:33:59.475 Multi-path I/O 00:33:59.475 May have multiple subsystem ports: Yes 00:33:59.475 May have multiple controllers: Yes 00:33:59.475 Associated with SR-IOV VF: No 00:33:59.475 Max Data Transfer Size: Unlimited 00:33:59.475 Max Number of Namespaces: 1024 00:33:59.475 Max Number of I/O Queues: 128 00:33:59.475 NVMe Specification Version (VS): 1.3 00:33:59.475 NVMe Specification Version (Identify): 1.3 00:33:59.475 Maximum Queue Entries: 1024 00:33:59.475 Contiguous Queues Required: No 00:33:59.475 Arbitration Mechanisms Supported 00:33:59.475 Weighted Round Robin: Not Supported 00:33:59.475 Vendor Specific: Not Supported 00:33:59.475 Reset Timeout: 7500 ms 00:33:59.475 Doorbell Stride: 4 bytes 00:33:59.475 NVM Subsystem Reset: Not Supported 00:33:59.475 Command Sets Supported 00:33:59.475 NVM Command Set: Supported 00:33:59.475 Boot Partition: Not Supported 00:33:59.475 Memory Page Size Minimum: 4096 bytes 00:33:59.475 Memory Page Size Maximum: 4096 bytes 00:33:59.475 Persistent Memory Region: Not 
Supported 00:33:59.475 Optional Asynchronous Events Supported 00:33:59.475 Namespace Attribute Notices: Supported 00:33:59.475 Firmware Activation Notices: Not Supported 00:33:59.475 ANA Change Notices: Supported 00:33:59.475 PLE Aggregate Log Change Notices: Not Supported 00:33:59.475 LBA Status Info Alert Notices: Not Supported 00:33:59.475 EGE Aggregate Log Change Notices: Not Supported 00:33:59.475 Normal NVM Subsystem Shutdown event: Not Supported 00:33:59.475 Zone Descriptor Change Notices: Not Supported 00:33:59.475 Discovery Log Change Notices: Not Supported 00:33:59.475 Controller Attributes 00:33:59.475 128-bit Host Identifier: Supported 00:33:59.475 Non-Operational Permissive Mode: Not Supported 00:33:59.475 NVM Sets: Not Supported 00:33:59.475 Read Recovery Levels: Not Supported 00:33:59.475 Endurance Groups: Not Supported 00:33:59.475 Predictable Latency Mode: Not Supported 00:33:59.475 Traffic Based Keep ALive: Supported 00:33:59.475 Namespace Granularity: Not Supported 00:33:59.475 SQ Associations: Not Supported 00:33:59.475 UUID List: Not Supported 00:33:59.475 Multi-Domain Subsystem: Not Supported 00:33:59.475 Fixed Capacity Management: Not Supported 00:33:59.475 Variable Capacity Management: Not Supported 00:33:59.475 Delete Endurance Group: Not Supported 00:33:59.475 Delete NVM Set: Not Supported 00:33:59.475 Extended LBA Formats Supported: Not Supported 00:33:59.475 Flexible Data Placement Supported: Not Supported 00:33:59.475 00:33:59.475 Controller Memory Buffer Support 00:33:59.475 ================================ 00:33:59.475 Supported: No 00:33:59.475 00:33:59.475 Persistent Memory Region Support 00:33:59.475 ================================ 00:33:59.475 Supported: No 00:33:59.475 00:33:59.476 Admin Command Set Attributes 00:33:59.476 ============================ 00:33:59.476 Security Send/Receive: Not Supported 00:33:59.476 Format NVM: Not Supported 00:33:59.476 Firmware Activate/Download: Not Supported 00:33:59.476 Namespace Management: Not Supported 00:33:59.476 Device Self-Test: Not Supported 00:33:59.476 Directives: Not Supported 00:33:59.476 NVMe-MI: Not Supported 00:33:59.476 Virtualization Management: Not Supported 00:33:59.476 Doorbell Buffer Config: Not Supported 00:33:59.476 Get LBA Status Capability: Not Supported 00:33:59.476 Command & Feature Lockdown Capability: Not Supported 00:33:59.476 Abort Command Limit: 4 00:33:59.476 Async Event Request Limit: 4 00:33:59.476 Number of Firmware Slots: N/A 00:33:59.476 Firmware Slot 1 Read-Only: N/A 00:33:59.476 Firmware Activation Without Reset: N/A 00:33:59.476 Multiple Update Detection Support: N/A 00:33:59.476 Firmware Update Granularity: No Information Provided 00:33:59.476 Per-Namespace SMART Log: Yes 00:33:59.476 Asymmetric Namespace Access Log Page: Supported 00:33:59.476 ANA Transition Time : 10 sec 00:33:59.476 00:33:59.476 Asymmetric Namespace Access Capabilities 00:33:59.476 ANA Optimized State : Supported 00:33:59.476 ANA Non-Optimized State : Supported 00:33:59.476 ANA Inaccessible State : Supported 00:33:59.476 ANA Persistent Loss State : Supported 00:33:59.476 ANA Change State : Supported 00:33:59.476 ANAGRPID is not changed : No 00:33:59.476 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:33:59.476 00:33:59.476 ANA Group Identifier Maximum : 128 00:33:59.476 Number of ANA Group Identifiers : 128 00:33:59.476 Max Number of Allowed Namespaces : 1024 00:33:59.476 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:33:59.476 Command Effects Log Page: Supported 00:33:59.476 Get Log Page Extended Data: 
Supported 00:33:59.476 Telemetry Log Pages: Not Supported 00:33:59.476 Persistent Event Log Pages: Not Supported 00:33:59.476 Supported Log Pages Log Page: May Support 00:33:59.476 Commands Supported & Effects Log Page: Not Supported 00:33:59.476 Feature Identifiers & Effects Log Page:May Support 00:33:59.476 NVMe-MI Commands & Effects Log Page: May Support 00:33:59.476 Data Area 4 for Telemetry Log: Not Supported 00:33:59.476 Error Log Page Entries Supported: 128 00:33:59.476 Keep Alive: Supported 00:33:59.476 Keep Alive Granularity: 1000 ms 00:33:59.476 00:33:59.476 NVM Command Set Attributes 00:33:59.476 ========================== 00:33:59.476 Submission Queue Entry Size 00:33:59.476 Max: 64 00:33:59.476 Min: 64 00:33:59.476 Completion Queue Entry Size 00:33:59.476 Max: 16 00:33:59.476 Min: 16 00:33:59.476 Number of Namespaces: 1024 00:33:59.476 Compare Command: Not Supported 00:33:59.476 Write Uncorrectable Command: Not Supported 00:33:59.476 Dataset Management Command: Supported 00:33:59.476 Write Zeroes Command: Supported 00:33:59.476 Set Features Save Field: Not Supported 00:33:59.476 Reservations: Not Supported 00:33:59.476 Timestamp: Not Supported 00:33:59.476 Copy: Not Supported 00:33:59.476 Volatile Write Cache: Present 00:33:59.476 Atomic Write Unit (Normal): 1 00:33:59.476 Atomic Write Unit (PFail): 1 00:33:59.476 Atomic Compare & Write Unit: 1 00:33:59.476 Fused Compare & Write: Not Supported 00:33:59.476 Scatter-Gather List 00:33:59.476 SGL Command Set: Supported 00:33:59.476 SGL Keyed: Not Supported 00:33:59.476 SGL Bit Bucket Descriptor: Not Supported 00:33:59.476 SGL Metadata Pointer: Not Supported 00:33:59.476 Oversized SGL: Not Supported 00:33:59.476 SGL Metadata Address: Not Supported 00:33:59.476 SGL Offset: Supported 00:33:59.476 Transport SGL Data Block: Not Supported 00:33:59.476 Replay Protected Memory Block: Not Supported 00:33:59.476 00:33:59.476 Firmware Slot Information 00:33:59.476 ========================= 00:33:59.476 Active slot: 0 00:33:59.476 00:33:59.476 Asymmetric Namespace Access 00:33:59.476 =========================== 00:33:59.476 Change Count : 0 00:33:59.476 Number of ANA Group Descriptors : 1 00:33:59.476 ANA Group Descriptor : 0 00:33:59.476 ANA Group ID : 1 00:33:59.476 Number of NSID Values : 1 00:33:59.476 Change Count : 0 00:33:59.476 ANA State : 1 00:33:59.476 Namespace Identifier : 1 00:33:59.476 00:33:59.476 Commands Supported and Effects 00:33:59.476 ============================== 00:33:59.476 Admin Commands 00:33:59.476 -------------- 00:33:59.476 Get Log Page (02h): Supported 00:33:59.476 Identify (06h): Supported 00:33:59.476 Abort (08h): Supported 00:33:59.476 Set Features (09h): Supported 00:33:59.476 Get Features (0Ah): Supported 00:33:59.476 Asynchronous Event Request (0Ch): Supported 00:33:59.476 Keep Alive (18h): Supported 00:33:59.476 I/O Commands 00:33:59.476 ------------ 00:33:59.476 Flush (00h): Supported 00:33:59.476 Write (01h): Supported LBA-Change 00:33:59.476 Read (02h): Supported 00:33:59.476 Write Zeroes (08h): Supported LBA-Change 00:33:59.476 Dataset Management (09h): Supported 00:33:59.476 00:33:59.476 Error Log 00:33:59.476 ========= 00:33:59.476 Entry: 0 00:33:59.476 Error Count: 0x3 00:33:59.476 Submission Queue Id: 0x0 00:33:59.476 Command Id: 0x5 00:33:59.476 Phase Bit: 0 00:33:59.476 Status Code: 0x2 00:33:59.476 Status Code Type: 0x0 00:33:59.476 Do Not Retry: 1 00:33:59.476 Error Location: 0x28 00:33:59.476 LBA: 0x0 00:33:59.476 Namespace: 0x0 00:33:59.476 Vendor Log Page: 0x0 00:33:59.476 ----------- 
00:33:59.476 Entry: 1 00:33:59.476 Error Count: 0x2 00:33:59.476 Submission Queue Id: 0x0 00:33:59.476 Command Id: 0x5 00:33:59.476 Phase Bit: 0 00:33:59.476 Status Code: 0x2 00:33:59.476 Status Code Type: 0x0 00:33:59.476 Do Not Retry: 1 00:33:59.476 Error Location: 0x28 00:33:59.476 LBA: 0x0 00:33:59.476 Namespace: 0x0 00:33:59.476 Vendor Log Page: 0x0 00:33:59.476 ----------- 00:33:59.476 Entry: 2 00:33:59.476 Error Count: 0x1 00:33:59.476 Submission Queue Id: 0x0 00:33:59.476 Command Id: 0x4 00:33:59.476 Phase Bit: 0 00:33:59.476 Status Code: 0x2 00:33:59.476 Status Code Type: 0x0 00:33:59.476 Do Not Retry: 1 00:33:59.476 Error Location: 0x28 00:33:59.476 LBA: 0x0 00:33:59.476 Namespace: 0x0 00:33:59.476 Vendor Log Page: 0x0 00:33:59.476 00:33:59.476 Number of Queues 00:33:59.476 ================ 00:33:59.476 Number of I/O Submission Queues: 128 00:33:59.476 Number of I/O Completion Queues: 128 00:33:59.476 00:33:59.476 ZNS Specific Controller Data 00:33:59.476 ============================ 00:33:59.476 Zone Append Size Limit: 0 00:33:59.476 00:33:59.476 00:33:59.476 Active Namespaces 00:33:59.476 ================= 00:33:59.476 get_feature(0x05) failed 00:33:59.476 Namespace ID:1 00:33:59.476 Command Set Identifier: NVM (00h) 00:33:59.476 Deallocate: Supported 00:33:59.476 Deallocated/Unwritten Error: Not Supported 00:33:59.476 Deallocated Read Value: Unknown 00:33:59.476 Deallocate in Write Zeroes: Not Supported 00:33:59.476 Deallocated Guard Field: 0xFFFF 00:33:59.476 Flush: Supported 00:33:59.476 Reservation: Not Supported 00:33:59.476 Namespace Sharing Capabilities: Multiple Controllers 00:33:59.476 Size (in LBAs): 1953525168 (931GiB) 00:33:59.476 Capacity (in LBAs): 1953525168 (931GiB) 00:33:59.476 Utilization (in LBAs): 1953525168 (931GiB) 00:33:59.476 UUID: b2f99b5a-75ad-42c7-b1fb-b00493e72db1 00:33:59.476 Thin Provisioning: Not Supported 00:33:59.476 Per-NS Atomic Units: Yes 00:33:59.476 Atomic Boundary Size (Normal): 0 00:33:59.476 Atomic Boundary Size (PFail): 0 00:33:59.476 Atomic Boundary Offset: 0 00:33:59.476 NGUID/EUI64 Never Reused: No 00:33:59.476 ANA group ID: 1 00:33:59.476 Namespace Write Protected: No 00:33:59.476 Number of LBA Formats: 1 00:33:59.476 Current LBA Format: LBA Format #00 00:33:59.476 LBA Format #00: Data Size: 512 Metadata Size: 0 00:33:59.476 00:33:59.476 14:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:33:59.476 14:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:33:59.476 14:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:33:59.476 14:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:59.476 14:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:33:59.476 14:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:59.476 14:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:59.476 rmmod nvme_tcp 00:33:59.476 rmmod nvme_fabrics 00:33:59.476 14:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:59.476 14:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:33:59.476 14:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:33:59.476 14:49:51 
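The subsystem identified above was assembled through the kernel nvmet configfs interface (the mkdir/echo/ln -s commands earlier in the trace). xtrace does not record the redirect targets of those echo commands, so the attribute paths below are the standard nvmet ones rather than values read from the log; a condensed sketch:

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    modprobe nvmet                  # nvmet_tcp also ends up loaded, as the later 'modprobe -r nvmet_tcp nvmet' shows
    mkdir "$subsys"
    mkdir "$subsys/namespaces/1"
    mkdir "$nvmet/ports/1"
    echo "SPDK-nqn.2016-06.io.spdk:testnqn" > "$subsys/attr_model"   # matches the Model Number reported above
    echo 1            > "$subsys/attr_allow_any_host"
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1            > "$subsys/namespaces/1/enable"
    echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
    echo tcp          > "$nvmet/ports/1/addr_trtype"
    echo 4420         > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4         > "$nvmet/ports/1/addr_adrfam"
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"
    nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420
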
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:33:59.477 14:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:33:59.477 14:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:33:59.477 14:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:33:59.477 14:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:33:59.477 14:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # iptables-save 00:33:59.477 14:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:33:59.477 14:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # iptables-restore 00:33:59.735 14:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:59.735 14:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:59.735 14:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:59.735 14:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:59.735 14:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:01.640 14:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:01.640 14:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:34:01.640 14:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:34:01.640 14:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@710 -- # echo 0 00:34:01.640 14:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:01.640 14:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:01.640 14:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:01.640 14:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:01.640 14:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:34:01.640 14:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet 00:34:01.640 14:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@722 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:03.016 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:03.016 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:03.016 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:03.016 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:03.016 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:03.016 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:03.016 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:03.016 
0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:03.016 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:03.016 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:03.016 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:03.016 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:03.016 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:03.016 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:03.016 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:03.016 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:03.953 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:34:03.953 00:34:03.953 real 0m9.580s 00:34:03.953 user 0m2.066s 00:34:03.953 sys 0m3.485s 00:34:03.953 14:49:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:03.953 14:49:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:34:03.953 ************************************ 00:34:03.953 END TEST nvmf_identify_kernel_target 00:34:03.953 ************************************ 00:34:03.953 14:49:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:34:03.953 14:49:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:34:03.953 14:49:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:03.953 14:49:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.953 ************************************ 00:34:03.953 START TEST nvmf_auth_host 00:34:03.953 ************************************ 00:34:03.953 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:34:03.953 * Looking for test storage... 
00:34:03.953 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:03.953 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:34:03.953 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lcov --version 00:34:03.953 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:34:04.212 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:34:04.212 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:04.212 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:04.212 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:04.212 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:34:04.212 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:34:04.212 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:34:04.212 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:34:04.212 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:34:04.212 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:34:04.212 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:34:04.212 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:04.212 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:34:04.212 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:34:04.212 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:04.212 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:04.212 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:34:04.212 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:34:04.212 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:04.212 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:34:04.212 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:34:04.212 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:34:04.212 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:34:04.212 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:04.212 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:34:04.212 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:34:04.212 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:04.212 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:04.212 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:34:04.212 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:04.212 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:34:04.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:04.212 --rc genhtml_branch_coverage=1 00:34:04.212 --rc genhtml_function_coverage=1 00:34:04.212 --rc genhtml_legend=1 00:34:04.212 --rc geninfo_all_blocks=1 00:34:04.212 --rc geninfo_unexecuted_blocks=1 00:34:04.212 00:34:04.212 ' 00:34:04.212 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:34:04.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:04.212 --rc genhtml_branch_coverage=1 00:34:04.212 --rc genhtml_function_coverage=1 00:34:04.212 --rc genhtml_legend=1 00:34:04.212 --rc geninfo_all_blocks=1 00:34:04.212 --rc geninfo_unexecuted_blocks=1 00:34:04.212 00:34:04.212 ' 00:34:04.212 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:34:04.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:04.212 --rc genhtml_branch_coverage=1 00:34:04.212 --rc genhtml_function_coverage=1 00:34:04.212 --rc genhtml_legend=1 00:34:04.212 --rc geninfo_all_blocks=1 00:34:04.212 --rc geninfo_unexecuted_blocks=1 00:34:04.212 00:34:04.212 ' 00:34:04.212 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:34:04.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:04.212 --rc genhtml_branch_coverage=1 00:34:04.212 --rc genhtml_function_coverage=1 00:34:04.212 --rc genhtml_legend=1 00:34:04.212 --rc geninfo_all_blocks=1 00:34:04.212 --rc geninfo_unexecuted_blocks=1 00:34:04.212 00:34:04.212 ' 00:34:04.212 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:04.212 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:34:04.212 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:04.212 14:49:56 
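The lcov probe above ends in a small version comparison (`lt 1.15 2`, handled by cmp_versions in scripts/common.sh: split both versions on ".-", then compare component by component). A simplified sketch of the same idea, not the exact implementation:

    lt() {                                    # succeed if version $1 sorts before version $2
        local IFS=.- i v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            ((${v1[i]:-0} < ${v2[i]:-0})) && return 0
            ((${v1[i]:-0} > ${v2[i]:-0})) && return 1
        done
        return 1                              # versions are equal
    }
    lt 1.15 2 && echo older                   # prints "older", matching the branch taken in the trace
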
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:04.212 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:04.212 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:04.212 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:04.212 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:04.212 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:04.212 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:04.212 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:04.212 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:04.212 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:04.212 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:04.212 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:04.212 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:04.213 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:04.213 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:04.213 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:04.213 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:34:04.213 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:04.213 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:04.213 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:04.213 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:04.213 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:04.213 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:04.213 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:34:04.213 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:04.213 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:34:04.213 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:04.213 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:04.213 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:04.213 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:04.213 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:04.213 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:04.213 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:04.213 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:04.213 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:04.213 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:04.213 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:34:04.213 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:34:04.213 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:34:04.213 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:34:04.213 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:04.213 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:04.213 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:34:04.213 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:34:04.213 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:34:04.213 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:34:04.213 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:04.213 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@472 -- # prepare_net_devs 00:34:04.213 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@434 -- # local -g is_hw=no 00:34:04.213 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # remove_spdk_ns 00:34:04.213 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:04.213 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:04.213 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:04.213 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:34:04.213 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:34:04.213 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:34:04.213 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.114 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:06.114 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:34:06.114 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:06.114 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:06.114 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:06.114 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:06.114 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:06.114 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:34:06.114 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:06.114 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:34:06.114 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:34:06.114 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:34:06.114 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:34:06.114 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:34:06.114 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:34:06.114 14:49:57 
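host/auth.sh declares the digest and DH-group lists seen a little earlier (sha256/384/512 and the ffdhe groups), plus empty keys/ckeys arrays; presumably the test sweeps combinations of them. The loop below only illustrates that matrix and is not the script's actual control flow:

    digests=("sha256" "sha384" "sha512")
    dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192")
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            echo "auth test case: digest=$digest dhgroup=$dhgroup"   # 15 combinations in total
        done
    done
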
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:06.114 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:06.114 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:06.114 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:06.114 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:06.114 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:06.114 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:06.114 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:06.114 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:06.114 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:06.114 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:06.114 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:34:06.114 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:34:06.114 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:34:06.114 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:34:06.114 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:34:06.114 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:34:06.114 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:34:06.114 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:06.114 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:06.114 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:34:06.114 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:34:06.114 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:06.114 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:06.114 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:34:06.115 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:34:06.115 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:06.115 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:06.115 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:34:06.115 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:34:06.115 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:06.115 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:06.115 14:49:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:34:06.115 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:34:06.115 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:34:06.115 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:34:06.115 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:34:06.115 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:06.115 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:34:06.115 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:06.115 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ up == up ]] 00:34:06.115 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:34:06.115 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:06.115 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:06.115 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:06.115 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:34:06.115 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:34:06.115 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:06.115 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:34:06.115 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:06.115 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ up == up ]] 00:34:06.115 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:34:06.115 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:06.115 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:06.115 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:06.115 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:34:06.115 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:34:06.115 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # is_hw=yes 00:34:06.115 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:34:06.115 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:34:06.115 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:34:06.115 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:06.115 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:06.115 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:06.115 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:06.115 14:49:57 
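For reference, the device walk traced above reduces to a sysfs lookup: each supported PCI function (here the two Intel E810 ports, 0x8086:0x159b) is mapped to its kernel net device by globbing /sys/bus/pci/devices/<bdf>/net/. A minimal standalone sketch of that lookup, using the addresses seen in this run (interface names differ per host):

  # List the net device behind each E810 function found above.
  for pci in 0000:0a:00.0 0000:0a:00.1; do
      for dev in "/sys/bus/pci/devices/$pci/net/"*; do
          echo "Found net devices under $pci: ${dev##*/}"
      done
  done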
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:06.115 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:06.115 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:06.115 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:06.115 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:06.115 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:06.115 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:06.115 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:06.115 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:06.115 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:06.115 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:06.115 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:06.115 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:06.115 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:06.115 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:06.373 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:06.373 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:06.373 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:06.373 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:06.373 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:06.373 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:34:06.373 00:34:06.373 --- 10.0.0.2 ping statistics --- 00:34:06.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:06.374 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:34:06.374 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:06.374 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:06.374 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:34:06.374 00:34:06.374 --- 10.0.0.1 ping statistics --- 00:34:06.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:06.374 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:34:06.374 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:06.374 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # return 0 00:34:06.374 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:34:06.374 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:06.374 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:34:06.374 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:34:06.374 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:06.374 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:34:06.374 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:34:06.374 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:34:06.374 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:34:06.374 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:06.374 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.374 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@505 -- # nvmfpid=1515476 00:34:06.374 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:34:06.374 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # waitforlisten 1515476 00:34:06.374 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 1515476 ']' 00:34:06.374 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:06.374 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:06.374 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
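Condensed from the nvmf_tcp_init trace above: the target-side port cvl_0_0 is moved into a private network namespace and addressed as 10.0.0.2/24, the initiator-side port cvl_0_1 stays in the root namespace as 10.0.0.1/24, TCP port 4420 is opened, and reachability is verified in both directions. The same sequence stripped of the test wrappers (the iptables comment tag is omitted):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                 # root namespace -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target namespace -> initiator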
00:34:06.374 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:06.374 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.633 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:06.633 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:34:06.633 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:34:06.633 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:06.633 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.633 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:06.633 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:34:06.633 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:34:06.633 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:34:06.633 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:06.633 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:34:06.633 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:34:06.633 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:34:06.633 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:06.633 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=2efb283859038ff342145aceb7603e34 00:34:06.633 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:34:06.633 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.sHw 00:34:06.633 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 2efb283859038ff342145aceb7603e34 0 00:34:06.633 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 2efb283859038ff342145aceb7603e34 0 00:34:06.633 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:34:06.633 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:34:06.633 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=2efb283859038ff342145aceb7603e34 00:34:06.633 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:34:06.633 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:34:06.633 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.sHw 00:34:06.633 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.sHw 00:34:06.633 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.sHw 00:34:06.633 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:34:06.633 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:34:06.633 14:49:58 
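The nvmfappstart step above then launches the SPDK target inside that namespace with nvme_auth debug logging and waits for its RPC socket. Reduced to its essentials it is roughly the following; the backgrounding and polling here are a simplification of the waitforlisten helper, not a copy of it:

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
  nvmfpid=$!
  # Poll until the app answers on its default RPC socket, /var/tmp/spdk.sock.
  while ! scripts/rpc.py -t 1 rpc_get_methods &>/dev/null; do sleep 0.5; done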
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:06.633 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:34:06.633 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha512 00:34:06.633 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=64 00:34:06.633 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:34:06.633 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=58bc091e6722db1fd7aa9d9f0a55e287e58871e0946feab6df2394f1512a17bf 00:34:06.633 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:34:06.633 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.IfA 00:34:06.633 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 58bc091e6722db1fd7aa9d9f0a55e287e58871e0946feab6df2394f1512a17bf 3 00:34:06.633 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 58bc091e6722db1fd7aa9d9f0a55e287e58871e0946feab6df2394f1512a17bf 3 00:34:06.633 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:34:06.633 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:34:06.633 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=58bc091e6722db1fd7aa9d9f0a55e287e58871e0946feab6df2394f1512a17bf 00:34:06.633 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=3 00:34:06.633 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.IfA 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.IfA 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.IfA 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=af8090cd979ce3ac31a5b25848f74a25f619eff03fd6c3eb 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.OUy 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key af8090cd979ce3ac31a5b25848f74a25f619eff03fd6c3eb 0 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 af8090cd979ce3ac31a5b25848f74a25f619eff03fd6c3eb 0 
00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=af8090cd979ce3ac31a5b25848f74a25f619eff03fd6c3eb 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.OUy 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.OUy 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.OUy 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha384 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=5b9d540bd485b14726f2bdc1ef381ef6e6bfc238ef4f66d3 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.imV 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 5b9d540bd485b14726f2bdc1ef381ef6e6bfc238ef4f66d3 2 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 5b9d540bd485b14726f2bdc1ef381ef6e6bfc238ef4f66d3 2 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=5b9d540bd485b14726f2bdc1ef381ef6e6bfc238ef4f66d3 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=2 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.imV 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.imV 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.imV 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:06.894 14:49:58 
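Each gen_dhchap_key call in this stretch (the pattern repeats below for the remaining slots) draws len/2 random bytes as a hex string with xxd, wraps them as an NVMe DH-HMAC-CHAP secret of the form DHHC-1:<id>:<base64 blob>: with id 00/01/02/03 for null/sha256/sha384/sha512 (matching the digests table in the trace), writes the result to a mktemp file and chmods it to 0600. The python one-liner doing the wrapping is not expanded by xtrace; it is assumed here to be base64 over the hex string plus its CRC-32, per the usual DHHC-1 representation. A hypothetical stand-in under that assumption:

  gen_dhchap_key_sketch() {    # illustrative only, not the traced helper itself
      local digest=$1 len=$2 key file
      local -A ids=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
      key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)        # len hex characters of randomness
      file=$(mktemp -t "spdk.key-$digest.XXX")
      # Assumed encoding: DHHC-1:<id>:base64(key || crc32(key)):
      python3 -c 'import base64,binascii,struct,sys; k=sys.argv[1].encode(); c=struct.pack("<I",binascii.crc32(k)); print(f"DHHC-1:{int(sys.argv[2]):02x}:{base64.b64encode(k+c).decode()}:")' \
          "$key" "${ids[$digest]}" > "$file"
      chmod 0600 "$file"
      echo "$file"
  }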
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha256 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=64145957bb66a163fb4e79a849acdbce 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.R8I 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 64145957bb66a163fb4e79a849acdbce 1 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 64145957bb66a163fb4e79a849acdbce 1 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=64145957bb66a163fb4e79a849acdbce 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=1 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.R8I 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.R8I 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.R8I 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha256 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=56cf9349e382516117f3557276694985 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.auq 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 56cf9349e382516117f3557276694985 1 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 56cf9349e382516117f3557276694985 1 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # 
key=56cf9349e382516117f3557276694985 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=1 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.auq 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.auq 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.auq 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha384 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=c267a5bafa6cd2d8a7ed1560a0960e79e0ab221a06bceae9 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.kGv 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key c267a5bafa6cd2d8a7ed1560a0960e79e0ab221a06bceae9 2 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 c267a5bafa6cd2d8a7ed1560a0960e79e0ab221a06bceae9 2 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=c267a5bafa6cd2d8a7ed1560a0960e79e0ab221a06bceae9 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=2 00:34:06.894 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:34:07.184 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.kGv 00:34:07.184 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.kGv 00:34:07.184 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.kGv 00:34:07.184 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:34:07.184 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:34:07.184 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:07.184 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:34:07.184 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:34:07.184 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:34:07.184 14:49:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:07.184 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=1ef3f5f00f3710cd9dff5b6db2a76c85 00:34:07.184 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:34:07.184 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.f0J 00:34:07.184 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 1ef3f5f00f3710cd9dff5b6db2a76c85 0 00:34:07.184 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 1ef3f5f00f3710cd9dff5b6db2a76c85 0 00:34:07.184 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:34:07.184 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:34:07.184 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=1ef3f5f00f3710cd9dff5b6db2a76c85 00:34:07.184 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:34:07.184 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:34:07.184 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.f0J 00:34:07.184 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.f0J 00:34:07.184 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.f0J 00:34:07.184 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:34:07.184 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:34:07.184 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:07.184 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:34:07.184 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha512 00:34:07.184 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=64 00:34:07.184 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:34:07.185 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=cd2514b3043b8db441f36b6b26e829d3dfb2a00eee0a80bf3bac72482759a14d 00:34:07.185 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:34:07.185 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.EfI 00:34:07.185 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key cd2514b3043b8db441f36b6b26e829d3dfb2a00eee0a80bf3bac72482759a14d 3 00:34:07.185 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 cd2514b3043b8db441f36b6b26e829d3dfb2a00eee0a80bf3bac72482759a14d 3 00:34:07.185 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:34:07.185 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:34:07.185 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=cd2514b3043b8db441f36b6b26e829d3dfb2a00eee0a80bf3bac72482759a14d 00:34:07.185 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=3 00:34:07.185 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@729 -- # python - 00:34:07.185 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.EfI 00:34:07.185 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.EfI 00:34:07.185 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.EfI 00:34:07.185 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:34:07.185 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1515476 00:34:07.185 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 1515476 ']' 00:34:07.185 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:07.185 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:07.185 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:07.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:07.185 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:07.185 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.467 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:07.467 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:34:07.467 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:07.467 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.sHw 00:34:07.467 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.467 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.467 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.467 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.IfA ]] 00:34:07.467 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.IfA 00:34:07.467 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.467 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.467 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.467 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:07.467 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.OUy 00:34:07.467 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.467 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.467 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.467 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.imV ]] 00:34:07.467 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.imV 00:34:07.467 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.467 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.467 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.467 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:07.467 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.R8I 00:34:07.467 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.467 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.467 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.467 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.auq ]] 00:34:07.467 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.auq 00:34:07.467 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.467 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.467 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.467 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:07.467 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.kGv 00:34:07.467 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.467 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.467 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.467 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.f0J ]] 00:34:07.467 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.f0J 00:34:07.467 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.467 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.467 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.467 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:07.467 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.EfI 00:34:07.467 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.467 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.467 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.467 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:34:07.467 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:34:07.467 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:34:07.467 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:07.467 14:49:59 
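The rpc_cmd calls above register each generated secret file with the running target's keyring, pairing keyN (host secret) with ckeyN (controller secret) for the same slot. rpc_cmd presumably wraps scripts/rpc.py against the default /var/tmp/spdk.sock socket, so issued by hand the same registrations look like:

  scripts/rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.sHw
  scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.IfA
  scripts/rpc.py keyring_file_add_key key1  /tmp/spdk.key-null.OUy
  scripts/rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.imV
  # ...continuing through key2/ckey2, key3/ckey3 and key4 exactly as traced above.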
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:07.467 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:07.467 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:07.467 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:07.467 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:07.467 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:07.467 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:07.467 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:07.467 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:07.467 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:34:07.467 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:34:07.467 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:34:07.467 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:07.467 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:07.467 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:07.467 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # local block nvme 00:34:07.467 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # [[ ! 
-e /sys/module/nvmet ]] 00:34:07.468 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@666 -- # modprobe nvmet 00:34:07.468 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:07.468 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:08.850 Waiting for block devices as requested 00:34:08.850 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:34:08.850 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:08.850 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:09.108 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:09.108 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:09.108 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:09.108 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:09.370 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:09.370 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:09.370 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:09.370 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:09.627 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:09.628 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:09.628 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:09.628 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:09.885 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:09.885 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:10.451 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:34:10.451 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:10.451 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:34:10.452 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:34:10.452 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:10.452 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:34:10.452 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:34:10.452 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:10.452 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:10.452 No valid GPT data, bailing 00:34:10.452 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:10.452 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:34:10.452 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:34:10.452 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:34:10.452 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # [[ -b /dev/nvme0n1 ]] 00:34:10.452 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:10.452 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@683 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:10.452 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:10.452 14:50:02 
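configure_kernel_target then assembles a Linux-kernel NVMe/TCP target out of configfs objects: a subsystem for nqn.2024-02.io.spdk:cnode0, namespace 1 backed by the local /dev/nvme0n1 that just passed the GPT check, and port 1 listening on 10.0.0.1:4420, finished by symlinking the subsystem into the port. The echo commands that follow in the trace do not show their redirection targets, so the usual nvmet attribute names are assumed in this sketch:

  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0

  mkdir "$subsys"
  mkdir "$subsys/namespaces/1"
  mkdir "$nvmet/ports/1"
  echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"    # assumed destination
  echo 1            > "$subsys/attr_allow_any_host"              # assumed destination
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
  echo tcp          > "$nvmet/ports/1/addr_trtype"
  echo 4420         > "$nvmet/ports/1/addr_trsvcid"
  echo ipv4         > "$nvmet/ports/1/addr_adrfam"
  ln -s "$subsys" "$nvmet/ports/1/subsystems/"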
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:34:10.452 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # echo 1 00:34:10.452 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@692 -- # echo /dev/nvme0n1 00:34:10.452 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo 1 00:34:10.452 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 10.0.0.1 00:34:10.452 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo tcp 00:34:10.452 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 4420 00:34:10.452 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # echo ipv4 00:34:10.452 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:10.452 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:34:10.452 00:34:10.452 Discovery Log Number of Records 2, Generation counter 2 00:34:10.452 =====Discovery Log Entry 0====== 00:34:10.452 trtype: tcp 00:34:10.452 adrfam: ipv4 00:34:10.452 subtype: current discovery subsystem 00:34:10.452 treq: not specified, sq flow control disable supported 00:34:10.452 portid: 1 00:34:10.452 trsvcid: 4420 00:34:10.452 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:10.452 traddr: 10.0.0.1 00:34:10.452 eflags: none 00:34:10.452 sectype: none 00:34:10.452 =====Discovery Log Entry 1====== 00:34:10.452 trtype: tcp 00:34:10.452 adrfam: ipv4 00:34:10.452 subtype: nvme subsystem 00:34:10.452 treq: not specified, sq flow control disable supported 00:34:10.452 portid: 1 00:34:10.452 trsvcid: 4420 00:34:10.452 subnqn: nqn.2024-02.io.spdk:cnode0 00:34:10.452 traddr: 10.0.0.1 00:34:10.452 eflags: none 00:34:10.452 sectype: none 00:34:10.452 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:10.452 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:34:10.452 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:34:10.452 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:10.452 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:10.452 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:10.452 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:10.452 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:10.452 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWY4MDkwY2Q5NzljZTNhYzMxYTViMjU4NDhmNzRhMjVmNjE5ZWZmMDNmZDZjM2Vi1heXag==: 00:34:10.452 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWI5ZDU0MGJkNDg1YjE0NzI2ZjJiZGMxZWYzODFlZjZlNmJmYzIzOGVmNGY2NmQzbu/lJQ==: 00:34:10.452 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:10.452 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:34:10.452 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWY4MDkwY2Q5NzljZTNhYzMxYTViMjU4NDhmNzRhMjVmNjE5ZWZmMDNmZDZjM2Vi1heXag==: 00:34:10.452 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWI5ZDU0MGJkNDg1YjE0NzI2ZjJiZGMxZWYzODFlZjZlNmJmYzIzOGVmNGY2NmQzbu/lJQ==: ]] 00:34:10.452 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWI5ZDU0MGJkNDg1YjE0NzI2ZjJiZGMxZWYzODFlZjZlNmJmYzIzOGVmNGY2NmQzbu/lJQ==: 00:34:10.452 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:34:10.452 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:34:10.452 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:34:10.452 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:10.452 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:34:10.452 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:10.452 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:34:10.452 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:10.452 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:10.452 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:10.452 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:10.452 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:10.452 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.452 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:10.452 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:10.452 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:10.452 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:10.452 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:10.452 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:10.452 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:10.452 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:10.452 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:10.452 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:10.452 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:10.452 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:10.452 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:10.452 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:10.452 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.711 nvme0n1 00:34:10.711 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:10.711 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:10.711 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:10.711 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.711 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:10.711 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:10.711 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:10.711 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:10.711 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:10.711 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.711 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:10.711 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:10.711 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:10.711 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:10.711 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:34:10.711 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:10.711 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:10.711 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:10.711 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:10.711 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVmYjI4Mzg1OTAzOGZmMzQyMTQ1YWNlYjc2MDNlMzQ4ZPwh: 00:34:10.711 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NThiYzA5MWU2NzIyZGIxZmQ3YWE5ZDlmMGE1NWUyODdlNTg4NzFlMDk0NmZlYWI2ZGYyMzk0ZjE1MTJhMTdiZluOxZ0=: 00:34:10.711 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:10.711 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:10.711 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVmYjI4Mzg1OTAzOGZmMzQyMTQ1YWNlYjc2MDNlMzQ4ZPwh: 00:34:10.711 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NThiYzA5MWU2NzIyZGIxZmQ3YWE5ZDlmMGE1NWUyODdlNTg4NzFlMDk0NmZlYWI2ZGYyMzk0ZjE1MTJhMTdiZluOxZ0=: ]] 00:34:10.711 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NThiYzA5MWU2NzIyZGIxZmQ3YWE5ZDlmMGE1NWUyODdlNTg4NzFlMDk0NmZlYWI2ZGYyMzk0ZjE1MTJhMTdiZluOxZ0=: 00:34:10.711 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
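Putting the two halves of the DH-HMAC-CHAP setup together as traced above: the kernel target admits the host NQN and is given both a host key and a controller key (for bidirectional authentication) under its nvmet host node, and the SPDK initiator enables the digests and DH groups it will accept before attaching with the matching key pair from its keyring. The nvmet attribute names below are assumed, since xtrace hides the redirection targets; the rpc.py lines mirror the traced rpc_cmd calls:

  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0

  mkdir "$host"
  echo 0 > "$subsys/attr_allow_any_host"           # assumed: only allowed_hosts may connect now
  ln -s "$host" "$subsys/allowed_hosts/"
  echo 'hmac(sha256)' > "$host/dhchap_hash"        # assumed attribute names for the traced echoes
  echo ffdhe2048      > "$host/dhchap_dhgroup"
  echo 'DHHC-1:00:YWY4MDkwY2Q5NzljZTNhYzMxYTViMjU4NDhmNzRhMjVmNjE5ZWZmMDNmZDZjM2Vi1heXag==:' > "$host/dhchap_key"
  echo 'DHHC-1:02:NWI5ZDU0MGJkNDg1YjE0NzI2ZjJiZGMxZWYzODFlZjZlNmJmYzIzOGVmNGY2NmQzbu/lJQ==:' > "$host/dhchap_ctrl_key"

  scripts/rpc.py bdev_nvme_set_options \
      --dhchap-digests sha256,sha384,sha512 \
      --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1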
00:34:10.711 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:10.711 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:10.711 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:10.711 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:10.711 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:10.711 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:10.711 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:10.711 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.711 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:10.711 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:10.711 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:10.711 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:10.711 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:10.711 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:10.711 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:10.711 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:10.711 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:10.711 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:10.711 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:10.711 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:10.711 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:10.711 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:10.711 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.970 nvme0n1 00:34:10.970 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:10.970 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:10.970 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:10.970 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:10.970 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.970 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:10.970 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:10.970 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:10.970 14:50:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:10.970 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.970 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:10.970 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:10.970 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:10.970 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:10.970 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:10.970 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:10.970 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:10.970 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWY4MDkwY2Q5NzljZTNhYzMxYTViMjU4NDhmNzRhMjVmNjE5ZWZmMDNmZDZjM2Vi1heXag==: 00:34:10.970 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWI5ZDU0MGJkNDg1YjE0NzI2ZjJiZGMxZWYzODFlZjZlNmJmYzIzOGVmNGY2NmQzbu/lJQ==: 00:34:10.970 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:10.970 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:10.970 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWY4MDkwY2Q5NzljZTNhYzMxYTViMjU4NDhmNzRhMjVmNjE5ZWZmMDNmZDZjM2Vi1heXag==: 00:34:10.970 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWI5ZDU0MGJkNDg1YjE0NzI2ZjJiZGMxZWYzODFlZjZlNmJmYzIzOGVmNGY2NmQzbu/lJQ==: ]] 00:34:10.970 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWI5ZDU0MGJkNDg1YjE0NzI2ZjJiZGMxZWYzODFlZjZlNmJmYzIzOGVmNGY2NmQzbu/lJQ==: 00:34:10.970 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:34:10.970 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:10.970 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:10.970 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:10.970 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:10.970 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:10.970 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:10.970 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:10.970 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.970 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:10.970 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:10.970 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:10.970 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:10.970 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:10.970 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:10.970 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:10.970 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:10.970 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:10.970 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:10.970 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:10.970 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:10.970 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:10.970 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:10.970 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.229 nvme0n1 00:34:11.229 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.229 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:11.229 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.229 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.229 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:11.229 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.229 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:11.229 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:11.229 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.229 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.229 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.229 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:11.229 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:11.229 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:11.229 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:11.229 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:11.229 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:11.229 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjQxNDU5NTdiYjY2YTE2M2ZiNGU3OWE4NDlhY2RiY2XQKdyc: 00:34:11.229 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTZjZjkzNDllMzgyNTE2MTE3ZjM1NTcyNzY2OTQ5ODUdbFBA: 00:34:11.229 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:11.229 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:11.229 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:NjQxNDU5NTdiYjY2YTE2M2ZiNGU3OWE4NDlhY2RiY2XQKdyc: 00:34:11.229 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTZjZjkzNDllMzgyNTE2MTE3ZjM1NTcyNzY2OTQ5ODUdbFBA: ]] 00:34:11.229 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTZjZjkzNDllMzgyNTE2MTE3ZjM1NTcyNzY2OTQ5ODUdbFBA: 00:34:11.229 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:34:11.229 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:11.229 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:11.229 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:11.229 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:11.229 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:11.229 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:11.229 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.229 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.229 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.229 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:11.229 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:11.229 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:11.229 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:11.229 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:11.229 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:11.229 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:11.229 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:11.229 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:11.229 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:11.229 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:11.229 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:11.229 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.229 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.229 nvme0n1 00:34:11.229 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.229 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:11.229 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:11.229 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:34:11.229 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.229 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.487 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:11.487 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:11.487 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.487 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.487 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.487 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:11.487 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:34:11.487 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:11.487 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:11.487 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:11.487 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:11.487 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzI2N2E1YmFmYTZjZDJkOGE3ZWQxNTYwYTA5NjBlNzllMGFiMjIxYTA2YmNlYWU5kVhFHQ==: 00:34:11.487 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWVmM2Y1ZjAwZjM3MTBjZDlkZmY1YjZkYjJhNzZjODW8m85E: 00:34:11.487 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:11.487 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:11.487 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzI2N2E1YmFmYTZjZDJkOGE3ZWQxNTYwYTA5NjBlNzllMGFiMjIxYTA2YmNlYWU5kVhFHQ==: 00:34:11.487 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWVmM2Y1ZjAwZjM3MTBjZDlkZmY1YjZkYjJhNzZjODW8m85E: ]] 00:34:11.487 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWVmM2Y1ZjAwZjM3MTBjZDlkZmY1YjZkYjJhNzZjODW8m85E: 00:34:11.487 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:34:11.487 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:11.487 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:11.487 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:11.487 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:11.487 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:11.487 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:11.487 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.487 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.487 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.487 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:34:11.487 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:11.487 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:11.487 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:11.487 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:11.487 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:11.487 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:11.487 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:11.487 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:11.487 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:11.487 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:11.487 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:11.487 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.487 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.487 nvme0n1 00:34:11.487 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.487 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:11.487 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.487 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.487 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:11.487 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.487 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:11.487 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:11.487 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.487 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:Y2QyNTE0YjMwNDNiOGRiNDQxZjM2YjZiMjZlODI5ZDNkZmIyYTAwZWVlMGE4MGJmM2JhYzcyNDgyNzU5YTE0ZIhBOlE=: 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2QyNTE0YjMwNDNiOGRiNDQxZjM2YjZiMjZlODI5ZDNkZmIyYTAwZWVlMGE4MGJmM2JhYzcyNDgyNzU5YTE0ZIhBOlE=: 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.746 nvme0n1 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.746 14:50:03 
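[Editor's note: not part of the captured output.] Each block of this trace repeats the same host-side cycle: restrict the allowed DH-HMAC-CHAP digest and DH group, attach a controller over TCP with the matching key pair, confirm it appears, then detach before the next combination. The sketch below reproduces one such iteration with SPDK's rpc.py, using the values visible in the log (10.0.0.1:4420, host0/cnode0, key1/ckey1); the rpc.py path is assumed, and the key names are presumed to have been registered with the keyring earlier in the test, which is not shown in this excerpt.

    #!/usr/bin/env bash
    # Illustrative sketch of one connect/verify/detach iteration from the trace.
    set -e
    RPC=./scripts/rpc.py                 # assumed location of SPDK's rpc.py
    TARGET_IP=10.0.0.1                   # address echoed by get_main_ns_ip in the log
    HOSTNQN=nqn.2024-02.io.spdk:host0
    SUBNQN=nqn.2024-02.io.spdk:cnode0

    # 1. Limit the host to a single digest / DH-group combination.
    $RPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

    # 2. Attach over TCP, authenticating with key1 and controller key ckey1
    #    (both assumed to be registered in the keyring before this point).
    $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a "$TARGET_IP" -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # 3. Verify the controller came up, then tear it down for the next iteration.
    [[ "$($RPC bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
    $RPC bdev_nvme_detach_controller nvme0
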
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVmYjI4Mzg1OTAzOGZmMzQyMTQ1YWNlYjc2MDNlMzQ4ZPwh: 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NThiYzA5MWU2NzIyZGIxZmQ3YWE5ZDlmMGE1NWUyODdlNTg4NzFlMDk0NmZlYWI2ZGYyMzk0ZjE1MTJhMTdiZluOxZ0=: 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVmYjI4Mzg1OTAzOGZmMzQyMTQ1YWNlYjc2MDNlMzQ4ZPwh: 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NThiYzA5MWU2NzIyZGIxZmQ3YWE5ZDlmMGE1NWUyODdlNTg4NzFlMDk0NmZlYWI2ZGYyMzk0ZjE1MTJhMTdiZluOxZ0=: ]] 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NThiYzA5MWU2NzIyZGIxZmQ3YWE5ZDlmMGE1NWUyODdlNTg4NzFlMDk0NmZlYWI2ZGYyMzk0ZjE1MTJhMTdiZluOxZ0=: 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.746 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.004 nvme0n1 00:34:12.004 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.004 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:12.004 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.004 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.004 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:12.004 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.004 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:12.004 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:12.004 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.004 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.004 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.004 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:12.004 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:34:12.004 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:34:12.004 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:12.004 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:12.004 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:12.004 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWY4MDkwY2Q5NzljZTNhYzMxYTViMjU4NDhmNzRhMjVmNjE5ZWZmMDNmZDZjM2Vi1heXag==: 00:34:12.004 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWI5ZDU0MGJkNDg1YjE0NzI2ZjJiZGMxZWYzODFlZjZlNmJmYzIzOGVmNGY2NmQzbu/lJQ==: 00:34:12.004 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:12.004 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:12.004 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWY4MDkwY2Q5NzljZTNhYzMxYTViMjU4NDhmNzRhMjVmNjE5ZWZmMDNmZDZjM2Vi1heXag==: 00:34:12.004 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWI5ZDU0MGJkNDg1YjE0NzI2ZjJiZGMxZWYzODFlZjZlNmJmYzIzOGVmNGY2NmQzbu/lJQ==: ]] 00:34:12.004 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWI5ZDU0MGJkNDg1YjE0NzI2ZjJiZGMxZWYzODFlZjZlNmJmYzIzOGVmNGY2NmQzbu/lJQ==: 00:34:12.004 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:34:12.004 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:12.004 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:12.004 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:12.004 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:12.004 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:12.004 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:12.004 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.004 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.004 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.004 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:12.004 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:12.004 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:12.004 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:12.004 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:12.004 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:12.004 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:12.004 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:12.004 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:12.004 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:12.004 
14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:12.004 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:12.004 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.004 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.262 nvme0n1 00:34:12.262 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.262 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:12.262 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.262 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.262 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:12.262 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.262 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:12.262 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:12.262 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.262 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.262 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.262 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:12.262 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:34:12.262 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:12.262 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:12.262 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:12.262 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:12.262 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjQxNDU5NTdiYjY2YTE2M2ZiNGU3OWE4NDlhY2RiY2XQKdyc: 00:34:12.262 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTZjZjkzNDllMzgyNTE2MTE3ZjM1NTcyNzY2OTQ5ODUdbFBA: 00:34:12.262 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:12.262 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:12.262 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjQxNDU5NTdiYjY2YTE2M2ZiNGU3OWE4NDlhY2RiY2XQKdyc: 00:34:12.262 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTZjZjkzNDllMzgyNTE2MTE3ZjM1NTcyNzY2OTQ5ODUdbFBA: ]] 00:34:12.262 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTZjZjkzNDllMzgyNTE2MTE3ZjM1NTcyNzY2OTQ5ODUdbFBA: 00:34:12.262 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:34:12.262 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:12.262 14:50:04 
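[Editor's note: not part of the captured output.] The host/auth.sh@100-@103 frames seen throughout correspond to nested loops over digests, DH groups and key indices: each iteration first provisions the key on the target (nvmet_auth_set_key) and then runs the connect/verify/detach cycle (connect_authenticate). A rough reconstruction of that loop shape, with the array contents taken from the sha256,sha384,sha512 and ffdhe2048..ffdhe8192 lists printed at @94 earlier in the trace:

    # Reconstructed loop shape, not copied verbatim from host/auth.sh.
    digests=(sha256 sha384 sha512)
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
    # keys[0..4] / ckeys[0..4] hold the DHHC-1 secrets printed in the trace.

    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side
                connect_authenticate "$digest" "$dhgroup" "$keyid"  # host side
            done
        done
    done
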
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:12.262 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:12.262 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:12.262 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:12.262 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:12.262 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.262 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.521 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.521 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:12.521 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:12.521 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:12.521 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:12.521 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:12.521 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:12.521 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:12.521 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:12.521 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:12.521 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:12.521 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:12.521 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:12.521 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.521 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.521 nvme0n1 00:34:12.521 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.521 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:12.521 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:12.521 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.521 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.521 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.521 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:12.521 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:12.521 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.521 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:12.779 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.779 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:12.779 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:34:12.779 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:12.779 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:12.779 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:12.779 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:12.779 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzI2N2E1YmFmYTZjZDJkOGE3ZWQxNTYwYTA5NjBlNzllMGFiMjIxYTA2YmNlYWU5kVhFHQ==: 00:34:12.779 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWVmM2Y1ZjAwZjM3MTBjZDlkZmY1YjZkYjJhNzZjODW8m85E: 00:34:12.779 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:12.779 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:12.779 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzI2N2E1YmFmYTZjZDJkOGE3ZWQxNTYwYTA5NjBlNzllMGFiMjIxYTA2YmNlYWU5kVhFHQ==: 00:34:12.779 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWVmM2Y1ZjAwZjM3MTBjZDlkZmY1YjZkYjJhNzZjODW8m85E: ]] 00:34:12.779 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWVmM2Y1ZjAwZjM3MTBjZDlkZmY1YjZkYjJhNzZjODW8m85E: 00:34:12.779 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:34:12.779 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:12.779 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:12.779 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:12.779 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:12.779 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:12.779 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:12.779 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.779 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.779 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.779 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:12.779 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:12.780 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:12.780 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:12.780 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:12.780 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:12.780 14:50:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:12.780 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:12.780 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:12.780 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:12.780 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:12.780 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:12.780 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.780 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.780 nvme0n1 00:34:12.780 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.780 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:12.780 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:12.780 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.780 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.780 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.780 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:12.780 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:12.780 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.780 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.780 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.780 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:12.780 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:34:12.780 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:12.780 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:12.780 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:12.780 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:12.780 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2QyNTE0YjMwNDNiOGRiNDQxZjM2YjZiMjZlODI5ZDNkZmIyYTAwZWVlMGE4MGJmM2JhYzcyNDgyNzU5YTE0ZIhBOlE=: 00:34:12.780 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:12.780 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:12.780 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:12.780 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2QyNTE0YjMwNDNiOGRiNDQxZjM2YjZiMjZlODI5ZDNkZmIyYTAwZWVlMGE4MGJmM2JhYzcyNDgyNzU5YTE0ZIhBOlE=: 00:34:12.780 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:12.780 14:50:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:34:12.780 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:12.780 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:12.780 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:12.780 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:12.780 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:12.780 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:12.780 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.780 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.780 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.039 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:13.039 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:13.039 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:13.039 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:13.039 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:13.039 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:13.039 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:13.039 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:13.039 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:13.039 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:13.039 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:13.039 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:13.039 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.039 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.039 nvme0n1 00:34:13.039 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.039 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:13.039 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:13.039 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.039 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.039 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.039 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:13.039 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:34:13.039 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.039 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.039 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.039 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:13.039 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:13.039 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:34:13.039 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:13.039 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:13.039 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:13.039 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:13.039 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVmYjI4Mzg1OTAzOGZmMzQyMTQ1YWNlYjc2MDNlMzQ4ZPwh: 00:34:13.039 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NThiYzA5MWU2NzIyZGIxZmQ3YWE5ZDlmMGE1NWUyODdlNTg4NzFlMDk0NmZlYWI2ZGYyMzk0ZjE1MTJhMTdiZluOxZ0=: 00:34:13.039 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:13.039 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:13.039 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVmYjI4Mzg1OTAzOGZmMzQyMTQ1YWNlYjc2MDNlMzQ4ZPwh: 00:34:13.039 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NThiYzA5MWU2NzIyZGIxZmQ3YWE5ZDlmMGE1NWUyODdlNTg4NzFlMDk0NmZlYWI2ZGYyMzk0ZjE1MTJhMTdiZluOxZ0=: ]] 00:34:13.039 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NThiYzA5MWU2NzIyZGIxZmQ3YWE5ZDlmMGE1NWUyODdlNTg4NzFlMDk0NmZlYWI2ZGYyMzk0ZjE1MTJhMTdiZluOxZ0=: 00:34:13.039 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:34:13.039 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:13.039 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:13.039 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:13.039 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:13.039 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:13.039 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:13.039 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.039 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.039 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.039 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:13.039 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:13.039 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # 
ip_candidates=() 00:34:13.039 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:13.039 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:13.039 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:13.039 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:13.039 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:13.039 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:13.039 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:13.039 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:13.039 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:13.039 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.039 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.605 nvme0n1 00:34:13.605 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.606 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:13.606 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.606 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:13.606 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.606 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.606 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:13.606 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:13.606 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.606 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.606 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.606 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:13.606 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:34:13.606 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:13.606 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:13.606 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:13.606 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:13.606 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWY4MDkwY2Q5NzljZTNhYzMxYTViMjU4NDhmNzRhMjVmNjE5ZWZmMDNmZDZjM2Vi1heXag==: 00:34:13.606 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWI5ZDU0MGJkNDg1YjE0NzI2ZjJiZGMxZWYzODFlZjZlNmJmYzIzOGVmNGY2NmQzbu/lJQ==: 00:34:13.606 14:50:05 
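[Editor's note: not part of the captured output.] The get_main_ns_ip helper traced repeatedly above (nvmf/common.sh@765-@779) simply maps the transport name to an address variable and echoes its value, which resolves to 10.0.0.1 for TCP in this run. A rough reconstruction follows; the transport variable name TEST_TRANSPORT is an assumption, since the trace only shows the already-expanded value "tcp".

    # Rough reconstruction of get_main_ns_ip as it appears in the trace.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP   # expands to 10.0.0.1 in this run

        [[ -z $TEST_TRANSPORT ]] && return 1                  # transport name assumed
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1                           # indirect lookup of the address
        echo "${!ip}"
    }
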
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:13.606 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:13.606 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWY4MDkwY2Q5NzljZTNhYzMxYTViMjU4NDhmNzRhMjVmNjE5ZWZmMDNmZDZjM2Vi1heXag==: 00:34:13.606 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWI5ZDU0MGJkNDg1YjE0NzI2ZjJiZGMxZWYzODFlZjZlNmJmYzIzOGVmNGY2NmQzbu/lJQ==: ]] 00:34:13.606 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWI5ZDU0MGJkNDg1YjE0NzI2ZjJiZGMxZWYzODFlZjZlNmJmYzIzOGVmNGY2NmQzbu/lJQ==: 00:34:13.606 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:34:13.606 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:13.606 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:13.606 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:13.606 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:13.606 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:13.606 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:13.606 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.606 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.606 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.606 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:13.606 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:13.606 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:13.606 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:13.606 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:13.606 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:13.606 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:13.606 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:13.606 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:13.606 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:13.606 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:13.606 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:13.606 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.606 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.864 nvme0n1 00:34:13.864 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:34:13.864 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:13.864 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.864 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.864 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:13.864 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.864 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:13.864 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:13.864 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.864 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.864 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.864 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:13.864 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:34:13.864 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:13.864 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:13.864 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:13.864 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:13.864 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjQxNDU5NTdiYjY2YTE2M2ZiNGU3OWE4NDlhY2RiY2XQKdyc: 00:34:13.864 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTZjZjkzNDllMzgyNTE2MTE3ZjM1NTcyNzY2OTQ5ODUdbFBA: 00:34:13.864 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:13.864 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:13.864 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjQxNDU5NTdiYjY2YTE2M2ZiNGU3OWE4NDlhY2RiY2XQKdyc: 00:34:13.864 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTZjZjkzNDllMzgyNTE2MTE3ZjM1NTcyNzY2OTQ5ODUdbFBA: ]] 00:34:13.864 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTZjZjkzNDllMzgyNTE2MTE3ZjM1NTcyNzY2OTQ5ODUdbFBA: 00:34:13.864 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:34:13.864 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:13.864 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:13.864 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:13.864 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:13.864 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:13.864 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:13.864 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
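Each connect_authenticate iteration in this trace follows the same host-side pattern: restrict the initiator to one DH-HMAC-CHAP digest and DH group with bdev_nvme_set_options, attach the controller with the per-keyid secret (plus the bidirectional controller secret when one is defined), confirm the controller shows up in bdev_nvme_get_controllers, and detach before the next keyid. The rpc_cmd wrapper in the xtrace is the test harness's front end to SPDK's JSON-RPC; a minimal stand-alone sketch of the same sequence, assuming scripts/rpc.py can reach the initiator app and that key2/ckey2 are already registered in its keyring, would look like:

    # Mirror of host/auth.sh@60-65 for the sha256/ffdhe4096/keyid=2 case (flags and NQNs taken from the log above).
    scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
    scripts/rpc.py bdev_nvme_detach_controller nvme0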
00:34:13.864 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.864 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.864 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:13.864 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:13.864 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:13.864 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:13.864 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:13.865 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:13.865 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:13.865 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:13.865 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:13.865 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:13.865 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:13.865 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:13.865 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.865 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.123 nvme0n1 00:34:14.123 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.123 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:14.123 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:14.123 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.123 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.123 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.123 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:14.123 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:14.123 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.124 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.124 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.124 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:14.124 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:34:14.124 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:14.124 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:14.124 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:34:14.124 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:14.124 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzI2N2E1YmFmYTZjZDJkOGE3ZWQxNTYwYTA5NjBlNzllMGFiMjIxYTA2YmNlYWU5kVhFHQ==: 00:34:14.124 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWVmM2Y1ZjAwZjM3MTBjZDlkZmY1YjZkYjJhNzZjODW8m85E: 00:34:14.124 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:14.124 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:14.124 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzI2N2E1YmFmYTZjZDJkOGE3ZWQxNTYwYTA5NjBlNzllMGFiMjIxYTA2YmNlYWU5kVhFHQ==: 00:34:14.124 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWVmM2Y1ZjAwZjM3MTBjZDlkZmY1YjZkYjJhNzZjODW8m85E: ]] 00:34:14.124 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWVmM2Y1ZjAwZjM3MTBjZDlkZmY1YjZkYjJhNzZjODW8m85E: 00:34:14.124 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:34:14.124 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:14.124 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:14.124 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:14.124 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:14.124 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:14.124 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:14.124 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.124 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.124 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.124 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:14.124 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:14.124 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:14.124 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:14.124 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:14.124 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:14.124 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:14.124 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:14.124 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:14.124 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:14.124 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:14.124 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:14.124 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.124 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.382 nvme0n1 00:34:14.382 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.640 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:14.640 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.640 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.640 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:14.640 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.640 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:14.640 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:14.640 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.640 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.640 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.640 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:14.640 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:34:14.640 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:14.640 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:14.640 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:14.640 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:14.640 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2QyNTE0YjMwNDNiOGRiNDQxZjM2YjZiMjZlODI5ZDNkZmIyYTAwZWVlMGE4MGJmM2JhYzcyNDgyNzU5YTE0ZIhBOlE=: 00:34:14.640 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:14.640 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:14.640 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:14.640 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2QyNTE0YjMwNDNiOGRiNDQxZjM2YjZiMjZlODI5ZDNkZmIyYTAwZWVlMGE4MGJmM2JhYzcyNDgyNzU5YTE0ZIhBOlE=: 00:34:14.640 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:14.640 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:34:14.640 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:14.640 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:14.640 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:14.640 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:14.640 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:14.640 14:50:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:14.640 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.640 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.640 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.640 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:14.640 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:14.640 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:14.640 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:14.640 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:14.640 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:14.640 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:14.640 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:14.640 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:14.640 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:14.640 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:14.640 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:14.640 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.640 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.898 nvme0n1 00:34:14.898 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.898 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:14.898 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.898 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:14.898 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.899 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.899 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:14.899 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:14.899 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.899 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.899 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.899 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:14.899 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:14.899 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:34:14.899 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:14.899 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:14.899 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:14.899 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:14.899 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVmYjI4Mzg1OTAzOGZmMzQyMTQ1YWNlYjc2MDNlMzQ4ZPwh: 00:34:14.899 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NThiYzA5MWU2NzIyZGIxZmQ3YWE5ZDlmMGE1NWUyODdlNTg4NzFlMDk0NmZlYWI2ZGYyMzk0ZjE1MTJhMTdiZluOxZ0=: 00:34:14.899 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:14.899 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:14.899 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVmYjI4Mzg1OTAzOGZmMzQyMTQ1YWNlYjc2MDNlMzQ4ZPwh: 00:34:14.899 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NThiYzA5MWU2NzIyZGIxZmQ3YWE5ZDlmMGE1NWUyODdlNTg4NzFlMDk0NmZlYWI2ZGYyMzk0ZjE1MTJhMTdiZluOxZ0=: ]] 00:34:14.899 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NThiYzA5MWU2NzIyZGIxZmQ3YWE5ZDlmMGE1NWUyODdlNTg4NzFlMDk0NmZlYWI2ZGYyMzk0ZjE1MTJhMTdiZluOxZ0=: 00:34:14.899 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:34:14.899 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:14.899 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:14.899 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:14.899 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:14.899 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:14.899 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:14.899 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.899 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.899 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.899 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:14.899 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:14.899 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:14.899 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:14.899 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:14.899 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:14.899 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:14.899 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:14.899 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip=NVMF_INITIATOR_IP 00:34:14.899 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:14.899 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:14.899 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:14.899 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.899 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.464 nvme0n1 00:34:15.464 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.464 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:15.464 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:15.464 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.464 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.464 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.464 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:15.464 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:15.464 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.464 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.464 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.464 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:15.464 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:34:15.464 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:15.464 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:15.464 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:15.464 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:15.465 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWY4MDkwY2Q5NzljZTNhYzMxYTViMjU4NDhmNzRhMjVmNjE5ZWZmMDNmZDZjM2Vi1heXag==: 00:34:15.465 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWI5ZDU0MGJkNDg1YjE0NzI2ZjJiZGMxZWYzODFlZjZlNmJmYzIzOGVmNGY2NmQzbu/lJQ==: 00:34:15.465 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:15.465 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:15.465 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWY4MDkwY2Q5NzljZTNhYzMxYTViMjU4NDhmNzRhMjVmNjE5ZWZmMDNmZDZjM2Vi1heXag==: 00:34:15.465 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWI5ZDU0MGJkNDg1YjE0NzI2ZjJiZGMxZWYzODFlZjZlNmJmYzIzOGVmNGY2NmQzbu/lJQ==: ]] 00:34:15.465 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWI5ZDU0MGJkNDg1YjE0NzI2ZjJiZGMxZWYzODFlZjZlNmJmYzIzOGVmNGY2NmQzbu/lJQ==: 
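On the target side, each nvmet_auth_set_key call (host/auth.sh@42-51 above) pushes the matching credentials to the kernel nvmet host entry: the xtrace records the digest, DH group, host key and, when a controller key is defined, the bidirectional secret being echoed, but not the redirect targets. A plausible expansion for the ffdhe6144/keyid=1 case just logged, assuming the standard nvmet configfs host attributes and the host NQN used elsewhere in this run:

    # Assumed configfs layout; $key/$ckey hold the DHHC-1:00:... / DHHC-1:02:... strings echoed above.
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)' > "$host/dhchap_hash"
    echo ffdhe6144 > "$host/dhchap_dhgroup"
    echo "$key" > "$host/dhchap_key"
    [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"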
00:34:15.465 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:34:15.465 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:15.465 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:15.465 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:15.465 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:15.465 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:15.465 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:15.465 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.465 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.465 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.465 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:15.465 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:15.465 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:15.465 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:15.465 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:15.465 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:15.465 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:15.465 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:15.465 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:15.465 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:15.465 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:15.465 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:15.465 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.465 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.030 nvme0n1 00:34:16.030 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.030 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:16.030 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.030 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.030 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:16.030 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.030 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:16.030 14:50:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:16.030 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.030 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.030 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.030 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:16.030 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:34:16.030 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:16.030 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:16.030 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:16.030 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:16.030 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjQxNDU5NTdiYjY2YTE2M2ZiNGU3OWE4NDlhY2RiY2XQKdyc: 00:34:16.030 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTZjZjkzNDllMzgyNTE2MTE3ZjM1NTcyNzY2OTQ5ODUdbFBA: 00:34:16.030 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:16.030 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:16.030 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjQxNDU5NTdiYjY2YTE2M2ZiNGU3OWE4NDlhY2RiY2XQKdyc: 00:34:16.030 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTZjZjkzNDllMzgyNTE2MTE3ZjM1NTcyNzY2OTQ5ODUdbFBA: ]] 00:34:16.030 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTZjZjkzNDllMzgyNTE2MTE3ZjM1NTcyNzY2OTQ5ODUdbFBA: 00:34:16.030 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:34:16.030 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:16.030 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:16.030 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:16.030 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:16.288 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:16.288 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:16.288 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.288 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.288 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.288 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:16.288 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:16.288 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:16.288 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:16.288 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:16.288 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:16.288 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:16.288 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:16.288 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:16.288 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:16.288 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:16.288 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:16.288 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.288 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.854 nvme0n1 00:34:16.854 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.854 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:16.854 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.854 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.854 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:16.854 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.854 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:16.854 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:16.854 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.854 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.854 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.854 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:16.854 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:34:16.854 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:16.854 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:16.854 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:16.854 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:16.854 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzI2N2E1YmFmYTZjZDJkOGE3ZWQxNTYwYTA5NjBlNzllMGFiMjIxYTA2YmNlYWU5kVhFHQ==: 00:34:16.854 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWVmM2Y1ZjAwZjM3MTBjZDlkZmY1YjZkYjJhNzZjODW8m85E: 00:34:16.854 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:16.854 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:16.854 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:YzI2N2E1YmFmYTZjZDJkOGE3ZWQxNTYwYTA5NjBlNzllMGFiMjIxYTA2YmNlYWU5kVhFHQ==: 00:34:16.854 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWVmM2Y1ZjAwZjM3MTBjZDlkZmY1YjZkYjJhNzZjODW8m85E: ]] 00:34:16.854 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWVmM2Y1ZjAwZjM3MTBjZDlkZmY1YjZkYjJhNzZjODW8m85E: 00:34:16.854 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:34:16.854 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:16.854 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:16.854 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:16.854 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:16.854 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:16.854 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:16.854 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.854 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.854 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.854 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:16.854 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:16.854 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:16.854 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:16.854 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:16.854 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:16.854 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:16.854 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:16.854 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:16.854 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:16.854 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:16.854 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:16.854 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.854 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.420 nvme0n1 00:34:17.420 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.420 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:17.420 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:17.420 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.420 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.420 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.420 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:17.420 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:17.420 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.420 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.420 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.420 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:17.420 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:34:17.420 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:17.420 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:17.420 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:17.420 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:17.420 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2QyNTE0YjMwNDNiOGRiNDQxZjM2YjZiMjZlODI5ZDNkZmIyYTAwZWVlMGE4MGJmM2JhYzcyNDgyNzU5YTE0ZIhBOlE=: 00:34:17.421 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:17.421 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:17.421 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:17.421 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2QyNTE0YjMwNDNiOGRiNDQxZjM2YjZiMjZlODI5ZDNkZmIyYTAwZWVlMGE4MGJmM2JhYzcyNDgyNzU5YTE0ZIhBOlE=: 00:34:17.421 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:17.421 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:34:17.421 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:17.421 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:17.421 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:17.421 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:17.421 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:17.421 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:17.421 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.421 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.421 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.421 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:17.421 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:17.421 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@766 -- # ip_candidates=() 00:34:17.421 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:17.421 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:17.421 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:17.421 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:17.421 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:17.421 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:17.421 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:17.421 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:17.421 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:17.421 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.421 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.987 nvme0n1 00:34:17.987 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.987 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:17.987 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:17.987 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.987 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.987 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.987 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:17.987 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:17.987 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.987 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.987 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.988 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:17.988 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:17.988 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:34:17.988 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:17.988 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:17.988 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:17.988 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:17.988 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVmYjI4Mzg1OTAzOGZmMzQyMTQ1YWNlYjc2MDNlMzQ4ZPwh: 00:34:17.988 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NThiYzA5MWU2NzIyZGIxZmQ3YWE5ZDlmMGE1NWUyODdlNTg4NzFlMDk0NmZlYWI2ZGYyMzk0ZjE1MTJhMTdiZluOxZ0=: 00:34:17.988 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:17.988 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:17.988 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVmYjI4Mzg1OTAzOGZmMzQyMTQ1YWNlYjc2MDNlMzQ4ZPwh: 00:34:17.988 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NThiYzA5MWU2NzIyZGIxZmQ3YWE5ZDlmMGE1NWUyODdlNTg4NzFlMDk0NmZlYWI2ZGYyMzk0ZjE1MTJhMTdiZluOxZ0=: ]] 00:34:17.988 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NThiYzA5MWU2NzIyZGIxZmQ3YWE5ZDlmMGE1NWUyODdlNTg4NzFlMDk0NmZlYWI2ZGYyMzk0ZjE1MTJhMTdiZluOxZ0=: 00:34:17.988 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:34:17.988 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:17.988 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:17.988 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:17.988 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:17.988 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:17.988 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:17.988 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.988 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.988 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.988 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:17.988 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:17.988 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:17.988 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:17.988 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:17.988 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:17.988 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:17.988 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:17.988 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:17.988 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:17.988 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:17.988 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:17.988 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.988 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:18.922 nvme0n1 00:34:18.922 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.922 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:18.922 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.922 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.922 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:18.922 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.922 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:18.922 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:18.922 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.922 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.922 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.922 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:18.922 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:34:18.922 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:18.922 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:18.922 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:18.922 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:18.922 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWY4MDkwY2Q5NzljZTNhYzMxYTViMjU4NDhmNzRhMjVmNjE5ZWZmMDNmZDZjM2Vi1heXag==: 00:34:18.922 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWI5ZDU0MGJkNDg1YjE0NzI2ZjJiZGMxZWYzODFlZjZlNmJmYzIzOGVmNGY2NmQzbu/lJQ==: 00:34:18.922 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:18.922 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:18.922 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWY4MDkwY2Q5NzljZTNhYzMxYTViMjU4NDhmNzRhMjVmNjE5ZWZmMDNmZDZjM2Vi1heXag==: 00:34:18.922 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWI5ZDU0MGJkNDg1YjE0NzI2ZjJiZGMxZWYzODFlZjZlNmJmYzIzOGVmNGY2NmQzbu/lJQ==: ]] 00:34:18.922 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWI5ZDU0MGJkNDg1YjE0NzI2ZjJiZGMxZWYzODFlZjZlNmJmYzIzOGVmNGY2NmQzbu/lJQ==: 00:34:18.922 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:34:18.922 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:18.922 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:18.922 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:18.922 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:18.922 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:34:18.922 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:18.922 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.922 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.922 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.922 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:18.922 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:18.922 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:18.922 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:18.922 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:18.922 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:18.922 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:18.922 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:18.922 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:18.922 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:18.922 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:18.922 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:18.922 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.922 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.296 nvme0n1 00:34:20.296 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.296 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:20.296 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.296 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:20.296 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.296 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.296 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:20.296 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:20.296 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.296 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.296 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.296 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:20.296 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:34:20.296 
14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:20.296 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:20.296 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:20.296 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:20.296 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjQxNDU5NTdiYjY2YTE2M2ZiNGU3OWE4NDlhY2RiY2XQKdyc: 00:34:20.296 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTZjZjkzNDllMzgyNTE2MTE3ZjM1NTcyNzY2OTQ5ODUdbFBA: 00:34:20.296 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:20.296 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:20.296 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjQxNDU5NTdiYjY2YTE2M2ZiNGU3OWE4NDlhY2RiY2XQKdyc: 00:34:20.296 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTZjZjkzNDllMzgyNTE2MTE3ZjM1NTcyNzY2OTQ5ODUdbFBA: ]] 00:34:20.296 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTZjZjkzNDllMzgyNTE2MTE3ZjM1NTcyNzY2OTQ5ODUdbFBA: 00:34:20.296 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:34:20.296 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:20.296 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:20.296 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:20.296 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:20.296 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:20.296 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:20.296 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.296 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.296 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.296 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:20.296 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:20.296 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:20.296 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:20.296 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:20.296 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:20.296 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:20.296 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:20.296 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:20.296 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:20.296 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:20.296 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:20.296 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.296 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.230 nvme0n1 00:34:21.230 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.230 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:21.230 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.230 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.230 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:21.230 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.230 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:21.230 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:21.230 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.230 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.230 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.230 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:21.230 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:34:21.230 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:21.230 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:21.230 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:21.230 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:21.230 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzI2N2E1YmFmYTZjZDJkOGE3ZWQxNTYwYTA5NjBlNzllMGFiMjIxYTA2YmNlYWU5kVhFHQ==: 00:34:21.230 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWVmM2Y1ZjAwZjM3MTBjZDlkZmY1YjZkYjJhNzZjODW8m85E: 00:34:21.230 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:21.230 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:21.230 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzI2N2E1YmFmYTZjZDJkOGE3ZWQxNTYwYTA5NjBlNzllMGFiMjIxYTA2YmNlYWU5kVhFHQ==: 00:34:21.230 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWVmM2Y1ZjAwZjM3MTBjZDlkZmY1YjZkYjJhNzZjODW8m85E: ]] 00:34:21.230 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWVmM2Y1ZjAwZjM3MTBjZDlkZmY1YjZkYjJhNzZjODW8m85E: 00:34:21.230 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:34:21.230 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:21.230 
14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:21.230 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:21.230 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:21.230 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:21.230 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:21.230 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.230 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.230 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.230 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:21.230 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:21.230 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:21.230 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:21.230 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:21.230 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:21.230 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:21.230 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:21.230 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:21.230 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:21.230 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:21.230 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:21.230 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.230 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.165 nvme0n1 00:34:22.165 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.166 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:22.166 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:22.166 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.166 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.166 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.166 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:22.166 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:22.166 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.166 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:22.166 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.166 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:22.166 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:34:22.166 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:22.166 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:22.166 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:22.166 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:22.166 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2QyNTE0YjMwNDNiOGRiNDQxZjM2YjZiMjZlODI5ZDNkZmIyYTAwZWVlMGE4MGJmM2JhYzcyNDgyNzU5YTE0ZIhBOlE=: 00:34:22.166 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:22.166 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:22.166 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:22.166 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2QyNTE0YjMwNDNiOGRiNDQxZjM2YjZiMjZlODI5ZDNkZmIyYTAwZWVlMGE4MGJmM2JhYzcyNDgyNzU5YTE0ZIhBOlE=: 00:34:22.166 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:22.166 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:34:22.166 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:22.166 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:22.166 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:22.166 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:22.166 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:22.166 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:22.166 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.166 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.166 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.166 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:22.166 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:22.166 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:22.166 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:22.166 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:22.166 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:22.166 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:22.166 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:22.166 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:22.166 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:22.166 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:22.166 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:22.166 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.166 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.100 nvme0n1 00:34:23.100 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.100 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:23.100 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.100 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.100 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:23.100 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.100 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:23.100 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:23.100 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.100 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.100 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.100 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:23.100 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:23.100 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:23.100 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:34:23.100 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:23.100 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:23.100 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:23.100 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:23.100 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVmYjI4Mzg1OTAzOGZmMzQyMTQ1YWNlYjc2MDNlMzQ4ZPwh: 00:34:23.100 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NThiYzA5MWU2NzIyZGIxZmQ3YWE5ZDlmMGE1NWUyODdlNTg4NzFlMDk0NmZlYWI2ZGYyMzk0ZjE1MTJhMTdiZluOxZ0=: 00:34:23.100 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:23.100 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:23.100 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVmYjI4Mzg1OTAzOGZmMzQyMTQ1YWNlYjc2MDNlMzQ4ZPwh: 00:34:23.100 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:NThiYzA5MWU2NzIyZGIxZmQ3YWE5ZDlmMGE1NWUyODdlNTg4NzFlMDk0NmZlYWI2ZGYyMzk0ZjE1MTJhMTdiZluOxZ0=: ]] 00:34:23.100 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NThiYzA5MWU2NzIyZGIxZmQ3YWE5ZDlmMGE1NWUyODdlNTg4NzFlMDk0NmZlYWI2ZGYyMzk0ZjE1MTJhMTdiZluOxZ0=: 00:34:23.100 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:34:23.100 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:23.100 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:23.100 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:23.100 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:23.100 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:23.100 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:23.100 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.100 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.100 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.100 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:23.100 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:23.100 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:23.100 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:23.100 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:23.100 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:23.100 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:23.100 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:23.100 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:23.100 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:23.100 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:23.100 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:23.100 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.100 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.100 nvme0n1 00:34:23.100 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.100 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:23.100 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.100 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.100 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
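Each key programmed on the target is then exercised from the host side with the same pair of SPDK RPCs that recur throughout the trace. A condensed sketch of that connect-and-verify sequence, kept as close to the logged commands as possible (rpc_cmd is the wrapper the autotest uses for SPDK RPC calls, and the keyring names key0..key4 / ckey0..ckey3 are assumed to have been registered earlier in the run):

    # Condensed from the traced connect_authenticate flow; digest, dhgroup
    # and keyid vary per iteration of the surrounding loops.
    connect_authenticate_sketch() {
        local digest=$1 dhgroup=$2 keyid=$3
        local ctrlr_key=()

        # Only pass a controller key when one exists for this keyid
        [[ -z ${ckeys[keyid]:-} ]] || ctrlr_key=(--dhchap-ctrlr-key "ckey${keyid}")

        # Restrict the initiator to the digest/DH group under test
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

        # Connect with DH-HMAC-CHAP to the target at 10.0.0.1:4420
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ctrlr_key[@]}"

        # Authentication succeeded if the controller is visible, then detach
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }
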
host/auth.sh@64 -- # jq -r '.[].name' 00:34:23.100 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWY4MDkwY2Q5NzljZTNhYzMxYTViMjU4NDhmNzRhMjVmNjE5ZWZmMDNmZDZjM2Vi1heXag==: 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWI5ZDU0MGJkNDg1YjE0NzI2ZjJiZGMxZWYzODFlZjZlNmJmYzIzOGVmNGY2NmQzbu/lJQ==: 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWY4MDkwY2Q5NzljZTNhYzMxYTViMjU4NDhmNzRhMjVmNjE5ZWZmMDNmZDZjM2Vi1heXag==: 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWI5ZDU0MGJkNDg1YjE0NzI2ZjJiZGMxZWYzODFlZjZlNmJmYzIzOGVmNGY2NmQzbu/lJQ==: ]] 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWI5ZDU0MGJkNDg1YjE0NzI2ZjJiZGMxZWYzODFlZjZlNmJmYzIzOGVmNGY2NmQzbu/lJQ==: 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.359 nvme0n1 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjQxNDU5NTdiYjY2YTE2M2ZiNGU3OWE4NDlhY2RiY2XQKdyc: 00:34:23.359 14:50:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTZjZjkzNDllMzgyNTE2MTE3ZjM1NTcyNzY2OTQ5ODUdbFBA: 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjQxNDU5NTdiYjY2YTE2M2ZiNGU3OWE4NDlhY2RiY2XQKdyc: 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTZjZjkzNDllMzgyNTE2MTE3ZjM1NTcyNzY2OTQ5ODUdbFBA: ]] 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTZjZjkzNDllMzgyNTE2MTE3ZjM1NTcyNzY2OTQ5ODUdbFBA: 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:23.359 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:23.618 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:23.618 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.618 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.618 nvme0n1 00:34:23.618 14:50:15 
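Before every attach the trace expands get_main_ns_ip, which picks the address to dial based on the transport: ip_candidates maps rdma to NVMF_FIRST_TARGET_IP and tcp to NVMF_INITIATOR_IP, and for this TCP run the result is 10.0.0.1. A small sketch of that selection logic as it can be read from the expansion (the final variable indirection is an assumption, since xtrace only shows the already-resolved 10.0.0.1):

    # Sketch of the address selection seen in nvmf/common.sh's get_main_ns_ip.
    # TEST_TRANSPORT, NVMF_FIRST_TARGET_IP and NVMF_INITIATOR_IP are assumed to
    # be exported by the test environment (here the transport is tcp).
    get_main_ns_ip_sketch() {
        local ip
        local -A ip_candidates=(
            [rdma]=NVMF_FIRST_TARGET_IP
            [tcp]=NVMF_INITIATOR_IP
        )

        [[ -n ${TEST_TRANSPORT:-} && -n ${ip_candidates[$TEST_TRANSPORT]:-} ]] || return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}   # name of the variable to read
        [[ -n ${!ip} ]] || return 1            # indirect expansion, 10.0.0.1 here
        echo "${!ip}"
    }
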
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.618 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:23.618 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.618 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:23.618 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.618 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.618 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:23.618 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:23.618 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.618 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.618 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.618 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:23.618 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:34:23.619 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:23.619 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:23.619 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:23.619 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:23.619 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzI2N2E1YmFmYTZjZDJkOGE3ZWQxNTYwYTA5NjBlNzllMGFiMjIxYTA2YmNlYWU5kVhFHQ==: 00:34:23.619 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWVmM2Y1ZjAwZjM3MTBjZDlkZmY1YjZkYjJhNzZjODW8m85E: 00:34:23.619 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:23.619 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:23.619 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzI2N2E1YmFmYTZjZDJkOGE3ZWQxNTYwYTA5NjBlNzllMGFiMjIxYTA2YmNlYWU5kVhFHQ==: 00:34:23.619 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWVmM2Y1ZjAwZjM3MTBjZDlkZmY1YjZkYjJhNzZjODW8m85E: ]] 00:34:23.619 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWVmM2Y1ZjAwZjM3MTBjZDlkZmY1YjZkYjJhNzZjODW8m85E: 00:34:23.619 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:34:23.619 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:23.619 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:23.619 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:23.619 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:23.619 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:23.619 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:34:23.619 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.619 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.619 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.619 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:23.619 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:23.619 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:23.619 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:23.619 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:23.619 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:23.619 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:23.619 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:23.619 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:23.619 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:23.619 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:23.619 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:23.619 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.619 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.878 nvme0n1 00:34:23.878 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.878 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:23.878 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.878 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:23.878 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.878 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.878 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:23.878 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:23.878 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.878 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.878 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.878 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:23.878 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:34:23.878 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:23.878 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:34:23.878 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:23.878 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:23.878 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2QyNTE0YjMwNDNiOGRiNDQxZjM2YjZiMjZlODI5ZDNkZmIyYTAwZWVlMGE4MGJmM2JhYzcyNDgyNzU5YTE0ZIhBOlE=: 00:34:23.878 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:23.878 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:23.878 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:23.878 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2QyNTE0YjMwNDNiOGRiNDQxZjM2YjZiMjZlODI5ZDNkZmIyYTAwZWVlMGE4MGJmM2JhYzcyNDgyNzU5YTE0ZIhBOlE=: 00:34:23.878 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:23.878 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:34:23.878 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:23.878 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:23.878 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:23.878 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:23.878 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:23.878 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:23.878 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.878 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.878 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.878 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:23.878 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:23.878 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:23.878 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:23.878 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:23.878 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:23.878 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:23.878 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:23.878 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:23.878 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:23.878 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:23.878 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:23.878 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.878 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.136 nvme0n1 00:34:24.136 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.136 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:24.136 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:24.136 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.136 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.136 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.136 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:24.136 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:24.136 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.136 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.136 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.136 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:24.136 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:24.136 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:34:24.136 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:24.137 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:24.137 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:24.137 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:24.137 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVmYjI4Mzg1OTAzOGZmMzQyMTQ1YWNlYjc2MDNlMzQ4ZPwh: 00:34:24.137 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NThiYzA5MWU2NzIyZGIxZmQ3YWE5ZDlmMGE1NWUyODdlNTg4NzFlMDk0NmZlYWI2ZGYyMzk0ZjE1MTJhMTdiZluOxZ0=: 00:34:24.137 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:24.137 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:24.137 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVmYjI4Mzg1OTAzOGZmMzQyMTQ1YWNlYjc2MDNlMzQ4ZPwh: 00:34:24.137 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NThiYzA5MWU2NzIyZGIxZmQ3YWE5ZDlmMGE1NWUyODdlNTg4NzFlMDk0NmZlYWI2ZGYyMzk0ZjE1MTJhMTdiZluOxZ0=: ]] 00:34:24.137 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NThiYzA5MWU2NzIyZGIxZmQ3YWE5ZDlmMGE1NWUyODdlNTg4NzFlMDk0NmZlYWI2ZGYyMzk0ZjE1MTJhMTdiZluOxZ0=: 00:34:24.137 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:34:24.137 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:24.137 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:24.137 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:34:24.137 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:24.137 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:24.137 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:24.137 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.137 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.137 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.137 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:24.137 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:24.137 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:24.137 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:24.137 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:24.137 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:24.137 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:24.137 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:24.137 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:24.137 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:24.137 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:24.137 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:24.137 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.137 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.395 nvme0n1 00:34:24.395 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.395 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:24.395 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:24.395 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.395 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.395 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.395 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:24.395 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:24.395 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.395 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.395 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.395 
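The whole section is driven by the three nested loops visible at host/auth.sh@100-104 in the trace: every configured digest is paired with every DH group, and each pairing is tried with every key index, programming the target first and then authenticating from the host. Reconstructed in outline (the digests/dhgroups/keys arrays are defined earlier in the script; beyond the values visible in this part of the log, sha256/sha384 and ffdhe2048/ffdhe3072/ffdhe8192, their contents are assumptions):

    # Outline of the sweep seen in the trace (host/auth.sh@100-104).
    # digests, dhgroups, keys and ckeys are assumed to be populated earlier,
    # e.g. digests=(sha256 sha384 ...), dhgroups=(ffdhe2048 ffdhe3072 ... ffdhe8192).
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # program the target
                connect_authenticate "$digest" "$dhgroup" "$keyid"  # authenticate from the host
            done
        done
    done
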
14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:24.395 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:34:24.395 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:24.395 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:24.395 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:24.395 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:24.395 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWY4MDkwY2Q5NzljZTNhYzMxYTViMjU4NDhmNzRhMjVmNjE5ZWZmMDNmZDZjM2Vi1heXag==: 00:34:24.395 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWI5ZDU0MGJkNDg1YjE0NzI2ZjJiZGMxZWYzODFlZjZlNmJmYzIzOGVmNGY2NmQzbu/lJQ==: 00:34:24.395 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:24.395 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:24.395 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWY4MDkwY2Q5NzljZTNhYzMxYTViMjU4NDhmNzRhMjVmNjE5ZWZmMDNmZDZjM2Vi1heXag==: 00:34:24.395 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWI5ZDU0MGJkNDg1YjE0NzI2ZjJiZGMxZWYzODFlZjZlNmJmYzIzOGVmNGY2NmQzbu/lJQ==: ]] 00:34:24.395 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWI5ZDU0MGJkNDg1YjE0NzI2ZjJiZGMxZWYzODFlZjZlNmJmYzIzOGVmNGY2NmQzbu/lJQ==: 00:34:24.395 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:34:24.395 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:24.396 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:24.396 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:24.396 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:24.396 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:24.396 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:24.396 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.396 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.396 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.396 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:24.396 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:24.396 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:24.396 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:24.396 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:24.396 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:24.396 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:24.396 14:50:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:24.396 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:24.396 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:24.396 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:24.396 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:24.396 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.396 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.654 nvme0n1 00:34:24.654 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.654 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:24.654 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:24.654 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.654 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.654 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.654 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:24.654 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:24.654 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.654 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.654 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.654 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:24.654 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:34:24.654 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:24.654 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:24.654 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:24.654 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:24.654 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjQxNDU5NTdiYjY2YTE2M2ZiNGU3OWE4NDlhY2RiY2XQKdyc: 00:34:24.655 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTZjZjkzNDllMzgyNTE2MTE3ZjM1NTcyNzY2OTQ5ODUdbFBA: 00:34:24.655 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:24.655 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:24.655 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjQxNDU5NTdiYjY2YTE2M2ZiNGU3OWE4NDlhY2RiY2XQKdyc: 00:34:24.655 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTZjZjkzNDllMzgyNTE2MTE3ZjM1NTcyNzY2OTQ5ODUdbFBA: ]] 00:34:24.655 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:NTZjZjkzNDllMzgyNTE2MTE3ZjM1NTcyNzY2OTQ5ODUdbFBA: 00:34:24.655 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:34:24.655 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:24.655 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:24.655 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:24.655 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:24.655 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:24.655 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:24.655 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.655 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.655 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.655 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:24.655 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:24.655 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:24.655 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:24.655 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:24.655 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:24.655 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:24.655 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:24.655 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:24.655 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:24.655 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:24.655 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:24.655 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.655 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.913 nvme0n1 00:34:24.913 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.913 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:24.913 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.913 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.913 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:24.913 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.913 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:34:24.913 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:24.913 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.913 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.913 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.913 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:24.913 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:34:24.913 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:24.913 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:24.913 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:24.913 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:24.913 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzI2N2E1YmFmYTZjZDJkOGE3ZWQxNTYwYTA5NjBlNzllMGFiMjIxYTA2YmNlYWU5kVhFHQ==: 00:34:24.913 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWVmM2Y1ZjAwZjM3MTBjZDlkZmY1YjZkYjJhNzZjODW8m85E: 00:34:24.913 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:24.913 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:24.913 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzI2N2E1YmFmYTZjZDJkOGE3ZWQxNTYwYTA5NjBlNzllMGFiMjIxYTA2YmNlYWU5kVhFHQ==: 00:34:24.913 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWVmM2Y1ZjAwZjM3MTBjZDlkZmY1YjZkYjJhNzZjODW8m85E: ]] 00:34:24.913 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWVmM2Y1ZjAwZjM3MTBjZDlkZmY1YjZkYjJhNzZjODW8m85E: 00:34:24.913 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:34:24.913 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:24.913 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:24.913 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:24.913 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:24.913 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:24.913 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:24.913 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.913 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.913 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.913 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:24.913 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:24.913 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:24.913 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local 
-A ip_candidates 00:34:24.914 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:24.914 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:24.914 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:24.914 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:24.914 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:24.914 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:24.914 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:24.914 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:24.914 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.914 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.172 nvme0n1 00:34:25.172 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.172 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:25.172 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.172 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.172 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:25.172 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.172 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:25.172 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:25.172 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.172 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.172 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.172 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:25.172 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:34:25.172 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:25.172 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:25.172 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:25.172 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:25.172 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2QyNTE0YjMwNDNiOGRiNDQxZjM2YjZiMjZlODI5ZDNkZmIyYTAwZWVlMGE4MGJmM2JhYzcyNDgyNzU5YTE0ZIhBOlE=: 00:34:25.172 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:25.172 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:25.172 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:25.172 
14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2QyNTE0YjMwNDNiOGRiNDQxZjM2YjZiMjZlODI5ZDNkZmIyYTAwZWVlMGE4MGJmM2JhYzcyNDgyNzU5YTE0ZIhBOlE=: 00:34:25.172 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:25.172 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:34:25.172 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:25.172 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:25.172 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:25.172 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:25.172 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:25.172 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:25.172 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.172 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.172 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.172 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:25.172 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:25.172 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:25.172 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:25.172 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:25.172 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:25.172 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:25.172 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:25.172 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:25.172 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:25.172 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:25.172 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:25.172 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.172 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.430 nvme0n1 00:34:25.430 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.430 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:25.430 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.431 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.431 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:25.431 
14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.431 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:25.431 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:25.431 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.431 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.431 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.431 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:25.431 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:25.431 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:34:25.431 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:25.431 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:25.431 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:25.431 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:25.431 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVmYjI4Mzg1OTAzOGZmMzQyMTQ1YWNlYjc2MDNlMzQ4ZPwh: 00:34:25.431 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NThiYzA5MWU2NzIyZGIxZmQ3YWE5ZDlmMGE1NWUyODdlNTg4NzFlMDk0NmZlYWI2ZGYyMzk0ZjE1MTJhMTdiZluOxZ0=: 00:34:25.431 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:25.431 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:25.431 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVmYjI4Mzg1OTAzOGZmMzQyMTQ1YWNlYjc2MDNlMzQ4ZPwh: 00:34:25.431 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NThiYzA5MWU2NzIyZGIxZmQ3YWE5ZDlmMGE1NWUyODdlNTg4NzFlMDk0NmZlYWI2ZGYyMzk0ZjE1MTJhMTdiZluOxZ0=: ]] 00:34:25.431 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NThiYzA5MWU2NzIyZGIxZmQ3YWE5ZDlmMGE1NWUyODdlNTg4NzFlMDk0NmZlYWI2ZGYyMzk0ZjE1MTJhMTdiZluOxZ0=: 00:34:25.431 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:34:25.431 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:25.431 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:25.431 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:25.431 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:25.431 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:25.431 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:25.431 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.431 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.431 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:34:25.431 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:25.431 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:25.431 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:25.431 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:25.431 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:25.431 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:25.431 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:25.431 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:25.431 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:25.431 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:25.431 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:25.431 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:25.431 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.431 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.689 nvme0n1 00:34:25.689 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.689 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:25.689 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.689 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.689 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:25.689 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.947 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:25.947 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:25.947 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.947 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.947 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.947 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:25.947 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:34:25.947 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:25.947 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:25.947 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:25.947 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:25.947 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YWY4MDkwY2Q5NzljZTNhYzMxYTViMjU4NDhmNzRhMjVmNjE5ZWZmMDNmZDZjM2Vi1heXag==: 00:34:25.947 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWI5ZDU0MGJkNDg1YjE0NzI2ZjJiZGMxZWYzODFlZjZlNmJmYzIzOGVmNGY2NmQzbu/lJQ==: 00:34:25.947 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:25.947 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:25.947 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWY4MDkwY2Q5NzljZTNhYzMxYTViMjU4NDhmNzRhMjVmNjE5ZWZmMDNmZDZjM2Vi1heXag==: 00:34:25.947 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWI5ZDU0MGJkNDg1YjE0NzI2ZjJiZGMxZWYzODFlZjZlNmJmYzIzOGVmNGY2NmQzbu/lJQ==: ]] 00:34:25.947 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWI5ZDU0MGJkNDg1YjE0NzI2ZjJiZGMxZWYzODFlZjZlNmJmYzIzOGVmNGY2NmQzbu/lJQ==: 00:34:25.947 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:34:25.947 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:25.947 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:25.947 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:25.947 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:25.947 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:25.947 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:25.947 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.947 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.947 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.948 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:25.948 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:25.948 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:25.948 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:25.948 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:25.948 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:25.948 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:25.948 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:25.948 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:25.948 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:25.948 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:25.948 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:25.948 14:50:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.948 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.206 nvme0n1 00:34:26.206 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.206 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:26.206 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:26.206 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.206 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.206 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.206 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:26.206 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:26.206 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.206 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.206 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.206 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:26.206 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:34:26.206 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:26.206 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:26.206 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:26.206 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:26.206 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjQxNDU5NTdiYjY2YTE2M2ZiNGU3OWE4NDlhY2RiY2XQKdyc: 00:34:26.206 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTZjZjkzNDllMzgyNTE2MTE3ZjM1NTcyNzY2OTQ5ODUdbFBA: 00:34:26.206 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:26.206 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:26.206 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjQxNDU5NTdiYjY2YTE2M2ZiNGU3OWE4NDlhY2RiY2XQKdyc: 00:34:26.206 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTZjZjkzNDllMzgyNTE2MTE3ZjM1NTcyNzY2OTQ5ODUdbFBA: ]] 00:34:26.206 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTZjZjkzNDllMzgyNTE2MTE3ZjM1NTcyNzY2OTQ5ODUdbFBA: 00:34:26.206 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:34:26.206 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:26.206 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:26.206 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:26.206 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:26.206 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:26.206 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:26.206 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.206 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.206 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.206 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:26.206 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:26.206 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:26.206 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:26.206 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:26.206 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:26.206 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:26.206 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:26.206 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:26.206 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:26.206 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:26.206 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:26.206 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.206 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.464 nvme0n1 00:34:26.464 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.464 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:26.464 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.464 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:26.464 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.464 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.464 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:26.464 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:26.464 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.464 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.464 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.464 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:26.464 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:34:26.464 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:26.464 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:26.464 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:26.464 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:26.464 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzI2N2E1YmFmYTZjZDJkOGE3ZWQxNTYwYTA5NjBlNzllMGFiMjIxYTA2YmNlYWU5kVhFHQ==: 00:34:26.464 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWVmM2Y1ZjAwZjM3MTBjZDlkZmY1YjZkYjJhNzZjODW8m85E: 00:34:26.464 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:26.464 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:26.464 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzI2N2E1YmFmYTZjZDJkOGE3ZWQxNTYwYTA5NjBlNzllMGFiMjIxYTA2YmNlYWU5kVhFHQ==: 00:34:26.464 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWVmM2Y1ZjAwZjM3MTBjZDlkZmY1YjZkYjJhNzZjODW8m85E: ]] 00:34:26.464 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWVmM2Y1ZjAwZjM3MTBjZDlkZmY1YjZkYjJhNzZjODW8m85E: 00:34:26.464 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:34:26.464 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:26.464 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:26.464 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:26.464 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:26.464 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:26.464 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:26.464 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.464 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.464 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.464 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:26.464 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:26.464 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:26.464 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:26.464 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:26.464 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:26.464 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:26.464 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:26.464 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:26.464 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:26.465 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:26.465 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:26.465 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.465 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.031 nvme0n1 00:34:27.031 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.031 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:27.031 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.031 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:27.031 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.031 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.031 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:27.031 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:27.031 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.031 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.031 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.031 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:27.031 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:34:27.031 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:27.031 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:27.031 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:27.031 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:27.031 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2QyNTE0YjMwNDNiOGRiNDQxZjM2YjZiMjZlODI5ZDNkZmIyYTAwZWVlMGE4MGJmM2JhYzcyNDgyNzU5YTE0ZIhBOlE=: 00:34:27.031 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:27.031 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:27.031 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:27.031 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2QyNTE0YjMwNDNiOGRiNDQxZjM2YjZiMjZlODI5ZDNkZmIyYTAwZWVlMGE4MGJmM2JhYzcyNDgyNzU5YTE0ZIhBOlE=: 00:34:27.031 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:27.031 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:34:27.031 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:27.031 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:27.031 14:50:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:27.031 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:27.031 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:27.031 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:27.031 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.031 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.031 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.031 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:27.031 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:27.031 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:27.031 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:27.031 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:27.031 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:27.031 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:27.031 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:27.031 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:27.031 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:27.031 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:27.031 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:27.031 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.031 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.289 nvme0n1 00:34:27.290 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.290 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:27.290 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:27.290 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.290 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.290 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.290 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:27.290 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:27.290 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.290 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.290 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.290 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:27.290 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:27.290 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:34:27.290 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:27.290 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:27.290 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:27.290 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:27.290 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVmYjI4Mzg1OTAzOGZmMzQyMTQ1YWNlYjc2MDNlMzQ4ZPwh: 00:34:27.290 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NThiYzA5MWU2NzIyZGIxZmQ3YWE5ZDlmMGE1NWUyODdlNTg4NzFlMDk0NmZlYWI2ZGYyMzk0ZjE1MTJhMTdiZluOxZ0=: 00:34:27.290 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:27.290 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:27.290 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVmYjI4Mzg1OTAzOGZmMzQyMTQ1YWNlYjc2MDNlMzQ4ZPwh: 00:34:27.290 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NThiYzA5MWU2NzIyZGIxZmQ3YWE5ZDlmMGE1NWUyODdlNTg4NzFlMDk0NmZlYWI2ZGYyMzk0ZjE1MTJhMTdiZluOxZ0=: ]] 00:34:27.290 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NThiYzA5MWU2NzIyZGIxZmQ3YWE5ZDlmMGE1NWUyODdlNTg4NzFlMDk0NmZlYWI2ZGYyMzk0ZjE1MTJhMTdiZluOxZ0=: 00:34:27.290 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:34:27.290 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:27.290 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:27.290 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:27.290 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:27.290 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:27.290 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:27.290 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.290 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.290 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.290 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:27.290 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:27.290 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:27.290 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:27.290 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:27.290 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:27.290 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:27.290 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:27.290 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:27.290 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:27.290 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:27.290 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:27.290 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.290 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.856 nvme0n1 00:34:27.856 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.856 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:27.856 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:27.856 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.856 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.856 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.856 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:27.856 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:27.856 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.856 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.856 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.856 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:27.856 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:34:27.856 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:27.856 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:27.856 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:27.856 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:27.856 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWY4MDkwY2Q5NzljZTNhYzMxYTViMjU4NDhmNzRhMjVmNjE5ZWZmMDNmZDZjM2Vi1heXag==: 00:34:27.856 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWI5ZDU0MGJkNDg1YjE0NzI2ZjJiZGMxZWYzODFlZjZlNmJmYzIzOGVmNGY2NmQzbu/lJQ==: 00:34:27.856 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:27.856 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:27.856 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YWY4MDkwY2Q5NzljZTNhYzMxYTViMjU4NDhmNzRhMjVmNjE5ZWZmMDNmZDZjM2Vi1heXag==: 00:34:27.856 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWI5ZDU0MGJkNDg1YjE0NzI2ZjJiZGMxZWYzODFlZjZlNmJmYzIzOGVmNGY2NmQzbu/lJQ==: ]] 00:34:27.856 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWI5ZDU0MGJkNDg1YjE0NzI2ZjJiZGMxZWYzODFlZjZlNmJmYzIzOGVmNGY2NmQzbu/lJQ==: 00:34:27.856 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:34:27.856 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:27.856 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:27.856 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:27.856 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:27.856 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:27.856 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:27.856 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.856 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.856 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.856 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:27.856 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:27.856 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:27.856 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:27.856 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:27.856 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:27.856 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:27.856 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:27.856 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:27.856 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:27.856 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:27.856 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:27.856 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.856 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.423 nvme0n1 00:34:28.423 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.423 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:28.423 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:28.423 14:50:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.423 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.423 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.423 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:28.423 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:28.423 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.423 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.423 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.423 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:28.423 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:34:28.423 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:28.423 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:28.423 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:28.423 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:28.423 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjQxNDU5NTdiYjY2YTE2M2ZiNGU3OWE4NDlhY2RiY2XQKdyc: 00:34:28.423 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTZjZjkzNDllMzgyNTE2MTE3ZjM1NTcyNzY2OTQ5ODUdbFBA: 00:34:28.423 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:28.423 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:28.423 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjQxNDU5NTdiYjY2YTE2M2ZiNGU3OWE4NDlhY2RiY2XQKdyc: 00:34:28.423 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTZjZjkzNDllMzgyNTE2MTE3ZjM1NTcyNzY2OTQ5ODUdbFBA: ]] 00:34:28.423 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTZjZjkzNDllMzgyNTE2MTE3ZjM1NTcyNzY2OTQ5ODUdbFBA: 00:34:28.423 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:34:28.423 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:28.423 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:28.423 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:28.423 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:28.423 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:28.423 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:28.423 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.423 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.423 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.423 14:50:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:28.423 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:28.423 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:28.423 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:28.423 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:28.423 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:28.423 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:28.423 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:28.423 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:28.423 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:28.423 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:28.423 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:28.423 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.423 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.990 nvme0n1 00:34:28.990 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.990 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:28.990 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.990 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.990 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:28.990 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.990 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:28.990 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:28.990 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.990 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.990 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.990 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:28.990 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:34:28.990 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:28.990 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:28.990 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:28.990 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:28.990 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YzI2N2E1YmFmYTZjZDJkOGE3ZWQxNTYwYTA5NjBlNzllMGFiMjIxYTA2YmNlYWU5kVhFHQ==: 00:34:28.990 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWVmM2Y1ZjAwZjM3MTBjZDlkZmY1YjZkYjJhNzZjODW8m85E: 00:34:28.990 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:28.990 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:28.990 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzI2N2E1YmFmYTZjZDJkOGE3ZWQxNTYwYTA5NjBlNzllMGFiMjIxYTA2YmNlYWU5kVhFHQ==: 00:34:28.990 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWVmM2Y1ZjAwZjM3MTBjZDlkZmY1YjZkYjJhNzZjODW8m85E: ]] 00:34:28.990 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWVmM2Y1ZjAwZjM3MTBjZDlkZmY1YjZkYjJhNzZjODW8m85E: 00:34:28.990 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:34:28.990 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:28.990 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:28.990 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:28.990 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:28.990 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:28.990 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:28.990 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.990 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.990 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.990 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:28.990 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:28.990 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:28.990 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:28.990 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:28.990 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:28.990 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:28.990 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:28.990 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:28.990 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:28.990 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:28.990 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:28.990 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.990 
14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.556 nvme0n1 00:34:29.556 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.556 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:29.556 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.556 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.556 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:29.556 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.814 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:29.814 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:29.814 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.814 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.814 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.814 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:29.814 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:34:29.814 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:29.814 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:29.814 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:29.814 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:29.814 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2QyNTE0YjMwNDNiOGRiNDQxZjM2YjZiMjZlODI5ZDNkZmIyYTAwZWVlMGE4MGJmM2JhYzcyNDgyNzU5YTE0ZIhBOlE=: 00:34:29.814 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:29.814 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:29.814 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:29.814 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2QyNTE0YjMwNDNiOGRiNDQxZjM2YjZiMjZlODI5ZDNkZmIyYTAwZWVlMGE4MGJmM2JhYzcyNDgyNzU5YTE0ZIhBOlE=: 00:34:29.814 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:29.814 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:34:29.814 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:29.814 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:29.814 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:29.814 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:29.814 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:29.814 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:29.814 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.814 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.814 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.814 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:29.814 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:29.814 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:29.814 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:29.814 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:29.814 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:29.814 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:29.814 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:29.814 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:29.814 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:29.814 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:29.814 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:29.814 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.814 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.381 nvme0n1 00:34:30.381 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.381 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:30.381 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.381 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:30.381 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.381 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.381 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:30.381 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:30.381 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.381 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.381 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.381 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:30.381 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:30.381 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:34:30.381 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:30.381 14:50:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:30.381 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:30.381 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:30.381 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVmYjI4Mzg1OTAzOGZmMzQyMTQ1YWNlYjc2MDNlMzQ4ZPwh: 00:34:30.381 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NThiYzA5MWU2NzIyZGIxZmQ3YWE5ZDlmMGE1NWUyODdlNTg4NzFlMDk0NmZlYWI2ZGYyMzk0ZjE1MTJhMTdiZluOxZ0=: 00:34:30.381 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:30.381 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:30.381 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVmYjI4Mzg1OTAzOGZmMzQyMTQ1YWNlYjc2MDNlMzQ4ZPwh: 00:34:30.381 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NThiYzA5MWU2NzIyZGIxZmQ3YWE5ZDlmMGE1NWUyODdlNTg4NzFlMDk0NmZlYWI2ZGYyMzk0ZjE1MTJhMTdiZluOxZ0=: ]] 00:34:30.381 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NThiYzA5MWU2NzIyZGIxZmQ3YWE5ZDlmMGE1NWUyODdlNTg4NzFlMDk0NmZlYWI2ZGYyMzk0ZjE1MTJhMTdiZluOxZ0=: 00:34:30.381 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:34:30.381 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:30.381 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:30.381 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:30.381 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:30.381 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:30.381 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:30.381 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.381 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.381 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.381 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:30.381 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:30.381 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:30.381 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:30.381 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:30.381 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:30.381 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:30.381 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:30.381 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:30.381 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:30.381 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:30.381 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:30.381 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.381 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.314 nvme0n1 00:34:31.314 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.314 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:31.314 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:31.314 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.314 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.314 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.314 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:31.314 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:31.314 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.314 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.314 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.314 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:31.314 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:34:31.314 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:31.314 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:31.314 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:31.314 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:31.314 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWY4MDkwY2Q5NzljZTNhYzMxYTViMjU4NDhmNzRhMjVmNjE5ZWZmMDNmZDZjM2Vi1heXag==: 00:34:31.314 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWI5ZDU0MGJkNDg1YjE0NzI2ZjJiZGMxZWYzODFlZjZlNmJmYzIzOGVmNGY2NmQzbu/lJQ==: 00:34:31.314 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:31.314 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:31.314 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWY4MDkwY2Q5NzljZTNhYzMxYTViMjU4NDhmNzRhMjVmNjE5ZWZmMDNmZDZjM2Vi1heXag==: 00:34:31.314 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWI5ZDU0MGJkNDg1YjE0NzI2ZjJiZGMxZWYzODFlZjZlNmJmYzIzOGVmNGY2NmQzbu/lJQ==: ]] 00:34:31.314 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWI5ZDU0MGJkNDg1YjE0NzI2ZjJiZGMxZWYzODFlZjZlNmJmYzIzOGVmNGY2NmQzbu/lJQ==: 00:34:31.314 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:34:31.314 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:31.314 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:31.314 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:31.314 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:31.314 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:31.314 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:31.314 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.314 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.314 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.314 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:31.314 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:31.314 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:31.314 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:31.314 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:31.314 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:31.314 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:31.314 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:31.314 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:31.314 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:31.314 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:31.314 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:31.314 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.314 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.247 nvme0n1 00:34:32.247 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.247 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:32.247 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:32.247 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.247 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.247 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.247 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:32.247 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:32.247 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:34:32.247 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.247 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.247 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:32.247 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:34:32.247 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:32.247 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:32.247 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:32.247 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:32.247 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjQxNDU5NTdiYjY2YTE2M2ZiNGU3OWE4NDlhY2RiY2XQKdyc: 00:34:32.248 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTZjZjkzNDllMzgyNTE2MTE3ZjM1NTcyNzY2OTQ5ODUdbFBA: 00:34:32.248 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:32.248 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:32.248 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjQxNDU5NTdiYjY2YTE2M2ZiNGU3OWE4NDlhY2RiY2XQKdyc: 00:34:32.248 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTZjZjkzNDllMzgyNTE2MTE3ZjM1NTcyNzY2OTQ5ODUdbFBA: ]] 00:34:32.248 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTZjZjkzNDllMzgyNTE2MTE3ZjM1NTcyNzY2OTQ5ODUdbFBA: 00:34:32.248 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:34:32.248 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:32.248 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:32.248 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:32.248 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:32.248 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:32.248 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:32.248 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.248 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.506 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.506 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:32.506 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:32.506 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:32.506 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:32.506 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:32.506 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:32.506 
14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:32.506 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:32.506 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:32.506 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:32.506 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:32.506 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:32.506 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.506 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.440 nvme0n1 00:34:33.440 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.440 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:33.440 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:33.440 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.440 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.440 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.440 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:33.440 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:33.440 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.440 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.440 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.440 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:33.440 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:34:33.440 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:33.440 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:33.440 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:33.440 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:33.440 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzI2N2E1YmFmYTZjZDJkOGE3ZWQxNTYwYTA5NjBlNzllMGFiMjIxYTA2YmNlYWU5kVhFHQ==: 00:34:33.440 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWVmM2Y1ZjAwZjM3MTBjZDlkZmY1YjZkYjJhNzZjODW8m85E: 00:34:33.440 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:33.440 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:33.440 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzI2N2E1YmFmYTZjZDJkOGE3ZWQxNTYwYTA5NjBlNzllMGFiMjIxYTA2YmNlYWU5kVhFHQ==: 00:34:33.440 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:MWVmM2Y1ZjAwZjM3MTBjZDlkZmY1YjZkYjJhNzZjODW8m85E: ]] 00:34:33.440 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWVmM2Y1ZjAwZjM3MTBjZDlkZmY1YjZkYjJhNzZjODW8m85E: 00:34:33.440 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:34:33.440 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:33.440 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:33.440 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:33.440 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:33.440 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:33.440 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:33.440 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.440 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.440 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.440 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:33.440 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:33.440 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:33.440 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:33.440 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:33.440 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:33.440 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:33.440 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:33.440 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:33.440 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:33.440 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:33.440 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:33.440 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.440 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.374 nvme0n1 00:34:34.374 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.374 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:34.374 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:34.374 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.374 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.374 14:50:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.374 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:34.374 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:34.374 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.374 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.374 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.374 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:34.374 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:34:34.374 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:34.374 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:34.374 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:34.374 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:34.374 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2QyNTE0YjMwNDNiOGRiNDQxZjM2YjZiMjZlODI5ZDNkZmIyYTAwZWVlMGE4MGJmM2JhYzcyNDgyNzU5YTE0ZIhBOlE=: 00:34:34.374 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:34.374 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:34.374 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:34.374 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2QyNTE0YjMwNDNiOGRiNDQxZjM2YjZiMjZlODI5ZDNkZmIyYTAwZWVlMGE4MGJmM2JhYzcyNDgyNzU5YTE0ZIhBOlE=: 00:34:34.374 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:34.374 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:34:34.374 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:34.374 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:34.374 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:34.374 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:34.374 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:34.374 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:34.374 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.374 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.374 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.374 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:34.374 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:34.374 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:34.374 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:34.374 14:50:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:34.374 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:34.374 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:34.374 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:34.374 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:34.374 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:34.374 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:34.374 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:34.374 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.374 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.308 nvme0n1 00:34:35.308 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.308 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:35.308 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.308 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:35.308 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.308 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.308 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:35.308 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:35.308 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.308 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.308 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.308 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:35.308 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:35.308 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:35.308 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:34:35.308 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:35.308 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:35.308 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:35.308 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:35.308 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVmYjI4Mzg1OTAzOGZmMzQyMTQ1YWNlYjc2MDNlMzQ4ZPwh: 00:34:35.308 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NThiYzA5MWU2NzIyZGIxZmQ3YWE5ZDlmMGE1NWUyODdlNTg4NzFlMDk0NmZlYWI2ZGYyMzk0ZjE1MTJhMTdiZluOxZ0=: 00:34:35.308 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:35.308 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:35.308 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVmYjI4Mzg1OTAzOGZmMzQyMTQ1YWNlYjc2MDNlMzQ4ZPwh: 00:34:35.308 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NThiYzA5MWU2NzIyZGIxZmQ3YWE5ZDlmMGE1NWUyODdlNTg4NzFlMDk0NmZlYWI2ZGYyMzk0ZjE1MTJhMTdiZluOxZ0=: ]] 00:34:35.308 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NThiYzA5MWU2NzIyZGIxZmQ3YWE5ZDlmMGE1NWUyODdlNTg4NzFlMDk0NmZlYWI2ZGYyMzk0ZjE1MTJhMTdiZluOxZ0=: 00:34:35.308 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:34:35.308 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:35.308 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:35.308 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:35.308 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:35.308 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:35.308 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:35.308 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.308 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.308 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.308 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:35.308 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:35.308 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:35.308 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:35.308 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:35.308 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:35.308 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:35.308 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:35.308 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:35.308 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:35.308 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:35.309 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:35.309 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.309 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:35.567 nvme0n1 00:34:35.567 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.567 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:35.567 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:35.567 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.567 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.567 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.567 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:35.567 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:35.567 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.567 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.567 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.567 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:35.567 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:34:35.567 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:35.567 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:35.567 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:35.567 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:35.567 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWY4MDkwY2Q5NzljZTNhYzMxYTViMjU4NDhmNzRhMjVmNjE5ZWZmMDNmZDZjM2Vi1heXag==: 00:34:35.567 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWI5ZDU0MGJkNDg1YjE0NzI2ZjJiZGMxZWYzODFlZjZlNmJmYzIzOGVmNGY2NmQzbu/lJQ==: 00:34:35.567 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:35.567 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:35.567 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWY4MDkwY2Q5NzljZTNhYzMxYTViMjU4NDhmNzRhMjVmNjE5ZWZmMDNmZDZjM2Vi1heXag==: 00:34:35.567 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWI5ZDU0MGJkNDg1YjE0NzI2ZjJiZGMxZWYzODFlZjZlNmJmYzIzOGVmNGY2NmQzbu/lJQ==: ]] 00:34:35.567 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWI5ZDU0MGJkNDg1YjE0NzI2ZjJiZGMxZWYzODFlZjZlNmJmYzIzOGVmNGY2NmQzbu/lJQ==: 00:34:35.567 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:34:35.567 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:35.567 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:35.567 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:35.567 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:35.567 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:34:35.567 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:35.567 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.567 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.567 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.567 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:35.567 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:35.567 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:35.567 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:35.567 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:35.567 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:35.567 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:35.567 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:35.567 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:35.567 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:35.567 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:35.567 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:35.567 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.567 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.567 nvme0n1 00:34:35.567 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.567 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:35.567 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.567 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.567 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:35.567 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.825 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:35.825 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:35.825 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.825 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.825 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.825 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:35.825 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:34:35.825 
14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:35.825 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:35.825 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:35.825 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:35.825 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjQxNDU5NTdiYjY2YTE2M2ZiNGU3OWE4NDlhY2RiY2XQKdyc: 00:34:35.825 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTZjZjkzNDllMzgyNTE2MTE3ZjM1NTcyNzY2OTQ5ODUdbFBA: 00:34:35.825 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:35.825 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:35.825 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjQxNDU5NTdiYjY2YTE2M2ZiNGU3OWE4NDlhY2RiY2XQKdyc: 00:34:35.825 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTZjZjkzNDllMzgyNTE2MTE3ZjM1NTcyNzY2OTQ5ODUdbFBA: ]] 00:34:35.825 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTZjZjkzNDllMzgyNTE2MTE3ZjM1NTcyNzY2OTQ5ODUdbFBA: 00:34:35.825 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:34:35.825 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:35.825 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:35.825 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:35.825 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:35.825 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:35.825 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:35.825 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.825 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.825 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.825 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:35.825 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:35.825 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:35.825 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:35.825 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:35.825 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:35.825 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:35.825 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:35.825 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:35.825 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:35.825 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:35.825 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:35.825 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.825 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.825 nvme0n1 00:34:35.825 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.825 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:35.825 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.825 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.825 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:35.825 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.084 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:36.084 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:36.084 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.084 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.084 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.084 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:36.084 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:34:36.084 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:36.084 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:36.084 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:36.084 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:36.084 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzI2N2E1YmFmYTZjZDJkOGE3ZWQxNTYwYTA5NjBlNzllMGFiMjIxYTA2YmNlYWU5kVhFHQ==: 00:34:36.084 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWVmM2Y1ZjAwZjM3MTBjZDlkZmY1YjZkYjJhNzZjODW8m85E: 00:34:36.084 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:36.084 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:36.084 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzI2N2E1YmFmYTZjZDJkOGE3ZWQxNTYwYTA5NjBlNzllMGFiMjIxYTA2YmNlYWU5kVhFHQ==: 00:34:36.084 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWVmM2Y1ZjAwZjM3MTBjZDlkZmY1YjZkYjJhNzZjODW8m85E: ]] 00:34:36.084 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWVmM2Y1ZjAwZjM3MTBjZDlkZmY1YjZkYjJhNzZjODW8m85E: 00:34:36.084 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:34:36.084 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:36.084 
14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:36.084 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:36.084 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:36.084 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:36.084 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:36.084 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.084 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.084 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.084 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:36.084 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:36.084 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:36.084 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:36.084 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:36.084 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:36.084 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:36.084 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:36.084 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:36.084 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:36.084 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:36.084 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:36.084 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.084 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.084 nvme0n1 00:34:36.084 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.084 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:36.084 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.084 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:36.084 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.084 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.084 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:36.084 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:36.084 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.084 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:36.084 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.084 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:36.084 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:34:36.084 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:36.084 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:36.084 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:36.084 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:36.084 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2QyNTE0YjMwNDNiOGRiNDQxZjM2YjZiMjZlODI5ZDNkZmIyYTAwZWVlMGE4MGJmM2JhYzcyNDgyNzU5YTE0ZIhBOlE=: 00:34:36.084 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:36.084 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:36.084 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:36.084 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2QyNTE0YjMwNDNiOGRiNDQxZjM2YjZiMjZlODI5ZDNkZmIyYTAwZWVlMGE4MGJmM2JhYzcyNDgyNzU5YTE0ZIhBOlE=: 00:34:36.084 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:36.084 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:34:36.084 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:36.084 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:36.084 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:36.084 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:36.084 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:36.084 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:36.084 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.084 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.343 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.343 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:36.343 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:36.343 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:36.343 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:36.343 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:36.343 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:36.343 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:36.343 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:36.343 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:36.343 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:36.343 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:36.343 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:36.343 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.343 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.343 nvme0n1 00:34:36.343 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.343 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:36.343 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.343 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.343 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:36.343 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.343 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:36.343 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:36.343 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.343 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.343 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.343 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:36.343 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:36.343 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:34:36.343 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:36.343 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:36.343 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:36.343 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:36.343 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVmYjI4Mzg1OTAzOGZmMzQyMTQ1YWNlYjc2MDNlMzQ4ZPwh: 00:34:36.343 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NThiYzA5MWU2NzIyZGIxZmQ3YWE5ZDlmMGE1NWUyODdlNTg4NzFlMDk0NmZlYWI2ZGYyMzk0ZjE1MTJhMTdiZluOxZ0=: 00:34:36.343 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:36.343 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:36.343 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVmYjI4Mzg1OTAzOGZmMzQyMTQ1YWNlYjc2MDNlMzQ4ZPwh: 00:34:36.343 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NThiYzA5MWU2NzIyZGIxZmQ3YWE5ZDlmMGE1NWUyODdlNTg4NzFlMDk0NmZlYWI2ZGYyMzk0ZjE1MTJhMTdiZluOxZ0=: ]] 00:34:36.343 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NThiYzA5MWU2NzIyZGIxZmQ3YWE5ZDlmMGE1NWUyODdlNTg4NzFlMDk0NmZlYWI2ZGYyMzk0ZjE1MTJhMTdiZluOxZ0=: 00:34:36.343 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:34:36.343 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:36.343 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:36.343 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:36.343 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:36.343 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:36.343 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:36.343 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.343 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.343 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.343 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:36.343 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:36.343 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:36.343 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:36.343 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:36.343 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:36.343 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:36.343 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:36.343 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:36.343 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:36.343 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:36.343 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:36.343 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.343 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.669 nvme0n1 00:34:36.669 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.669 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:36.669 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.669 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:36.669 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.669 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.669 
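The pass traced above is one iteration of the loop this test repeats for every digest/dhgroup/keyid combination: nvmet_auth_set_key installs the digest, DH group and DHHC-1 secret on the target side (the echo'd values feed the target's auth configuration; the destination paths are not visible in this trace), then connect_authenticate configures the host with the same parameters and attaches a controller using the matching key. A condensed sketch of that cycle, paraphrased from the xtrace above — rpc_cmd, nvmet_auth_set_key, get_main_ns_ip and the keys[]/ckeys[] arrays are the test's own helpers, assumed to have been set up earlier in the run:

    for keyid in "${!keys[@]}"; do
        # Target side: digest, DH group and secret for this key slot.
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

        # Host side: restrict negotiation to the same digest and DH group.
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

        # Only pass --dhchap-ctrlr-key when a controller secret exists for this slot.
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"

        # The controller only shows up if DH-HMAC-CHAP authentication succeeded.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    done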
14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:36.669 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:36.669 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.669 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.669 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.669 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:36.669 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:34:36.669 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:36.669 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:36.669 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:36.669 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:36.669 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWY4MDkwY2Q5NzljZTNhYzMxYTViMjU4NDhmNzRhMjVmNjE5ZWZmMDNmZDZjM2Vi1heXag==: 00:34:36.669 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWI5ZDU0MGJkNDg1YjE0NzI2ZjJiZGMxZWYzODFlZjZlNmJmYzIzOGVmNGY2NmQzbu/lJQ==: 00:34:36.669 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:36.669 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:36.669 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWY4MDkwY2Q5NzljZTNhYzMxYTViMjU4NDhmNzRhMjVmNjE5ZWZmMDNmZDZjM2Vi1heXag==: 00:34:36.669 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWI5ZDU0MGJkNDg1YjE0NzI2ZjJiZGMxZWYzODFlZjZlNmJmYzIzOGVmNGY2NmQzbu/lJQ==: ]] 00:34:36.669 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWI5ZDU0MGJkNDg1YjE0NzI2ZjJiZGMxZWYzODFlZjZlNmJmYzIzOGVmNGY2NmQzbu/lJQ==: 00:34:36.669 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:34:36.669 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:36.669 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:36.669 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:36.669 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:36.669 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:36.669 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:36.669 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.669 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.669 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.669 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:36.669 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:36.669 14:50:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:36.669 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:36.669 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:36.669 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:36.669 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:36.669 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:36.669 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:36.669 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:36.669 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:36.669 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:36.669 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.669 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.954 nvme0n1 00:34:36.954 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.954 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:36.954 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:36.954 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.954 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.955 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.955 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:36.955 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:36.955 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.955 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.955 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.955 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:36.955 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:34:36.955 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:36.955 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:36.955 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:36.955 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:36.955 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjQxNDU5NTdiYjY2YTE2M2ZiNGU3OWE4NDlhY2RiY2XQKdyc: 00:34:36.955 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTZjZjkzNDllMzgyNTE2MTE3ZjM1NTcyNzY2OTQ5ODUdbFBA: 00:34:36.955 14:50:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:36.955 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:36.955 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjQxNDU5NTdiYjY2YTE2M2ZiNGU3OWE4NDlhY2RiY2XQKdyc: 00:34:36.955 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTZjZjkzNDllMzgyNTE2MTE3ZjM1NTcyNzY2OTQ5ODUdbFBA: ]] 00:34:36.955 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTZjZjkzNDllMzgyNTE2MTE3ZjM1NTcyNzY2OTQ5ODUdbFBA: 00:34:36.955 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:34:36.955 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:36.955 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:36.955 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:36.955 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:36.955 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:36.955 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:36.955 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.955 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.955 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.955 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:36.955 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:36.955 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:36.955 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:36.955 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:36.955 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:36.955 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:36.955 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:36.955 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:36.955 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:36.955 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:36.955 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:36.955 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.955 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.213 nvme0n1 00:34:37.213 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.213 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:37.213 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:37.213 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.213 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.213 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.213 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:37.213 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:37.213 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.213 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.213 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.213 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:37.213 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:34:37.213 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:37.213 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:37.213 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:37.213 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:37.213 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzI2N2E1YmFmYTZjZDJkOGE3ZWQxNTYwYTA5NjBlNzllMGFiMjIxYTA2YmNlYWU5kVhFHQ==: 00:34:37.213 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWVmM2Y1ZjAwZjM3MTBjZDlkZmY1YjZkYjJhNzZjODW8m85E: 00:34:37.213 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:37.213 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:37.213 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzI2N2E1YmFmYTZjZDJkOGE3ZWQxNTYwYTA5NjBlNzllMGFiMjIxYTA2YmNlYWU5kVhFHQ==: 00:34:37.213 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWVmM2Y1ZjAwZjM3MTBjZDlkZmY1YjZkYjJhNzZjODW8m85E: ]] 00:34:37.213 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWVmM2Y1ZjAwZjM3MTBjZDlkZmY1YjZkYjJhNzZjODW8m85E: 00:34:37.213 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:34:37.213 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:37.213 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:37.213 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:37.213 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:37.213 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:37.213 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:37.213 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.213 14:50:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.213 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.213 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:37.213 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:37.213 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:37.214 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:37.214 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:37.214 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:37.214 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:37.214 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:37.214 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:37.214 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:37.214 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:37.214 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:37.214 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.214 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.471 nvme0n1 00:34:37.471 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.471 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:37.472 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:37.472 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.472 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.472 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.472 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:37.472 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:37.472 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.472 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.472 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.472 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:37.472 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:34:37.472 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:37.472 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:37.472 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:37.472 
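get_main_ns_ip appears in every connect above: it maps the transport in use to the environment variable holding the initiator-side address and expands it indirectly (here NVMF_INITIATOR_IP resolves to 10.0.0.1). A minimal reconstruction from the xtrace — the name of the variable carrying the literal "tcp" is not visible in the trace, so TEST_TRANSPORT and the early-return error handling are assumptions:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()                 # the trace prints this declaration as two steps
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        # Assumed: $TEST_TRANSPORT holds the "tcp" value seen substituted in the trace.
        if [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]]; then
            return 1
        fi

        ip=${ip_candidates[$TEST_TRANSPORT]}      # e.g. NVMF_INITIATOR_IP
        [[ -n ${!ip} ]] || return 1               # indirect expansion -> 10.0.0.1
        echo "${!ip}"
    }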
14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:37.472 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2QyNTE0YjMwNDNiOGRiNDQxZjM2YjZiMjZlODI5ZDNkZmIyYTAwZWVlMGE4MGJmM2JhYzcyNDgyNzU5YTE0ZIhBOlE=: 00:34:37.472 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:37.472 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:37.472 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:37.472 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2QyNTE0YjMwNDNiOGRiNDQxZjM2YjZiMjZlODI5ZDNkZmIyYTAwZWVlMGE4MGJmM2JhYzcyNDgyNzU5YTE0ZIhBOlE=: 00:34:37.472 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:37.472 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:34:37.472 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:37.472 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:37.472 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:37.472 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:37.472 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:37.472 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:37.472 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.472 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.472 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.472 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:37.472 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:37.472 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:37.472 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:37.472 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:37.472 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:37.472 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:37.472 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:37.472 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:37.472 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:37.472 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:37.472 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:37.472 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.472 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
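Worth noting in the keyid=4 pass just above: the controller secret slot is empty (ckey=), so the ${ckeys[keyid]:+...} expansion contributes nothing and bdev_nvme_attach_controller is invoked with --dhchap-key key4 only — the host still authenticates itself, but without a controller key it does not request bidirectional authentication. For contrast, the ckey0 value below is copied (truncated) from the earlier keyid=0 pass:

    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    # ckeys[4]=""                      -> ckey=()                           (flag omitted)
    # ckeys[0]="DHHC-1:03:NThiYzA5..." -> ckey=(--dhchap-ctrlr-key ckey0)   (bidirectional auth)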
00:34:37.739 nvme0n1 00:34:37.739 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.739 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:37.739 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.739 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.739 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:37.739 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.739 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:37.739 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:37.739 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.739 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.739 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.739 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:37.739 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:37.739 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:34:37.739 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:37.739 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:37.739 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:37.739 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:37.739 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVmYjI4Mzg1OTAzOGZmMzQyMTQ1YWNlYjc2MDNlMzQ4ZPwh: 00:34:37.739 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NThiYzA5MWU2NzIyZGIxZmQ3YWE5ZDlmMGE1NWUyODdlNTg4NzFlMDk0NmZlYWI2ZGYyMzk0ZjE1MTJhMTdiZluOxZ0=: 00:34:37.739 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:37.739 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:37.739 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVmYjI4Mzg1OTAzOGZmMzQyMTQ1YWNlYjc2MDNlMzQ4ZPwh: 00:34:37.739 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NThiYzA5MWU2NzIyZGIxZmQ3YWE5ZDlmMGE1NWUyODdlNTg4NzFlMDk0NmZlYWI2ZGYyMzk0ZjE1MTJhMTdiZluOxZ0=: ]] 00:34:37.739 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NThiYzA5MWU2NzIyZGIxZmQ3YWE5ZDlmMGE1NWUyODdlNTg4NzFlMDk0NmZlYWI2ZGYyMzk0ZjE1MTJhMTdiZluOxZ0=: 00:34:37.739 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:34:37.739 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:37.739 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:37.739 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:37.739 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:37.739 14:50:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:37.739 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:37.739 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.739 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.739 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.739 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:37.739 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:37.739 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:37.739 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:37.739 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:37.739 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:37.739 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:37.739 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:37.739 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:37.739 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:37.739 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:37.739 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:37.739 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.739 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.004 nvme0n1 00:34:38.004 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:38.004 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:38.004 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:38.004 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.004 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:38.004 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:38.004 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:38.004 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:38.004 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:38.004 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.004 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:38.004 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:38.004 14:50:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:34:38.004 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:38.004 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:38.004 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:38.005 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:38.005 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWY4MDkwY2Q5NzljZTNhYzMxYTViMjU4NDhmNzRhMjVmNjE5ZWZmMDNmZDZjM2Vi1heXag==: 00:34:38.005 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWI5ZDU0MGJkNDg1YjE0NzI2ZjJiZGMxZWYzODFlZjZlNmJmYzIzOGVmNGY2NmQzbu/lJQ==: 00:34:38.005 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:38.005 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:38.005 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWY4MDkwY2Q5NzljZTNhYzMxYTViMjU4NDhmNzRhMjVmNjE5ZWZmMDNmZDZjM2Vi1heXag==: 00:34:38.005 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWI5ZDU0MGJkNDg1YjE0NzI2ZjJiZGMxZWYzODFlZjZlNmJmYzIzOGVmNGY2NmQzbu/lJQ==: ]] 00:34:38.005 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWI5ZDU0MGJkNDg1YjE0NzI2ZjJiZGMxZWYzODFlZjZlNmJmYzIzOGVmNGY2NmQzbu/lJQ==: 00:34:38.005 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:34:38.005 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:38.005 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:38.005 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:38.005 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:38.005 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:38.005 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:38.005 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:38.005 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.005 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:38.005 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:38.005 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:38.005 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:38.005 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:38.005 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:38.005 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:38.005 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:38.005 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:38.005 14:50:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:38.005 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:38.005 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:38.005 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:38.005 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:38.005 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.571 nvme0n1 00:34:38.571 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:38.571 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:38.571 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:38.571 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:38.571 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.571 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:38.571 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:38.571 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:38.571 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:38.571 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.571 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:38.571 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:38.571 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:34:38.571 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:38.571 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:38.571 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:38.571 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:38.571 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjQxNDU5NTdiYjY2YTE2M2ZiNGU3OWE4NDlhY2RiY2XQKdyc: 00:34:38.571 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTZjZjkzNDllMzgyNTE2MTE3ZjM1NTcyNzY2OTQ5ODUdbFBA: 00:34:38.571 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:38.571 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:38.571 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjQxNDU5NTdiYjY2YTE2M2ZiNGU3OWE4NDlhY2RiY2XQKdyc: 00:34:38.571 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTZjZjkzNDllMzgyNTE2MTE3ZjM1NTcyNzY2OTQ5ODUdbFBA: ]] 00:34:38.571 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTZjZjkzNDllMzgyNTE2MTE3ZjM1NTcyNzY2OTQ5ODUdbFBA: 00:34:38.571 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:34:38.571 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:38.571 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:38.571 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:38.571 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:38.571 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:38.571 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:38.571 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:38.571 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.571 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:38.571 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:38.571 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:38.571 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:38.571 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:38.571 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:38.571 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:38.571 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:38.571 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:38.571 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:38.571 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:38.571 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:38.571 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:38.571 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:38.571 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.829 nvme0n1 00:34:38.829 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:38.829 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:38.829 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:38.829 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.829 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:38.829 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:38.829 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:38.829 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:34:38.829 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:38.829 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.829 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:38.829 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:38.829 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:34:38.829 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:38.829 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:38.829 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:38.829 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:38.829 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzI2N2E1YmFmYTZjZDJkOGE3ZWQxNTYwYTA5NjBlNzllMGFiMjIxYTA2YmNlYWU5kVhFHQ==: 00:34:38.829 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWVmM2Y1ZjAwZjM3MTBjZDlkZmY1YjZkYjJhNzZjODW8m85E: 00:34:38.829 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:38.829 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:38.829 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzI2N2E1YmFmYTZjZDJkOGE3ZWQxNTYwYTA5NjBlNzllMGFiMjIxYTA2YmNlYWU5kVhFHQ==: 00:34:38.829 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWVmM2Y1ZjAwZjM3MTBjZDlkZmY1YjZkYjJhNzZjODW8m85E: ]] 00:34:38.829 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWVmM2Y1ZjAwZjM3MTBjZDlkZmY1YjZkYjJhNzZjODW8m85E: 00:34:38.829 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:34:38.829 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:38.829 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:38.829 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:38.830 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:38.830 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:38.830 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:38.830 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:38.830 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.830 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:38.830 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:38.830 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:38.830 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:38.830 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:38.830 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:38.830 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:38.830 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:38.830 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:38.830 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:38.830 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:38.830 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:38.830 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:38.830 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:38.830 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.088 nvme0n1 00:34:39.088 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.088 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:39.088 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.088 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:39.088 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.088 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.088 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:39.088 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:39.088 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.088 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.088 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.088 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:39.088 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:34:39.088 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:39.088 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:39.088 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:39.088 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:39.088 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2QyNTE0YjMwNDNiOGRiNDQxZjM2YjZiMjZlODI5ZDNkZmIyYTAwZWVlMGE4MGJmM2JhYzcyNDgyNzU5YTE0ZIhBOlE=: 00:34:39.088 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:39.088 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:39.088 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:39.088 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:Y2QyNTE0YjMwNDNiOGRiNDQxZjM2YjZiMjZlODI5ZDNkZmIyYTAwZWVlMGE4MGJmM2JhYzcyNDgyNzU5YTE0ZIhBOlE=: 00:34:39.088 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:39.088 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:34:39.088 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:39.088 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:39.088 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:39.088 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:39.088 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:39.088 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:39.088 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.088 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.088 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.088 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:39.088 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:39.088 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:39.088 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:39.088 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:39.088 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:39.088 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:39.088 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:39.088 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:39.088 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:39.088 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:39.088 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:39.088 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.088 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.654 nvme0n1 00:34:39.654 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.654 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:39.654 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:39.654 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.654 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.654 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.654 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:39.654 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:39.654 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.654 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.654 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.654 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:39.654 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:39.654 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:34:39.654 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:39.654 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:39.654 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:39.654 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:39.654 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVmYjI4Mzg1OTAzOGZmMzQyMTQ1YWNlYjc2MDNlMzQ4ZPwh: 00:34:39.654 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NThiYzA5MWU2NzIyZGIxZmQ3YWE5ZDlmMGE1NWUyODdlNTg4NzFlMDk0NmZlYWI2ZGYyMzk0ZjE1MTJhMTdiZluOxZ0=: 00:34:39.654 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:39.654 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:39.654 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVmYjI4Mzg1OTAzOGZmMzQyMTQ1YWNlYjc2MDNlMzQ4ZPwh: 00:34:39.654 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NThiYzA5MWU2NzIyZGIxZmQ3YWE5ZDlmMGE1NWUyODdlNTg4NzFlMDk0NmZlYWI2ZGYyMzk0ZjE1MTJhMTdiZluOxZ0=: ]] 00:34:39.654 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NThiYzA5MWU2NzIyZGIxZmQ3YWE5ZDlmMGE1NWUyODdlNTg4NzFlMDk0NmZlYWI2ZGYyMzk0ZjE1MTJhMTdiZluOxZ0=: 00:34:39.654 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:34:39.654 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:39.654 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:39.654 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:39.654 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:39.654 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:39.654 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:39.654 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.654 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.654 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.654 14:50:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:39.654 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:39.654 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:39.654 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:39.654 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:39.654 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:39.654 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:39.654 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:39.654 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:39.654 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:39.654 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:39.654 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:39.654 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.654 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.220 nvme0n1 00:34:40.220 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:40.220 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:40.220 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:40.220 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.220 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:40.220 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:40.220 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:40.220 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:40.220 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:40.220 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.220 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:40.220 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:40.220 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:34:40.220 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:40.220 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:40.220 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:40.220 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:40.220 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YWY4MDkwY2Q5NzljZTNhYzMxYTViMjU4NDhmNzRhMjVmNjE5ZWZmMDNmZDZjM2Vi1heXag==: 00:34:40.220 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWI5ZDU0MGJkNDg1YjE0NzI2ZjJiZGMxZWYzODFlZjZlNmJmYzIzOGVmNGY2NmQzbu/lJQ==: 00:34:40.220 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:40.220 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:40.220 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWY4MDkwY2Q5NzljZTNhYzMxYTViMjU4NDhmNzRhMjVmNjE5ZWZmMDNmZDZjM2Vi1heXag==: 00:34:40.220 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWI5ZDU0MGJkNDg1YjE0NzI2ZjJiZGMxZWYzODFlZjZlNmJmYzIzOGVmNGY2NmQzbu/lJQ==: ]] 00:34:40.220 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWI5ZDU0MGJkNDg1YjE0NzI2ZjJiZGMxZWYzODFlZjZlNmJmYzIzOGVmNGY2NmQzbu/lJQ==: 00:34:40.220 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:34:40.220 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:40.220 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:40.220 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:40.220 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:40.220 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:40.220 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:40.220 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:40.220 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.220 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:40.220 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:40.220 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:40.220 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:40.220 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:40.220 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:40.220 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:40.220 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:40.220 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:40.220 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:40.220 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:40.220 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:40.220 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:40.220 14:50:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:40.220 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.786 nvme0n1 00:34:40.786 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:40.786 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:40.786 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:40.786 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:40.786 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.786 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:40.786 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:40.786 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:40.786 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:40.786 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.786 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:40.786 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:40.786 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:34:40.786 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:40.786 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:40.786 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:40.786 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:40.786 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjQxNDU5NTdiYjY2YTE2M2ZiNGU3OWE4NDlhY2RiY2XQKdyc: 00:34:40.786 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTZjZjkzNDllMzgyNTE2MTE3ZjM1NTcyNzY2OTQ5ODUdbFBA: 00:34:40.786 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:40.786 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:40.786 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjQxNDU5NTdiYjY2YTE2M2ZiNGU3OWE4NDlhY2RiY2XQKdyc: 00:34:40.786 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTZjZjkzNDllMzgyNTE2MTE3ZjM1NTcyNzY2OTQ5ODUdbFBA: ]] 00:34:40.787 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTZjZjkzNDllMzgyNTE2MTE3ZjM1NTcyNzY2OTQ5ODUdbFBA: 00:34:40.787 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:34:40.787 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:40.787 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:40.787 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:40.787 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:40.787 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:40.787 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:40.787 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:40.787 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.787 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:40.787 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:40.787 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:40.787 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:40.787 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:40.787 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:40.787 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:40.787 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:40.787 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:40.787 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:40.787 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:40.787 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:40.787 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:40.787 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:40.787 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.353 nvme0n1 00:34:41.353 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.353 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:41.353 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:41.353 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.353 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.353 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.353 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:41.353 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:41.353 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.353 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.353 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.354 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:41.354 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:34:41.354 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:41.354 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:41.354 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:41.354 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:41.354 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzI2N2E1YmFmYTZjZDJkOGE3ZWQxNTYwYTA5NjBlNzllMGFiMjIxYTA2YmNlYWU5kVhFHQ==: 00:34:41.354 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWVmM2Y1ZjAwZjM3MTBjZDlkZmY1YjZkYjJhNzZjODW8m85E: 00:34:41.354 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:41.354 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:41.354 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzI2N2E1YmFmYTZjZDJkOGE3ZWQxNTYwYTA5NjBlNzllMGFiMjIxYTA2YmNlYWU5kVhFHQ==: 00:34:41.354 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWVmM2Y1ZjAwZjM3MTBjZDlkZmY1YjZkYjJhNzZjODW8m85E: ]] 00:34:41.354 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWVmM2Y1ZjAwZjM3MTBjZDlkZmY1YjZkYjJhNzZjODW8m85E: 00:34:41.354 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:34:41.354 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:41.354 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:41.354 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:41.354 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:41.354 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:41.354 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:41.354 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.354 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.354 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.354 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:41.354 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:41.354 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:41.354 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:41.354 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:41.354 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:41.354 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:41.354 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:41.354 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:41.354 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:41.354 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:41.354 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:41.354 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.354 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.920 nvme0n1 00:34:41.920 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.920 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:41.920 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:41.920 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.920 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.920 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.920 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:41.920 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:41.920 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.920 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.920 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.920 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:41.920 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:34:41.920 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:41.920 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:41.920 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:41.920 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:41.920 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2QyNTE0YjMwNDNiOGRiNDQxZjM2YjZiMjZlODI5ZDNkZmIyYTAwZWVlMGE4MGJmM2JhYzcyNDgyNzU5YTE0ZIhBOlE=: 00:34:41.920 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:41.920 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:41.920 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:41.920 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2QyNTE0YjMwNDNiOGRiNDQxZjM2YjZiMjZlODI5ZDNkZmIyYTAwZWVlMGE4MGJmM2JhYzcyNDgyNzU5YTE0ZIhBOlE=: 00:34:41.920 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:41.920 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:34:41.920 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:41.920 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:41.920 14:50:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:41.920 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:41.920 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:41.920 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:41.920 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.920 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.920 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.920 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:41.920 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:41.920 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:41.920 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:41.920 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:41.920 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:41.920 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:41.920 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:41.920 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:41.920 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:41.920 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:41.920 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:41.920 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.920 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.485 nvme0n1 00:34:42.485 14:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:42.485 14:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:42.485 14:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:42.485 14:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:42.485 14:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.485 14:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:42.485 14:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:42.486 14:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:42.486 14:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:42.486 14:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.486 14:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:42.486 14:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:42.486 14:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:42.486 14:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:34:42.486 14:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:42.486 14:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:42.486 14:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:42.486 14:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:42.486 14:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVmYjI4Mzg1OTAzOGZmMzQyMTQ1YWNlYjc2MDNlMzQ4ZPwh: 00:34:42.486 14:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NThiYzA5MWU2NzIyZGIxZmQ3YWE5ZDlmMGE1NWUyODdlNTg4NzFlMDk0NmZlYWI2ZGYyMzk0ZjE1MTJhMTdiZluOxZ0=: 00:34:42.486 14:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:42.486 14:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:42.486 14:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVmYjI4Mzg1OTAzOGZmMzQyMTQ1YWNlYjc2MDNlMzQ4ZPwh: 00:34:42.486 14:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NThiYzA5MWU2NzIyZGIxZmQ3YWE5ZDlmMGE1NWUyODdlNTg4NzFlMDk0NmZlYWI2ZGYyMzk0ZjE1MTJhMTdiZluOxZ0=: ]] 00:34:42.486 14:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NThiYzA5MWU2NzIyZGIxZmQ3YWE5ZDlmMGE1NWUyODdlNTg4NzFlMDk0NmZlYWI2ZGYyMzk0ZjE1MTJhMTdiZluOxZ0=: 00:34:42.486 14:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:34:42.486 14:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:42.486 14:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:42.486 14:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:42.486 14:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:42.486 14:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:42.486 14:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:42.486 14:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:42.486 14:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.486 14:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:42.486 14:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:42.486 14:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:42.486 14:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:42.486 14:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:42.486 14:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:42.486 14:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:42.486 14:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:42.486 14:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:42.486 14:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:42.486 14:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:42.486 14:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:42.486 14:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:42.486 14:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:42.486 14:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.419 nvme0n1 00:34:43.419 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.419 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:43.419 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:43.419 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.419 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.419 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.419 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:43.419 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:43.419 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.419 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.419 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.419 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:43.419 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:34:43.419 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:43.419 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:43.419 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:43.419 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:43.419 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWY4MDkwY2Q5NzljZTNhYzMxYTViMjU4NDhmNzRhMjVmNjE5ZWZmMDNmZDZjM2Vi1heXag==: 00:34:43.420 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWI5ZDU0MGJkNDg1YjE0NzI2ZjJiZGMxZWYzODFlZjZlNmJmYzIzOGVmNGY2NmQzbu/lJQ==: 00:34:43.420 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:43.420 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:43.420 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YWY4MDkwY2Q5NzljZTNhYzMxYTViMjU4NDhmNzRhMjVmNjE5ZWZmMDNmZDZjM2Vi1heXag==: 00:34:43.420 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWI5ZDU0MGJkNDg1YjE0NzI2ZjJiZGMxZWYzODFlZjZlNmJmYzIzOGVmNGY2NmQzbu/lJQ==: ]] 00:34:43.420 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWI5ZDU0MGJkNDg1YjE0NzI2ZjJiZGMxZWYzODFlZjZlNmJmYzIzOGVmNGY2NmQzbu/lJQ==: 00:34:43.420 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:34:43.420 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:43.420 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:43.420 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:43.420 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:43.420 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:43.420 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:43.420 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.420 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.420 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.420 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:43.420 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:43.420 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:43.420 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:43.420 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:43.420 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:43.420 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:43.420 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:43.420 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:43.420 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:43.420 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:43.420 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:43.420 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.420 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.352 nvme0n1 00:34:44.352 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:44.352 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:44.352 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:44.352 14:50:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.352 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:44.352 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:44.352 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:44.352 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:44.352 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:44.352 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.352 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:44.352 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:44.352 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:34:44.352 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:44.352 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:44.352 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:44.352 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:44.352 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjQxNDU5NTdiYjY2YTE2M2ZiNGU3OWE4NDlhY2RiY2XQKdyc: 00:34:44.352 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTZjZjkzNDllMzgyNTE2MTE3ZjM1NTcyNzY2OTQ5ODUdbFBA: 00:34:44.352 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:44.352 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:44.610 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjQxNDU5NTdiYjY2YTE2M2ZiNGU3OWE4NDlhY2RiY2XQKdyc: 00:34:44.610 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTZjZjkzNDllMzgyNTE2MTE3ZjM1NTcyNzY2OTQ5ODUdbFBA: ]] 00:34:44.610 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTZjZjkzNDllMzgyNTE2MTE3ZjM1NTcyNzY2OTQ5ODUdbFBA: 00:34:44.610 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:34:44.610 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:44.610 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:44.610 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:44.610 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:44.610 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:44.610 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:44.610 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:44.610 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.610 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:44.610 14:50:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:44.610 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:44.610 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:44.611 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:44.611 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:44.611 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:44.611 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:44.611 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:44.611 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:44.611 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:44.611 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:44.611 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:44.611 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:44.611 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.546 nvme0n1 00:34:45.546 14:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:45.546 14:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:45.546 14:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:45.546 14:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:45.546 14:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.546 14:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:45.546 14:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:45.546 14:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:45.546 14:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:45.546 14:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.546 14:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:45.546 14:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:45.546 14:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:34:45.546 14:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:45.546 14:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:45.546 14:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:45.546 14:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:45.546 14:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YzI2N2E1YmFmYTZjZDJkOGE3ZWQxNTYwYTA5NjBlNzllMGFiMjIxYTA2YmNlYWU5kVhFHQ==: 00:34:45.546 14:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWVmM2Y1ZjAwZjM3MTBjZDlkZmY1YjZkYjJhNzZjODW8m85E: 00:34:45.546 14:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:45.546 14:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:45.546 14:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzI2N2E1YmFmYTZjZDJkOGE3ZWQxNTYwYTA5NjBlNzllMGFiMjIxYTA2YmNlYWU5kVhFHQ==: 00:34:45.546 14:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWVmM2Y1ZjAwZjM3MTBjZDlkZmY1YjZkYjJhNzZjODW8m85E: ]] 00:34:45.546 14:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWVmM2Y1ZjAwZjM3MTBjZDlkZmY1YjZkYjJhNzZjODW8m85E: 00:34:45.546 14:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:34:45.546 14:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:45.546 14:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:45.546 14:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:45.546 14:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:45.546 14:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:45.546 14:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:45.546 14:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:45.546 14:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.546 14:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:45.546 14:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:45.546 14:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:45.546 14:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:45.546 14:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:45.546 14:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:45.546 14:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:45.546 14:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:45.546 14:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:45.546 14:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:45.546 14:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:45.546 14:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:45.546 14:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:45.546 14:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:45.546 
14:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.479 nvme0n1 00:34:46.479 14:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:46.479 14:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:46.479 14:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:46.479 14:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.479 14:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:46.737 14:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:46.737 14:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:46.737 14:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:46.737 14:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:46.737 14:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.737 14:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:46.737 14:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:46.737 14:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:34:46.737 14:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:46.737 14:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:46.737 14:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:46.737 14:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:46.737 14:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2QyNTE0YjMwNDNiOGRiNDQxZjM2YjZiMjZlODI5ZDNkZmIyYTAwZWVlMGE4MGJmM2JhYzcyNDgyNzU5YTE0ZIhBOlE=: 00:34:46.737 14:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:46.737 14:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:46.737 14:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:46.737 14:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2QyNTE0YjMwNDNiOGRiNDQxZjM2YjZiMjZlODI5ZDNkZmIyYTAwZWVlMGE4MGJmM2JhYzcyNDgyNzU5YTE0ZIhBOlE=: 00:34:46.737 14:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:46.737 14:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:34:46.737 14:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:46.737 14:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:46.737 14:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:46.737 14:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:46.737 14:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:46.737 14:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:46.737 14:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:34:46.737 14:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.737 14:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:46.737 14:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:46.737 14:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:46.737 14:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:46.737 14:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:46.737 14:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:46.737 14:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:46.737 14:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:46.737 14:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:46.737 14:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:46.737 14:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:46.737 14:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:46.737 14:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:46.737 14:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:46.737 14:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.674 nvme0n1 00:34:47.674 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:47.674 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:47.674 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:47.675 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:47.675 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.675 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:47.675 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:47.675 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:47.675 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:47.675 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.675 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:47.675 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:47.675 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:47.675 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:47.675 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:47.675 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:34:47.675 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWY4MDkwY2Q5NzljZTNhYzMxYTViMjU4NDhmNzRhMjVmNjE5ZWZmMDNmZDZjM2Vi1heXag==: 00:34:47.675 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWI5ZDU0MGJkNDg1YjE0NzI2ZjJiZGMxZWYzODFlZjZlNmJmYzIzOGVmNGY2NmQzbu/lJQ==: 00:34:47.675 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:47.675 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:47.675 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWY4MDkwY2Q5NzljZTNhYzMxYTViMjU4NDhmNzRhMjVmNjE5ZWZmMDNmZDZjM2Vi1heXag==: 00:34:47.675 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWI5ZDU0MGJkNDg1YjE0NzI2ZjJiZGMxZWYzODFlZjZlNmJmYzIzOGVmNGY2NmQzbu/lJQ==: ]] 00:34:47.675 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWI5ZDU0MGJkNDg1YjE0NzI2ZjJiZGMxZWYzODFlZjZlNmJmYzIzOGVmNGY2NmQzbu/lJQ==: 00:34:47.675 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:47.675 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:47.675 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.675 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:47.675 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:34:47.675 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:47.675 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:47.676 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:47.676 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:47.676 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:47.676 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:47.676 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:47.676 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:47.676 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:47.676 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:47.676 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:47.676 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:34:47.676 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:47.676 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:47.676 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:47.676 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:47.676 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:47.676 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:47.676 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:47.676 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.676 request: 00:34:47.676 { 00:34:47.676 "name": "nvme0", 00:34:47.676 "trtype": "tcp", 00:34:47.676 "traddr": "10.0.0.1", 00:34:47.676 "adrfam": "ipv4", 00:34:47.676 "trsvcid": "4420", 00:34:47.676 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:47.676 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:47.676 "prchk_reftag": false, 00:34:47.676 "prchk_guard": false, 00:34:47.676 "hdgst": false, 00:34:47.676 "ddgst": false, 00:34:47.676 "allow_unrecognized_csi": false, 00:34:47.676 "method": "bdev_nvme_attach_controller", 00:34:47.676 "req_id": 1 00:34:47.676 } 00:34:47.676 Got JSON-RPC error response 00:34:47.676 response: 00:34:47.676 { 00:34:47.676 "code": -5, 00:34:47.676 "message": "Input/output error" 00:34:47.676 } 00:34:47.676 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:47.676 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:34:47.676 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:47.676 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:47.676 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:47.676 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:34:47.676 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:34:47.676 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:47.676 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.676 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:47.939 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:34:47.939 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:34:47.939 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:47.939 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:47.939 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:47.939 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:47.939 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:47.939 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:47.939 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:47.939 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:47.939 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 
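The failed attach traced above is the negative half of the test: once the target side requires DH-HMAC-CHAP, connecting without a key (or, below, with the wrong key material) must be rejected, which surfaces on the host as JSON-RPC error -5, "Input/output error", and must leave no controller behind. A minimal stand-alone sketch of that check, assuming SPDK's scripts/rpc.py front end in place of the harness's rpc_cmd/NOT wrappers (an assumption; only the RPC names and flags are taken from this log):

    rpc=scripts/rpc.py    # assumed CLI front end; the test run calls these RPCs via its rpc_cmd wrapper

    # Attaching without --dhchap-key against an auth-required subsystem is
    # expected to fail; treat an unauthenticated success as a test failure.
    if $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0; then
        echo "unexpected: unauthenticated connect succeeded" >&2
        exit 1
    fi

    # The rejected attach must not leave a stale controller around.
    [[ $($rpc bdev_nvme_get_controllers | jq length) -eq 0 ]]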
00:34:47.939 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:47.939 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:47.939 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:34:47.939 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:47.939 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:47.939 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:47.939 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:47.939 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:47.939 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:47.939 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:47.939 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.939 request: 00:34:47.939 { 00:34:47.939 "name": "nvme0", 00:34:47.939 "trtype": "tcp", 00:34:47.939 "traddr": "10.0.0.1", 00:34:47.939 "adrfam": "ipv4", 00:34:47.939 "trsvcid": "4420", 00:34:47.939 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:47.939 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:47.939 "prchk_reftag": false, 00:34:47.939 "prchk_guard": false, 00:34:47.939 "hdgst": false, 00:34:47.939 "ddgst": false, 00:34:47.939 "dhchap_key": "key2", 00:34:47.939 "allow_unrecognized_csi": false, 00:34:47.939 "method": "bdev_nvme_attach_controller", 00:34:47.939 "req_id": 1 00:34:47.939 } 00:34:47.939 Got JSON-RPC error response 00:34:47.939 response: 00:34:47.939 { 00:34:47.939 "code": -5, 00:34:47.939 "message": "Input/output error" 00:34:47.939 } 00:34:47.939 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:47.939 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:34:47.939 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:47.939 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:47.939 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:47.939 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:34:47.939 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:47.939 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.939 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:34:47.939 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:47.939 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
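Annotation (not captured output from this run): the trace above is the DH-HMAC-CHAP negative path. With authentication required, bdev_nvme_attach_controller against 10.0.0.1:4420 without a key, and then with only key2, must fail with JSON-RPC code -5 (Input/output error), and bdev_nvme_get_controllers | jq length must stay at 0. A minimal sketch of the same check outside the test harness, assuming the SPDK host application and the authenticated kernel nvmet target set up earlier in this run are still up and the DHHC-1 keys (key1/key2/ckey1/ckey2) are already loaded; the rpc.py path, NQNs and flags below are copied from the trace:

  # Sketch only -- assumes the target from this run is listening on 10.0.0.1:4420
  # with DH-HMAC-CHAP required for nqn.2024-02.io.spdk:host0 / cnode0.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # Host-side digest/dhgroup policy, as in host/auth.sh@111 above.
  $RPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

  # Attach with the wrong key selection: authentication fails, so the RPC is
  # expected to return code -5 "Input/output error" instead of creating nvme0.
  if $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2; then
      echo "unexpected: attach succeeded with the wrong key" >&2
      exit 1
  fi

  # The failed attach must not leave a controller behind.
  [ "$($RPC bdev_nvme_get_controllers | jq length)" -eq 0 ] || exit 1

The same pattern covers the bdev_nvme_set_keys checks later in the trace: re-keying an existing controller with a mismatched controller key is rejected with code -13 (Permission denied) rather than -5.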
00:34:47.939 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:34:47.939 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:47.939 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:47.939 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:47.939 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:47.939 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:47.939 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:47.939 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:47.939 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:47.939 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:47.939 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:47.939 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:47.939 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:34:47.939 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:47.939 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:47.939 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:47.939 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:47.939 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:47.939 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:47.939 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:47.939 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.939 request: 00:34:47.939 { 00:34:47.939 "name": "nvme0", 00:34:47.939 "trtype": "tcp", 00:34:47.939 "traddr": "10.0.0.1", 00:34:47.939 "adrfam": "ipv4", 00:34:47.939 "trsvcid": "4420", 00:34:47.939 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:47.939 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:47.939 "prchk_reftag": false, 00:34:47.939 "prchk_guard": false, 00:34:47.939 "hdgst": false, 00:34:47.939 "ddgst": false, 00:34:47.939 "dhchap_key": "key1", 00:34:47.939 "dhchap_ctrlr_key": "ckey2", 00:34:47.939 "allow_unrecognized_csi": false, 00:34:47.939 "method": "bdev_nvme_attach_controller", 00:34:47.939 "req_id": 1 00:34:47.939 } 00:34:47.939 Got JSON-RPC error response 00:34:47.939 response: 00:34:47.939 { 00:34:47.939 "code": -5, 00:34:47.939 "message": "Input/output 
error" 00:34:47.939 } 00:34:47.939 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:47.939 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:34:47.939 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:47.939 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:47.939 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:47.939 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:34:47.939 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:47.939 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:47.939 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:47.939 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:47.940 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:47.940 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:47.940 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:47.940 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:47.940 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:47.940 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:47.940 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:34:47.940 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:47.940 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.197 nvme0n1 00:34:48.197 14:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:48.197 14:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:48.197 14:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:48.197 14:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:48.197 14:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:48.197 14:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:48.197 14:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjQxNDU5NTdiYjY2YTE2M2ZiNGU3OWE4NDlhY2RiY2XQKdyc: 00:34:48.197 14:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTZjZjkzNDllMzgyNTE2MTE3ZjM1NTcyNzY2OTQ5ODUdbFBA: 00:34:48.197 14:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:48.197 14:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:48.197 14:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjQxNDU5NTdiYjY2YTE2M2ZiNGU3OWE4NDlhY2RiY2XQKdyc: 00:34:48.197 14:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTZjZjkzNDllMzgyNTE2MTE3ZjM1NTcyNzY2OTQ5ODUdbFBA: ]] 00:34:48.197 14:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTZjZjkzNDllMzgyNTE2MTE3ZjM1NTcyNzY2OTQ5ODUdbFBA: 00:34:48.197 14:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:48.197 14:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:48.197 14:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.197 14:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:48.197 14:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:34:48.197 14:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:48.197 14:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:34:48.197 14:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.197 14:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:48.197 14:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:48.197 14:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:48.197 14:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:34:48.197 14:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:48.197 14:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:48.197 14:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:48.197 14:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:48.197 14:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:48.197 14:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:48.197 14:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:48.197 14:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.455 request: 00:34:48.455 { 00:34:48.455 "name": "nvme0", 00:34:48.455 "dhchap_key": "key1", 00:34:48.455 "dhchap_ctrlr_key": "ckey2", 00:34:48.455 "method": "bdev_nvme_set_keys", 00:34:48.455 "req_id": 1 00:34:48.455 } 00:34:48.455 Got JSON-RPC error response 00:34:48.455 response: 00:34:48.455 { 00:34:48.455 "code": -13, 00:34:48.455 "message": "Permission denied" 00:34:48.455 } 00:34:48.455 14:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:48.455 14:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:34:48.455 14:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:48.455 14:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:48.455 14:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:34:48.455 14:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:34:48.455 14:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:34:48.455 14:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:48.455 14:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.455 14:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:48.455 14:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:34:48.455 14:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:34:49.389 14:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:34:49.389 14:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:34:49.389 14:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:49.389 14:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.389 14:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:49.389 14:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:34:49.389 14:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:34:50.762 14:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:34:50.762 14:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:34:50.762 14:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:50.763 14:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.763 14:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:50.763 14:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:34:50.763 14:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:50.763 14:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:50.763 14:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:50.763 14:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:50.763 14:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:50.763 14:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWY4MDkwY2Q5NzljZTNhYzMxYTViMjU4NDhmNzRhMjVmNjE5ZWZmMDNmZDZjM2Vi1heXag==: 00:34:50.763 14:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWI5ZDU0MGJkNDg1YjE0NzI2ZjJiZGMxZWYzODFlZjZlNmJmYzIzOGVmNGY2NmQzbu/lJQ==: 00:34:50.763 14:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:50.763 14:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:50.763 14:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWY4MDkwY2Q5NzljZTNhYzMxYTViMjU4NDhmNzRhMjVmNjE5ZWZmMDNmZDZjM2Vi1heXag==: 00:34:50.763 14:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWI5ZDU0MGJkNDg1YjE0NzI2ZjJiZGMxZWYzODFlZjZlNmJmYzIzOGVmNGY2NmQzbu/lJQ==: ]] 00:34:50.763 14:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:NWI5ZDU0MGJkNDg1YjE0NzI2ZjJiZGMxZWYzODFlZjZlNmJmYzIzOGVmNGY2NmQzbu/lJQ==: 00:34:50.763 14:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:34:50.763 14:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:50.763 14:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:50.763 14:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:50.763 14:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:50.763 14:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:50.763 14:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:50.763 14:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:50.763 14:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:50.763 14:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:50.763 14:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:50.763 14:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:34:50.763 14:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:50.763 14:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.763 nvme0n1 00:34:50.763 14:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:50.763 14:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:50.763 14:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:50.763 14:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:50.763 14:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:50.763 14:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:50.763 14:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjQxNDU5NTdiYjY2YTE2M2ZiNGU3OWE4NDlhY2RiY2XQKdyc: 00:34:50.763 14:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTZjZjkzNDllMzgyNTE2MTE3ZjM1NTcyNzY2OTQ5ODUdbFBA: 00:34:50.763 14:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:50.763 14:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:50.763 14:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjQxNDU5NTdiYjY2YTE2M2ZiNGU3OWE4NDlhY2RiY2XQKdyc: 00:34:50.763 14:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTZjZjkzNDllMzgyNTE2MTE3ZjM1NTcyNzY2OTQ5ODUdbFBA: ]] 00:34:50.763 14:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTZjZjkzNDllMzgyNTE2MTE3ZjM1NTcyNzY2OTQ5ODUdbFBA: 00:34:50.763 14:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:50.763 14:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@650 -- # local es=0 00:34:50.763 14:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:50.763 14:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:50.763 14:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:50.763 14:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:50.763 14:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:50.763 14:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:50.763 14:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:50.763 14:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.763 request: 00:34:50.763 { 00:34:50.763 "name": "nvme0", 00:34:50.763 "dhchap_key": "key2", 00:34:50.763 "dhchap_ctrlr_key": "ckey1", 00:34:50.763 "method": "bdev_nvme_set_keys", 00:34:50.763 "req_id": 1 00:34:50.763 } 00:34:50.763 Got JSON-RPC error response 00:34:50.763 response: 00:34:50.763 { 00:34:50.763 "code": -13, 00:34:50.763 "message": "Permission denied" 00:34:50.763 } 00:34:50.763 14:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:50.763 14:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:34:50.763 14:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:50.763 14:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:50.763 14:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:50.763 14:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:34:50.763 14:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:34:50.763 14:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:50.763 14:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.763 14:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:50.763 14:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:34:50.763 14:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:34:51.696 14:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:34:51.696 14:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:34:51.696 14:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:51.696 14:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.696 14:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:51.696 14:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:34:51.696 14:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:34:51.696 14:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:34:51.696 14:50:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:34:51.696 14:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # nvmfcleanup 00:34:51.696 14:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:34:51.696 14:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:51.696 14:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:34:51.696 14:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:51.696 14:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:51.696 rmmod nvme_tcp 00:34:51.954 rmmod nvme_fabrics 00:34:51.954 14:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:51.954 14:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:34:51.954 14:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:34:51.954 14:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@513 -- # '[' -n 1515476 ']' 00:34:51.954 14:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@514 -- # killprocess 1515476 00:34:51.954 14:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 1515476 ']' 00:34:51.954 14:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 1515476 00:34:51.954 14:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:34:51.954 14:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:51.954 14:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1515476 00:34:51.954 14:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:51.954 14:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:51.954 14:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1515476' 00:34:51.954 killing process with pid 1515476 00:34:51.954 14:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 1515476 00:34:51.954 14:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 1515476 00:34:52.213 14:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:34:52.213 14:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:34:52.213 14:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:34:52.213 14:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:34:52.213 14:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # iptables-save 00:34:52.213 14:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:34:52.213 14:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # iptables-restore 00:34:52.213 14:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:52.213 14:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:52.213 14:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:52.213 14:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:34:52.213 14:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:54.113 14:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:54.113 14:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:34:54.113 14:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:54.113 14:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:34:54.113 14:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:34:54.113 14:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@710 -- # echo 0 00:34:54.113 14:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:54.113 14:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:54.113 14:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:54.113 14:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:54.113 14:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:34:54.113 14:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet 00:34:54.113 14:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@722 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:55.490 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:55.490 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:55.490 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:55.490 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:55.490 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:55.490 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:55.490 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:55.490 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:55.490 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:55.490 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:55.490 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:55.490 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:55.490 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:55.490 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:55.490 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:55.490 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:56.427 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:34:56.427 14:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.sHw /tmp/spdk.key-null.OUy /tmp/spdk.key-sha256.R8I /tmp/spdk.key-sha384.kGv /tmp/spdk.key-sha512.EfI /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:34:56.427 14:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:57.802 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:34:57.802 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:34:57.802 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 
00:34:57.802 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:34:57.802 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:34:57.802 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:34:57.802 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:34:57.802 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:34:57.802 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:34:57.802 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:34:57.802 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:34:57.802 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:34:57.802 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:34:57.802 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:34:57.802 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:34:57.802 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:34:57.802 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:34:57.802 00:34:57.802 real 0m53.714s 00:34:57.802 user 0m51.077s 00:34:57.802 sys 0m5.866s 00:34:57.802 14:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:57.802 14:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.802 ************************************ 00:34:57.802 END TEST nvmf_auth_host 00:34:57.802 ************************************ 00:34:57.802 14:50:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:34:57.802 14:50:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:34:57.802 14:50:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:34:57.802 14:50:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:57.802 14:50:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.802 ************************************ 00:34:57.802 START TEST nvmf_digest 00:34:57.802 ************************************ 00:34:57.802 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:34:57.802 * Looking for test storage... 
00:34:57.802 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:57.802 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:34:57.802 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lcov --version 00:34:57.802 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:34:57.802 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:34:57.802 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:57.802 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:57.802 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:57.802 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:34:57.802 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:34:57.802 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:34:57.802 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:34:57.802 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:34:57.802 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:34:57.802 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:34:57.803 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:57.803 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:34:57.803 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:34:57.803 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:57.803 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:57.803 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:34:57.803 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:34:57.803 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:57.803 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:34:57.803 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:34:57.803 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:34:57.803 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:34:57.803 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:57.803 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:34:57.803 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:34:57.803 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:57.803 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:57.803 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:34:57.803 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:57.803 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:34:57.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:57.803 --rc genhtml_branch_coverage=1 00:34:57.803 --rc genhtml_function_coverage=1 00:34:57.803 --rc genhtml_legend=1 00:34:57.803 --rc geninfo_all_blocks=1 00:34:57.803 --rc geninfo_unexecuted_blocks=1 00:34:57.803 00:34:57.803 ' 00:34:57.803 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:34:57.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:57.803 --rc genhtml_branch_coverage=1 00:34:57.803 --rc genhtml_function_coverage=1 00:34:57.803 --rc genhtml_legend=1 00:34:57.803 --rc geninfo_all_blocks=1 00:34:57.803 --rc geninfo_unexecuted_blocks=1 00:34:57.803 00:34:57.803 ' 00:34:57.803 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:34:57.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:57.803 --rc genhtml_branch_coverage=1 00:34:57.803 --rc genhtml_function_coverage=1 00:34:57.803 --rc genhtml_legend=1 00:34:57.803 --rc geninfo_all_blocks=1 00:34:57.803 --rc geninfo_unexecuted_blocks=1 00:34:57.803 00:34:57.803 ' 00:34:57.803 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:34:57.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:57.803 --rc genhtml_branch_coverage=1 00:34:57.803 --rc genhtml_function_coverage=1 00:34:57.803 --rc genhtml_legend=1 00:34:57.803 --rc geninfo_all_blocks=1 00:34:57.803 --rc geninfo_unexecuted_blocks=1 00:34:57.803 00:34:57.803 ' 00:34:57.803 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:57.803 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:34:57.803 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:57.803 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:57.803 
14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:57.803 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:57.803 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:57.803 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:57.803 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:57.803 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:57.803 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:57.803 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:57.803 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:57.803 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:57.803 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:57.803 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:57.803 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:57.803 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:57.803 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:57.803 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:34:57.803 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:57.803 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:57.803 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:57.803 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:57.803 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:57.803 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:57.803 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:34:57.803 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:57.803 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:34:57.803 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:57.803 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:57.803 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:57.803 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:57.803 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:57.803 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:57.803 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:57.803 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:57.803 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:57.803 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:57.803 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:34:57.803 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:34:57.803 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:34:57.803 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:34:57.803 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:34:57.803 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:34:57.803 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:57.803 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@472 -- # prepare_net_devs 00:34:57.803 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@434 -- # local -g is_hw=no 00:34:57.803 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@436 -- # remove_spdk_ns 00:34:57.803 14:50:49 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:57.803 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:57.803 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:58.061 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:34:58.062 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:34:58.062 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:34:58.062 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:59.964 
14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:59.964 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:59.964 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ up == up ]] 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:59.964 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:34:59.964 
14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ up == up ]] 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:59.964 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # is_hw=yes 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:59.964 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:59.965 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 
up 00:34:59.965 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:00.223 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:00.223 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:00.223 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:00.223 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:00.223 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:00.223 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.241 ms 00:35:00.223 00:35:00.223 --- 10.0.0.2 ping statistics --- 00:35:00.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:00.223 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:35:00.223 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:00.223 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:00.223 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:35:00.223 00:35:00.223 --- 10.0.0.1 ping statistics --- 00:35:00.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:00.223 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:35:00.223 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:00.223 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # return 0 00:35:00.223 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:35:00.223 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:00.223 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:35:00.223 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:35:00.223 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:00.223 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:35:00.223 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:35:00.223 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:35:00.223 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:35:00.223 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:35:00.223 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:00.223 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:00.223 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:00.223 ************************************ 00:35:00.223 START TEST nvmf_digest_clean 00:35:00.223 ************************************ 00:35:00.223 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:35:00.223 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:35:00.223 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:35:00.223 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:35:00.223 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:35:00.223 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:35:00.223 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:35:00.223 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:00.223 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:00.223 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@505 -- # nvmfpid=1525383 00:35:00.223 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:35:00.223 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@506 -- # waitforlisten 1525383 00:35:00.223 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1525383 ']' 00:35:00.223 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:00.223 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:00.224 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:00.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:00.224 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:00.224 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:00.224 [2024-11-02 14:50:52.223085] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:35:00.224 [2024-11-02 14:50:52.223157] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:00.483 [2024-11-02 14:50:52.288247] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:00.483 [2024-11-02 14:50:52.370728] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:00.483 [2024-11-02 14:50:52.370783] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:00.483 [2024-11-02 14:50:52.370806] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:00.483 [2024-11-02 14:50:52.370818] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:00.483 [2024-11-02 14:50:52.370827] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
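The namespace plumbing traced above is what the nvmf_tcp_init step amounts to on this rig: cvl_0_0 is moved into a private namespace and becomes the target port (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator port (10.0.0.1). A condensed recap of those commands, with the Jenkins workspace paths shortened and the iptables comment tag omitted:

    # target port lives inside the namespace, initiator port stays outside
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # NVMe/TCP listener port
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    modprobe nvme-tcp
    # the target app is then started inside the namespace, held at --wait-for-rpc
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &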
00:35:00.483 [2024-11-02 14:50:52.370859] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:35:00.483 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:00.483 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:35:00.483 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:35:00.483 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:00.483 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:00.483 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:00.483 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:35:00.483 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:35:00.483 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:35:00.483 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:00.483 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:00.741 null0 00:35:00.741 [2024-11-02 14:50:52.580375] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:00.741 [2024-11-02 14:50:52.604620] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:00.741 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:00.741 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:35:00.741 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:00.741 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:00.741 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:35:00.741 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:35:00.741 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:35:00.741 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:00.741 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1525409 00:35:00.741 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1525409 /var/tmp/bperf.sock 00:35:00.741 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1525409 ']' 00:35:00.741 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:35:00.741 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:00.741 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:35:00.741 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:00.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:00.741 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:00.741 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:00.741 [2024-11-02 14:50:52.655721] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:35:00.741 [2024-11-02 14:50:52.655807] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1525409 ] 00:35:00.741 [2024-11-02 14:50:52.714299] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:00.999 [2024-11-02 14:50:52.801836] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:35:00.999 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:00.999 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:35:00.999 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:00.999 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:00.999 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:01.257 14:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:01.257 14:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:01.822 nvme0n1 00:35:01.822 14:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:01.822 14:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:01.822 Running I/O for 2 seconds... 
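Each clean-digest run drives I/O from its own bdevperf instance over a separate RPC socket. Stripped of the trace prefixes and with repository paths shortened, the first run (randread, 4 KiB blocks, queue depth 128) boils down to:

    # bdevperf starts idle (-z --wait-for-rpc), then gets configured over /var/tmp/bperf.sock
    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    ./scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    # --ddgst turns on the NVMe/TCP data digest for the attached controller
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The three later runs only vary the -w/-o/-q triple (randread/randwrite, 4096/131072, 128/16).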
00:35:04.130 18324.00 IOPS, 71.58 MiB/s [2024-11-02T13:50:56.185Z] 18605.00 IOPS, 72.68 MiB/s 00:35:04.130 Latency(us) 00:35:04.130 [2024-11-02T13:50:56.185Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:04.130 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:04.130 nvme0n1 : 2.01 18617.66 72.73 0.00 0.00 6866.79 3422.44 16117.00 00:35:04.130 [2024-11-02T13:50:56.185Z] =================================================================================================================== 00:35:04.130 [2024-11-02T13:50:56.185Z] Total : 18617.66 72.73 0.00 0.00 6866.79 3422.44 16117.00 00:35:04.130 { 00:35:04.130 "results": [ 00:35:04.130 { 00:35:04.130 "job": "nvme0n1", 00:35:04.130 "core_mask": "0x2", 00:35:04.130 "workload": "randread", 00:35:04.130 "status": "finished", 00:35:04.130 "queue_depth": 128, 00:35:04.130 "io_size": 4096, 00:35:04.130 "runtime": 2.005515, 00:35:04.130 "iops": 18617.661797593137, 00:35:04.130 "mibps": 72.72524139684819, 00:35:04.130 "io_failed": 0, 00:35:04.130 "io_timeout": 0, 00:35:04.130 "avg_latency_us": 6866.787497911968, 00:35:04.130 "min_latency_us": 3422.4355555555558, 00:35:04.130 "max_latency_us": 16117.001481481482 00:35:04.130 } 00:35:04.130 ], 00:35:04.130 "core_count": 1 00:35:04.130 } 00:35:04.130 14:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:04.130 14:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:04.130 14:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:04.130 14:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:04.130 | select(.opcode=="crc32c") 00:35:04.130 | "\(.module_name) \(.executed)"' 00:35:04.130 14:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:04.130 14:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:04.130 14:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:04.130 14:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:04.130 14:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:04.130 14:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1525409 00:35:04.130 14:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1525409 ']' 00:35:04.130 14:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1525409 00:35:04.130 14:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:35:04.130 14:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:04.131 14:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1525409 00:35:04.131 14:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:04.131 14:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' 
reactor_1 = sudo ']' 00:35:04.131 14:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1525409' 00:35:04.131 killing process with pid 1525409 00:35:04.131 14:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1525409 00:35:04.131 Received shutdown signal, test time was about 2.000000 seconds 00:35:04.131 00:35:04.131 Latency(us) 00:35:04.131 [2024-11-02T13:50:56.186Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:04.131 [2024-11-02T13:50:56.186Z] =================================================================================================================== 00:35:04.131 [2024-11-02T13:50:56.186Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:04.131 14:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1525409 00:35:04.389 14:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:35:04.389 14:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:04.389 14:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:04.389 14:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:35:04.389 14:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:35:04.389 14:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:35:04.389 14:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:04.389 14:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1525898 00:35:04.389 14:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:35:04.389 14:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1525898 /var/tmp/bperf.sock 00:35:04.389 14:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1525898 ']' 00:35:04.389 14:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:04.389 14:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:04.389 14:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:04.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:04.389 14:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:04.389 14:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:04.389 [2024-11-02 14:50:56.390193] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:35:04.389 [2024-11-02 14:50:56.390296] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1525898 ] 00:35:04.389 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:04.389 Zero copy mechanism will not be used. 00:35:04.647 [2024-11-02 14:50:56.452507] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:04.647 [2024-11-02 14:50:56.542744] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:35:04.647 14:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:04.647 14:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:35:04.647 14:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:04.647 14:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:04.647 14:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:05.213 14:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:05.213 14:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:05.471 nvme0n1 00:35:05.471 14:50:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:05.471 14:50:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:05.471 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:05.471 Zero copy mechanism will not be used. 00:35:05.471 Running I/O for 2 seconds... 
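After each run, the harness checks that the digest work was actually performed and by the expected accel module; with scan_dsa=false the expectation is the software crc32c path. The check traced above is roughly:

    # accel_get_stats on the bperf socket, filtered down to the crc32c opcode
    read -r acc_module acc_executed < <(
        ./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
    (( acc_executed > 0 )) && [[ $acc_module == software ]]   # expected module: software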
00:35:07.779 2828.00 IOPS, 353.50 MiB/s [2024-11-02T13:50:59.834Z] 2860.50 IOPS, 357.56 MiB/s 00:35:07.779 Latency(us) 00:35:07.779 [2024-11-02T13:50:59.834Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:07.779 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:35:07.779 nvme0n1 : 2.01 2861.20 357.65 0.00 0.00 5587.66 1353.20 16893.72 00:35:07.779 [2024-11-02T13:50:59.834Z] =================================================================================================================== 00:35:07.779 [2024-11-02T13:50:59.834Z] Total : 2861.20 357.65 0.00 0.00 5587.66 1353.20 16893.72 00:35:07.779 { 00:35:07.779 "results": [ 00:35:07.779 { 00:35:07.779 "job": "nvme0n1", 00:35:07.779 "core_mask": "0x2", 00:35:07.779 "workload": "randread", 00:35:07.779 "status": "finished", 00:35:07.779 "queue_depth": 16, 00:35:07.779 "io_size": 131072, 00:35:07.779 "runtime": 2.005106, 00:35:07.779 "iops": 2861.195368224922, 00:35:07.779 "mibps": 357.6494210281152, 00:35:07.779 "io_failed": 0, 00:35:07.779 "io_timeout": 0, 00:35:07.779 "avg_latency_us": 5587.664942962833, 00:35:07.779 "min_latency_us": 1353.197037037037, 00:35:07.779 "max_latency_us": 16893.724444444444 00:35:07.779 } 00:35:07.779 ], 00:35:07.779 "core_count": 1 00:35:07.779 } 00:35:07.779 14:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:07.779 14:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:07.779 14:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:07.779 14:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:07.779 14:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:07.779 | select(.opcode=="crc32c") 00:35:07.779 | "\(.module_name) \(.executed)"' 00:35:07.779 14:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:07.779 14:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:07.779 14:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:07.779 14:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:07.779 14:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1525898 00:35:07.779 14:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1525898 ']' 00:35:07.779 14:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1525898 00:35:07.779 14:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:35:07.779 14:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:07.779 14:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1525898 00:35:07.779 14:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:07.779 14:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' 
reactor_1 = sudo ']' 00:35:07.779 14:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1525898' 00:35:07.779 killing process with pid 1525898 00:35:07.779 14:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1525898 00:35:07.779 Received shutdown signal, test time was about 2.000000 seconds 00:35:07.779 00:35:07.779 Latency(us) 00:35:07.779 [2024-11-02T13:50:59.834Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:07.779 [2024-11-02T13:50:59.834Z] =================================================================================================================== 00:35:07.779 [2024-11-02T13:50:59.834Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:07.779 14:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1525898 00:35:08.036 14:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:35:08.036 14:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:08.036 14:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:08.036 14:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:35:08.036 14:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:35:08.036 14:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:35:08.036 14:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:08.036 14:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1526336 00:35:08.036 14:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:35:08.036 14:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1526336 /var/tmp/bperf.sock 00:35:08.036 14:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1526336 ']' 00:35:08.036 14:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:08.036 14:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:08.036 14:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:08.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:08.036 14:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:08.036 14:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:08.036 [2024-11-02 14:51:00.033984] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:35:08.036 [2024-11-02 14:51:00.034067] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1526336 ] 00:35:08.294 [2024-11-02 14:51:00.098253] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:08.294 [2024-11-02 14:51:00.193156] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:35:08.294 14:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:08.294 14:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:35:08.294 14:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:08.294 14:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:08.294 14:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:08.862 14:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:08.862 14:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:09.166 nvme0n1 00:35:09.166 14:51:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:09.166 14:51:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:09.450 Running I/O for 2 seconds... 
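Between runs, each bdevperf instance is shut down by the killprocess helper whose trace brackets the summaries above (pids 1525409, 1525898, ...). A rough sketch of the path taken here; the real autotest_common.sh helper covers more corner cases:

    killprocess() {
        local pid=$1 process_name
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                            # still running?
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")   # reactor_1 for these pids
        fi
        # the real helper special-cases process_name = sudo; not reproduced here
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }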
00:35:11.317 19913.00 IOPS, 77.79 MiB/s [2024-11-02T13:51:03.372Z] 20163.50 IOPS, 78.76 MiB/s 00:35:11.317 Latency(us) 00:35:11.317 [2024-11-02T13:51:03.372Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:11.317 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:11.317 nvme0n1 : 2.00 20151.75 78.72 0.00 0.00 6341.91 3276.80 13592.65 00:35:11.317 [2024-11-02T13:51:03.372Z] =================================================================================================================== 00:35:11.317 [2024-11-02T13:51:03.372Z] Total : 20151.75 78.72 0.00 0.00 6341.91 3276.80 13592.65 00:35:11.317 { 00:35:11.317 "results": [ 00:35:11.317 { 00:35:11.317 "job": "nvme0n1", 00:35:11.317 "core_mask": "0x2", 00:35:11.317 "workload": "randwrite", 00:35:11.317 "status": "finished", 00:35:11.317 "queue_depth": 128, 00:35:11.317 "io_size": 4096, 00:35:11.317 "runtime": 2.004392, 00:35:11.317 "iops": 20151.746764106025, 00:35:11.317 "mibps": 78.71776079728916, 00:35:11.317 "io_failed": 0, 00:35:11.317 "io_timeout": 0, 00:35:11.317 "avg_latency_us": 6341.914292379129, 00:35:11.317 "min_latency_us": 3276.8, 00:35:11.317 "max_latency_us": 13592.651851851851 00:35:11.317 } 00:35:11.317 ], 00:35:11.317 "core_count": 1 00:35:11.317 } 00:35:11.317 14:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:11.317 14:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:11.317 14:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:11.317 14:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:11.317 | select(.opcode=="crc32c") 00:35:11.317 | "\(.module_name) \(.executed)"' 00:35:11.317 14:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:11.576 14:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:11.576 14:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:11.576 14:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:11.576 14:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:11.576 14:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1526336 00:35:11.576 14:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1526336 ']' 00:35:11.576 14:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1526336 00:35:11.576 14:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:35:11.576 14:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:11.576 14:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1526336 00:35:11.576 14:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:11.576 14:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 
= sudo ']' 00:35:11.576 14:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1526336' 00:35:11.576 killing process with pid 1526336 00:35:11.576 14:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1526336 00:35:11.576 Received shutdown signal, test time was about 2.000000 seconds 00:35:11.576 00:35:11.576 Latency(us) 00:35:11.576 [2024-11-02T13:51:03.631Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:11.576 [2024-11-02T13:51:03.631Z] =================================================================================================================== 00:35:11.576 [2024-11-02T13:51:03.631Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:11.576 14:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1526336 00:35:11.834 14:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:35:11.834 14:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:11.834 14:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:11.834 14:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:35:11.834 14:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:35:11.834 14:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:35:11.834 14:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:11.834 14:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1526866 00:35:11.834 14:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:35:11.834 14:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1526866 /var/tmp/bperf.sock 00:35:11.834 14:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1526866 ']' 00:35:11.834 14:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:11.834 14:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:11.834 14:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:11.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:11.834 14:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:11.834 14:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:11.834 [2024-11-02 14:51:03.870198] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:35:11.834 [2024-11-02 14:51:03.870318] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1526866 ] 00:35:11.834 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:11.834 Zero copy mechanism will not be used. 00:35:12.093 [2024-11-02 14:51:03.932116] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:12.093 [2024-11-02 14:51:04.021929] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:35:12.093 14:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:12.093 14:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:35:12.093 14:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:12.093 14:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:12.093 14:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:12.659 14:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:12.660 14:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:12.917 nvme0n1 00:35:12.918 14:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:12.918 14:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:12.918 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:12.918 Zero copy mechanism will not be used. 00:35:12.918 Running I/O for 2 seconds... 
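The JSON block printed with every summary is also machine-readable. Purely for illustration, if one of them were saved to a file (result.json is an assumed name, not something the test writes), the headline numbers could be pulled out with jq:

    # hypothetical post-processing; field names match the blocks shown above
    jq -r '.results[] |
           "\(.job): \(.iops) IOPS, avg \(.avg_latency_us) us (qd \(.queue_depth), \(.io_size) B)"' \
       result.json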
00:35:15.222 3071.00 IOPS, 383.88 MiB/s [2024-11-02T13:51:07.277Z] 3167.50 IOPS, 395.94 MiB/s 00:35:15.222 Latency(us) 00:35:15.222 [2024-11-02T13:51:07.277Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:15.222 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:35:15.222 nvme0n1 : 2.01 3165.74 395.72 0.00 0.00 5043.01 3495.25 16019.91 00:35:15.222 [2024-11-02T13:51:07.277Z] =================================================================================================================== 00:35:15.222 [2024-11-02T13:51:07.277Z] Total : 3165.74 395.72 0.00 0.00 5043.01 3495.25 16019.91 00:35:15.222 { 00:35:15.222 "results": [ 00:35:15.222 { 00:35:15.222 "job": "nvme0n1", 00:35:15.222 "core_mask": "0x2", 00:35:15.222 "workload": "randwrite", 00:35:15.222 "status": "finished", 00:35:15.222 "queue_depth": 16, 00:35:15.222 "io_size": 131072, 00:35:15.222 "runtime": 2.007111, 00:35:15.222 "iops": 3165.744196509311, 00:35:15.222 "mibps": 395.7180245636639, 00:35:15.222 "io_failed": 0, 00:35:15.222 "io_timeout": 0, 00:35:15.222 "avg_latency_us": 5043.009295981535, 00:35:15.222 "min_latency_us": 3495.2533333333336, 00:35:15.222 "max_latency_us": 16019.91111111111 00:35:15.222 } 00:35:15.222 ], 00:35:15.222 "core_count": 1 00:35:15.222 } 00:35:15.222 14:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:15.222 14:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:15.222 14:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:15.222 14:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:15.222 | select(.opcode=="crc32c") 00:35:15.222 | "\(.module_name) \(.executed)"' 00:35:15.222 14:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:15.222 14:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:15.222 14:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:15.222 14:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:15.222 14:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:15.222 14:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1526866 00:35:15.222 14:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1526866 ']' 00:35:15.222 14:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1526866 00:35:15.222 14:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:35:15.222 14:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:15.222 14:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1526866 00:35:15.222 14:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:15.222 14:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # 
'[' reactor_1 = sudo ']' 00:35:15.222 14:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1526866' 00:35:15.222 killing process with pid 1526866 00:35:15.222 14:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1526866 00:35:15.222 Received shutdown signal, test time was about 2.000000 seconds 00:35:15.222 00:35:15.222 Latency(us) 00:35:15.222 [2024-11-02T13:51:07.277Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:15.222 [2024-11-02T13:51:07.277Z] =================================================================================================================== 00:35:15.222 [2024-11-02T13:51:07.277Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:15.222 14:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1526866 00:35:15.480 14:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1525383 00:35:15.480 14:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1525383 ']' 00:35:15.480 14:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1525383 00:35:15.480 14:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:35:15.480 14:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:15.480 14:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1525383 00:35:15.480 14:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:15.480 14:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:15.480 14:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1525383' 00:35:15.480 killing process with pid 1525383 00:35:15.480 14:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1525383 00:35:15.480 14:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1525383 00:35:15.738 00:35:15.738 real 0m15.583s 00:35:15.738 user 0m31.130s 00:35:15.738 sys 0m4.222s 00:35:15.738 14:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:15.738 14:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:15.738 ************************************ 00:35:15.738 END TEST nvmf_digest_clean 00:35:15.738 ************************************ 00:35:15.738 14:51:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:35:15.738 14:51:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:15.738 14:51:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:15.738 14:51:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:15.996 ************************************ 00:35:15.996 START TEST nvmf_digest_error 00:35:15.996 ************************************ 00:35:15.996 14:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # 
run_digest_error 00:35:15.996 14:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:35:15.996 14:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:35:15.996 14:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:15.996 14:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:15.996 14:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@505 -- # nvmfpid=1527710 00:35:15.996 14:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:35:15.996 14:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@506 -- # waitforlisten 1527710 00:35:15.996 14:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1527710 ']' 00:35:15.996 14:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:15.996 14:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:15.996 14:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:15.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:15.996 14:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:15.996 14:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:15.996 [2024-11-02 14:51:07.846578] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:35:15.996 [2024-11-02 14:51:07.846666] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:15.996 [2024-11-02 14:51:07.912058] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:15.996 [2024-11-02 14:51:07.998351] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:15.996 [2024-11-02 14:51:07.998421] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:15.996 [2024-11-02 14:51:07.998435] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:15.996 [2024-11-02 14:51:07.998447] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:15.996 [2024-11-02 14:51:07.998456] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
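The waitforlisten step traced above blocks until the freshly started target answers on /var/tmp/spdk.sock. A minimal stand-in for the idea (not the real autotest_common.sh helper, which also bounds the retries):

    # assumption-laden sketch: poll the RPC socket until it responds or the pid dies
    wait_for_rpc_socket() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock}
        while kill -0 "$pid" 2>/dev/null; do
            ./scripts/rpc.py -s "$sock" -t 1 rpc_get_methods >/dev/null 2>&1 && return 0
            sleep 0.5
        done
        return 1
    }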
00:35:15.996 [2024-11-02 14:51:07.998483] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:35:16.254 14:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:16.254 14:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:35:16.254 14:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:35:16.255 14:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:16.255 14:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:16.255 14:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:16.255 14:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:35:16.255 14:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:16.255 14:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:16.255 [2024-11-02 14:51:08.107148] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:35:16.255 14:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:16.255 14:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:35:16.255 14:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:35:16.255 14:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:16.255 14:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:16.255 null0 00:35:16.255 [2024-11-02 14:51:08.219789] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:16.255 [2024-11-02 14:51:08.244026] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:16.255 14:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:16.255 14:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:35:16.255 14:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:16.255 14:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:35:16.255 14:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:35:16.255 14:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:35:16.255 14:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1527798 00:35:16.255 14:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:35:16.255 14:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1527798 /var/tmp/bperf.sock 00:35:16.255 14:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1527798 ']' 
00:35:16.255 14:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:16.255 14:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:16.255 14:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:16.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:16.255 14:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:16.255 14:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:16.255 [2024-11-02 14:51:08.295415] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:35:16.255 [2024-11-02 14:51:08.295489] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1527798 ] 00:35:16.513 [2024-11-02 14:51:08.357934] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:16.513 [2024-11-02 14:51:08.452088] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:35:16.513 14:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:16.513 14:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:35:16.513 14:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:16.513 14:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:17.078 14:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:17.079 14:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:17.079 14:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:17.079 14:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:17.079 14:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:17.079 14:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:17.336 nvme0n1 00:35:17.336 14:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:35:17.336 14:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:17.336 14:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
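This is where the error variant departs from the clean runs: crc32c on the target side is routed through the accel "error" module while the target is still in --wait-for-rpc state (accel_assign_opc above), injection is first disabled, the controller is attached with --ddgst, and corruption is switched on just before the I/O starts. The data digest errors and COMMAND TRANSIENT TRANSPORT ERROR completions that follow are the intended outcome. Condensed, with paths shortened (rpc_cmd talks to the target's default /var/tmp/spdk.sock, bperf_rpc to /var/tmp/bperf.sock):

    # target side, done once at start-up: compute crc32c through the error-injection module
    ./scripts/rpc.py accel_assign_opc -o crc32c -m error

    # per run: error stats + retries on the initiator, injection off while attaching,
    # then corrupt crc32c results (-i 256 as traced above) and drive the workload
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    ./scripts/rpc.py accel_error_inject_error -o crc32c -t disable
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    ./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests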
00:35:17.336 14:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:17.336 14:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:17.336 14:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:17.594 Running I/O for 2 seconds... 00:35:17.594 [2024-11-02 14:51:09.502701] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:17.594 [2024-11-02 14:51:09.502760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.594 [2024-11-02 14:51:09.502782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.594 [2024-11-02 14:51:09.517443] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:17.594 [2024-11-02 14:51:09.517479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.594 [2024-11-02 14:51:09.517498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.594 [2024-11-02 14:51:09.529789] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:17.594 [2024-11-02 14:51:09.529841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.594 [2024-11-02 14:51:09.529860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.594 [2024-11-02 14:51:09.546868] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:17.594 [2024-11-02 14:51:09.546916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:14415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.594 [2024-11-02 14:51:09.546934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.594 [2024-11-02 14:51:09.558961] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:17.594 [2024-11-02 14:51:09.558997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:10423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.594 [2024-11-02 14:51:09.559023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.594 [2024-11-02 14:51:09.573120] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:17.594 [2024-11-02 14:51:09.573154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:3645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.594 [2024-11-02 14:51:09.573173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.594 [2024-11-02 14:51:09.591708] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:17.594 [2024-11-02 14:51:09.591741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:18181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.594 [2024-11-02 14:51:09.591759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.594 [2024-11-02 14:51:09.607315] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:17.594 [2024-11-02 14:51:09.607347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:11074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.594 [2024-11-02 14:51:09.607365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.594 [2024-11-02 14:51:09.619184] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:17.594 [2024-11-02 14:51:09.619216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:13016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.594 [2024-11-02 14:51:09.619233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.594 [2024-11-02 14:51:09.634485] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:17.594 [2024-11-02 14:51:09.634517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.594 [2024-11-02 14:51:09.634536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.594 [2024-11-02 14:51:09.647278] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:17.594 [2024-11-02 14:51:09.647319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:19273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.594 [2024-11-02 14:51:09.647337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.853 [2024-11-02 14:51:09.659360] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:17.853 [2024-11-02 14:51:09.659392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:14358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.853 [2024-11-02 14:51:09.659410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.853 [2024-11-02 14:51:09.673127] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:17.853 [2024-11-02 14:51:09.673159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:7312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.853 [2024-11-02 14:51:09.673177] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.853 [2024-11-02 14:51:09.688992] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:17.853 [2024-11-02 14:51:09.689029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:5560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.853 [2024-11-02 14:51:09.689048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.853 [2024-11-02 14:51:09.701670] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:17.853 [2024-11-02 14:51:09.701716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:15482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.853 [2024-11-02 14:51:09.701734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.853 [2024-11-02 14:51:09.715799] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:17.853 [2024-11-02 14:51:09.715831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:3822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.853 [2024-11-02 14:51:09.715848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.853 [2024-11-02 14:51:09.731385] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:17.853 [2024-11-02 14:51:09.731417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.853 [2024-11-02 14:51:09.731434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.853 [2024-11-02 14:51:09.743556] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:17.853 [2024-11-02 14:51:09.743590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.853 [2024-11-02 14:51:09.743608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.853 [2024-11-02 14:51:09.758597] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:17.853 [2024-11-02 14:51:09.758628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:25326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.853 [2024-11-02 14:51:09.758646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.853 [2024-11-02 14:51:09.770930] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:17.853 [2024-11-02 14:51:09.770966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.853 [2024-11-02 
14:51:09.770985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.853 [2024-11-02 14:51:09.785448] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:17.853 [2024-11-02 14:51:09.785479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:10975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.853 [2024-11-02 14:51:09.785497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.853 [2024-11-02 14:51:09.800607] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:17.853 [2024-11-02 14:51:09.800639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:2994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.853 [2024-11-02 14:51:09.800657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.853 [2024-11-02 14:51:09.812725] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:17.853 [2024-11-02 14:51:09.812756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:7718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.853 [2024-11-02 14:51:09.812774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.853 [2024-11-02 14:51:09.829341] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:17.853 [2024-11-02 14:51:09.829374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:12947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.853 [2024-11-02 14:51:09.829391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.853 [2024-11-02 14:51:09.845063] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:17.853 [2024-11-02 14:51:09.845095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:6655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.853 [2024-11-02 14:51:09.845113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.853 [2024-11-02 14:51:09.857279] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:17.853 [2024-11-02 14:51:09.857310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:13361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.853 [2024-11-02 14:51:09.857328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.853 [2024-11-02 14:51:09.870022] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:17.853 [2024-11-02 14:51:09.870053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:4824 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:17.854 [2024-11-02 14:51:09.870069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.854 [2024-11-02 14:51:09.885444] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:17.854 [2024-11-02 14:51:09.885476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:9488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.854 [2024-11-02 14:51:09.885501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.854 [2024-11-02 14:51:09.899019] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:17.854 [2024-11-02 14:51:09.899055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:4547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.854 [2024-11-02 14:51:09.899075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.112 [2024-11-02 14:51:09.911053] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:18.112 [2024-11-02 14:51:09.911104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.112 [2024-11-02 14:51:09.911122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.112 [2024-11-02 14:51:09.926732] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:18.112 [2024-11-02 14:51:09.926769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:13541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.112 [2024-11-02 14:51:09.926788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.112 [2024-11-02 14:51:09.941424] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:18.112 [2024-11-02 14:51:09.941460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.112 [2024-11-02 14:51:09.941481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.112 [2024-11-02 14:51:09.958001] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:18.112 [2024-11-02 14:51:09.958036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:14055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.112 [2024-11-02 14:51:09.958056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.112 [2024-11-02 14:51:09.970130] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:18.112 [2024-11-02 14:51:09.970165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:33 nsid:1 lba:22039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.112 [2024-11-02 14:51:09.970186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.112 [2024-11-02 14:51:09.984061] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:18.112 [2024-11-02 14:51:09.984091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:23906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.112 [2024-11-02 14:51:09.984108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.112 [2024-11-02 14:51:09.996794] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:18.112 [2024-11-02 14:51:09.996829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:13241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.112 [2024-11-02 14:51:09.996849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.112 [2024-11-02 14:51:10.010679] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:18.112 [2024-11-02 14:51:10.010763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:11862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.112 [2024-11-02 14:51:10.010797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.113 [2024-11-02 14:51:10.030356] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:18.113 [2024-11-02 14:51:10.030408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:22045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.113 [2024-11-02 14:51:10.030429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.113 [2024-11-02 14:51:10.047500] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:18.113 [2024-11-02 14:51:10.047536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:8338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.113 [2024-11-02 14:51:10.047555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.113 [2024-11-02 14:51:10.062307] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:18.113 [2024-11-02 14:51:10.062341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:9142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.113 [2024-11-02 14:51:10.062359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.113 [2024-11-02 14:51:10.074490] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:18.113 [2024-11-02 14:51:10.074527] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:11053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.113 [2024-11-02 14:51:10.074547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.113 [2024-11-02 14:51:10.090804] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:18.113 [2024-11-02 14:51:10.090837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:4116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.113 [2024-11-02 14:51:10.090855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.113 [2024-11-02 14:51:10.102989] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:18.113 [2024-11-02 14:51:10.103025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:21482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.113 [2024-11-02 14:51:10.103045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.113 [2024-11-02 14:51:10.118803] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:18.113 [2024-11-02 14:51:10.118837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:2514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.113 [2024-11-02 14:51:10.118855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.113 [2024-11-02 14:51:10.134246] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:18.113 [2024-11-02 14:51:10.134285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:11445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.113 [2024-11-02 14:51:10.134304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.113 [2024-11-02 14:51:10.148470] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:18.113 [2024-11-02 14:51:10.148504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:9092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.113 [2024-11-02 14:51:10.148522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.113 [2024-11-02 14:51:10.160180] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:18.113 [2024-11-02 14:51:10.160231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:17419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.113 [2024-11-02 14:51:10.160248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.371 [2024-11-02 14:51:10.175569] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 
00:35:18.372 [2024-11-02 14:51:10.175601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.372 [2024-11-02 14:51:10.175618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.372 [2024-11-02 14:51:10.190452] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:18.372 [2024-11-02 14:51:10.190485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:3180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.372 [2024-11-02 14:51:10.190502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.372 [2024-11-02 14:51:10.201834] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:18.372 [2024-11-02 14:51:10.201867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.372 [2024-11-02 14:51:10.201884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.372 [2024-11-02 14:51:10.217572] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:18.372 [2024-11-02 14:51:10.217604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:5833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.372 [2024-11-02 14:51:10.217622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.372 [2024-11-02 14:51:10.233626] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:18.372 [2024-11-02 14:51:10.233658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:3476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.372 [2024-11-02 14:51:10.233675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.372 [2024-11-02 14:51:10.244842] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:18.372 [2024-11-02 14:51:10.244874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:6139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.372 [2024-11-02 14:51:10.244892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.372 [2024-11-02 14:51:10.259253] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:18.372 [2024-11-02 14:51:10.259297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:21853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.372 [2024-11-02 14:51:10.259316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.372 [2024-11-02 14:51:10.270708] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1285b10) 00:35:18.372 [2024-11-02 14:51:10.270755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:11562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.372 [2024-11-02 14:51:10.270776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.372 [2024-11-02 14:51:10.287089] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:18.372 [2024-11-02 14:51:10.287125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:14474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.372 [2024-11-02 14:51:10.287145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.372 [2024-11-02 14:51:10.302560] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:18.372 [2024-11-02 14:51:10.302593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:10048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.372 [2024-11-02 14:51:10.302610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.372 [2024-11-02 14:51:10.314950] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:18.372 [2024-11-02 14:51:10.314982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.372 [2024-11-02 14:51:10.315000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.372 [2024-11-02 14:51:10.329111] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:18.372 [2024-11-02 14:51:10.329147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:3422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.372 [2024-11-02 14:51:10.329166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.372 [2024-11-02 14:51:10.343744] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:18.372 [2024-11-02 14:51:10.343792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:8253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.372 [2024-11-02 14:51:10.343813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.372 [2024-11-02 14:51:10.356119] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:18.372 [2024-11-02 14:51:10.356155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:19013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.372 [2024-11-02 14:51:10.356189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.372 [2024-11-02 14:51:10.368874] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:18.372 [2024-11-02 14:51:10.368906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:10657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.372 [2024-11-02 14:51:10.368923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.372 [2024-11-02 14:51:10.382899] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:18.372 [2024-11-02 14:51:10.382930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:15563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.372 [2024-11-02 14:51:10.382948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.372 [2024-11-02 14:51:10.394949] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:18.372 [2024-11-02 14:51:10.394985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:17676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.372 [2024-11-02 14:51:10.395004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.372 [2024-11-02 14:51:10.410010] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:18.372 [2024-11-02 14:51:10.410057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:8007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.372 [2024-11-02 14:51:10.410075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.372 [2024-11-02 14:51:10.425550] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:18.372 [2024-11-02 14:51:10.425581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:5230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.372 [2024-11-02 14:51:10.425616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.631 [2024-11-02 14:51:10.439913] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:18.631 [2024-11-02 14:51:10.439960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:18947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.631 [2024-11-02 14:51:10.439981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.631 [2024-11-02 14:51:10.453739] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:18.631 [2024-11-02 14:51:10.453788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:23047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.631 [2024-11-02 14:51:10.453808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:35:18.631 [2024-11-02 14:51:10.467274] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:18.631 [2024-11-02 14:51:10.467306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:2223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.631 [2024-11-02 14:51:10.467324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.631 [2024-11-02 14:51:10.479723] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:18.631 [2024-11-02 14:51:10.479755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.631 [2024-11-02 14:51:10.479773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.631 17918.00 IOPS, 69.99 MiB/s [2024-11-02T13:51:10.686Z] [2024-11-02 14:51:10.497268] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:18.631 [2024-11-02 14:51:10.497300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.631 [2024-11-02 14:51:10.497325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.631 [2024-11-02 14:51:10.513448] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:18.631 [2024-11-02 14:51:10.513481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:15087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.631 [2024-11-02 14:51:10.513499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.631 [2024-11-02 14:51:10.524542] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:18.631 [2024-11-02 14:51:10.524591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:4891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.631 [2024-11-02 14:51:10.524610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.631 [2024-11-02 14:51:10.540187] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:18.631 [2024-11-02 14:51:10.540223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:2976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.631 [2024-11-02 14:51:10.540243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.631 [2024-11-02 14:51:10.554390] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:18.631 [2024-11-02 14:51:10.554426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:24086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.631 [2024-11-02 14:51:10.554445] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.631 [2024-11-02 14:51:10.567924] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:18.631 [2024-11-02 14:51:10.567956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:23088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.631 [2024-11-02 14:51:10.567974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.631 [2024-11-02 14:51:10.580040] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:18.631 [2024-11-02 14:51:10.580075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.631 [2024-11-02 14:51:10.580094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.631 [2024-11-02 14:51:10.593821] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:18.631 [2024-11-02 14:51:10.593853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.631 [2024-11-02 14:51:10.593870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.631 [2024-11-02 14:51:10.609672] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:18.631 [2024-11-02 14:51:10.609704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:23046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.631 [2024-11-02 14:51:10.609736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.631 [2024-11-02 14:51:10.621438] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:18.631 [2024-11-02 14:51:10.621478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.631 [2024-11-02 14:51:10.621496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.631 [2024-11-02 14:51:10.635368] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:18.631 [2024-11-02 14:51:10.635401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:19545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.631 [2024-11-02 14:51:10.635420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.631 [2024-11-02 14:51:10.648032] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:18.631 [2024-11-02 14:51:10.648081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:1109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:18.631 [2024-11-02 14:51:10.648098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.631 [2024-11-02 14:51:10.660461] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:18.631 [2024-11-02 14:51:10.660495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:16880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.631 [2024-11-02 14:51:10.660513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.631 [2024-11-02 14:51:10.676176] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:18.631 [2024-11-02 14:51:10.676210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:4851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.631 [2024-11-02 14:51:10.676230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.890 [2024-11-02 14:51:10.687301] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:18.890 [2024-11-02 14:51:10.687333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.890 [2024-11-02 14:51:10.687365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.890 [2024-11-02 14:51:10.702058] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:18.890 [2024-11-02 14:51:10.702088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:13785 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.890 [2024-11-02 14:51:10.702105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.890 [2024-11-02 14:51:10.714429] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:18.890 [2024-11-02 14:51:10.714461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:6232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.890 [2024-11-02 14:51:10.714479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.890 [2024-11-02 14:51:10.730755] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:18.890 [2024-11-02 14:51:10.730793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:8387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.890 [2024-11-02 14:51:10.730813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.890 [2024-11-02 14:51:10.746265] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:18.890 [2024-11-02 14:51:10.746308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20381 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.890 [2024-11-02 14:51:10.746325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.890 [2024-11-02 14:51:10.758845] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:18.890 [2024-11-02 14:51:10.758880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:15112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.890 [2024-11-02 14:51:10.758900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.890 [2024-11-02 14:51:10.773282] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:18.890 [2024-11-02 14:51:10.773318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:22768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.890 [2024-11-02 14:51:10.773338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.890 [2024-11-02 14:51:10.787941] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:18.890 [2024-11-02 14:51:10.787974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:13708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.890 [2024-11-02 14:51:10.787992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.890 [2024-11-02 14:51:10.801666] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:18.890 [2024-11-02 14:51:10.801703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.890 [2024-11-02 14:51:10.801722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.890 [2024-11-02 14:51:10.817425] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:18.890 [2024-11-02 14:51:10.817457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.890 [2024-11-02 14:51:10.817474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.890 [2024-11-02 14:51:10.829674] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:18.890 [2024-11-02 14:51:10.829708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:7836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.890 [2024-11-02 14:51:10.829726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.890 [2024-11-02 14:51:10.843919] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:18.891 [2024-11-02 14:51:10.843950] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.891 [2024-11-02 14:51:10.843968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.891 [2024-11-02 14:51:10.855574] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:18.891 [2024-11-02 14:51:10.855614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:11394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.891 [2024-11-02 14:51:10.855632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.891 [2024-11-02 14:51:10.870687] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:18.891 [2024-11-02 14:51:10.870723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:21304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.891 [2024-11-02 14:51:10.870742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.891 [2024-11-02 14:51:10.885424] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:18.891 [2024-11-02 14:51:10.885456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:21872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.891 [2024-11-02 14:51:10.885473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.891 [2024-11-02 14:51:10.897294] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:18.891 [2024-11-02 14:51:10.897334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.891 [2024-11-02 14:51:10.897351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.891 [2024-11-02 14:51:10.911509] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:18.891 [2024-11-02 14:51:10.911540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.891 [2024-11-02 14:51:10.911573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.891 [2024-11-02 14:51:10.928178] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:18.891 [2024-11-02 14:51:10.928209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:7895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.891 [2024-11-02 14:51:10.928226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.891 [2024-11-02 14:51:10.939943] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 
00:35:18.891 [2024-11-02 14:51:10.939976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:15535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.891 [2024-11-02 14:51:10.940009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.149 [2024-11-02 14:51:10.957968] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:19.149 [2024-11-02 14:51:10.958003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:2025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.149 [2024-11-02 14:51:10.958023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.149 [2024-11-02 14:51:10.971218] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:19.149 [2024-11-02 14:51:10.971249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:9637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.149 [2024-11-02 14:51:10.971277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.149 [2024-11-02 14:51:10.982762] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:19.149 [2024-11-02 14:51:10.982791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:16222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.149 [2024-11-02 14:51:10.982807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.149 [2024-11-02 14:51:10.998686] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:19.149 [2024-11-02 14:51:10.998736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:6348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.149 [2024-11-02 14:51:10.998756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.149 [2024-11-02 14:51:11.014285] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:19.149 [2024-11-02 14:51:11.014317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:11313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.149 [2024-11-02 14:51:11.014335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.149 [2024-11-02 14:51:11.025993] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10) 00:35:19.149 [2024-11-02 14:51:11.026027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:22384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.149 [2024-11-02 14:51:11.026047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.149 [2024-11-02 14:51:11.039941] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1285b10)
00:35:19.149 [2024-11-02 14:51:11.039978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:25431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:19.149 [2024-11-02 14:51:11.039997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:19.149 - 00:35:19.666 [2024-11-02 14:51:11.055021 - 14:51:11.489225] (the same error/notice/completion triplet repeats for every in-flight READ on qid:1 of tqpair=(0x1285b10) while crc32c corruption is injected; only the timestamps and the cid/lba/sqhd fields vary)
00:35:19.666 18097.00 IOPS, 70.69 MiB/s [2024-11-02T13:51:11.721Z]
00:35:19.666 Latency(us)
00:35:19.666 [2024-11-02T13:51:11.721Z] Device Information : runtime(s)    IOPS   MiB/s  Fail/s  TO/s  Average      min       max
00:35:19.666 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:35:19.666 nvme0n1 :                         2.00 18118.78  70.78    0.00  0.00  7055.66  3470.98  21262.79
00:35:19.666 [2024-11-02T13:51:11.721Z] ===================================================================================================================
00:35:19.666 [2024-11-02T13:51:11.721Z] Total :                        18118.78  70.78    0.00  0.00  7055.66  3470.98  21262.79
00:35:19.666 {
00:35:19.666 "results": [
00:35:19.666 {
00:35:19.666 "job": "nvme0n1",
00:35:19.666 "core_mask": "0x2",
00:35:19.666 "workload": "randread",
"status": "finished", 00:35:19.666 "queue_depth": 128, 00:35:19.666 "io_size": 4096, 00:35:19.666 "runtime": 2.00466, 00:35:19.666 "iops": 18118.783235062307, 00:35:19.666 "mibps": 70.77649701196214, 00:35:19.666 "io_failed": 0, 00:35:19.667 "io_timeout": 0, 00:35:19.667 "avg_latency_us": 7055.658817286533, 00:35:19.667 "min_latency_us": 3470.9807407407407, 00:35:19.667 "max_latency_us": 21262.79111111111 00:35:19.667 } 00:35:19.667 ], 00:35:19.667 "core_count": 1 00:35:19.667 } 00:35:19.667 14:51:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:19.667 14:51:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:19.667 14:51:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:19.667 | .driver_specific 00:35:19.667 | .nvme_error 00:35:19.667 | .status_code 00:35:19.667 | .command_transient_transport_error' 00:35:19.667 14:51:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:19.924 14:51:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 142 > 0 )) 00:35:19.924 14:51:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1527798 00:35:19.924 14:51:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1527798 ']' 00:35:19.924 14:51:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1527798 00:35:19.924 14:51:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:35:19.924 14:51:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:19.924 14:51:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1527798 00:35:19.924 14:51:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:19.924 14:51:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:19.924 14:51:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1527798' 00:35:19.924 killing process with pid 1527798 00:35:19.924 14:51:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1527798 00:35:19.924 Received shutdown signal, test time was about 2.000000 seconds 00:35:19.924 00:35:19.924 Latency(us) 00:35:19.924 [2024-11-02T13:51:11.979Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:19.924 [2024-11-02T13:51:11.979Z] =================================================================================================================== 00:35:19.924 [2024-11-02T13:51:11.979Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:19.924 14:51:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1527798 00:35:20.181 14:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:35:20.181 14:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:20.181 14:51:12 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:35:20.181 14:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:35:20.181 14:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:35:20.181 14:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1528360 00:35:20.181 14:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:35:20.181 14:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1528360 /var/tmp/bperf.sock 00:35:20.181 14:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1528360 ']' 00:35:20.181 14:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:20.181 14:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:20.181 14:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:20.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:20.181 14:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:20.181 14:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:20.181 [2024-11-02 14:51:12.095900] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:35:20.181 [2024-11-02 14:51:12.095995] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1528360 ] 00:35:20.181 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:20.181 Zero copy mechanism will not be used. 
00:35:20.181 [2024-11-02 14:51:12.154502] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:20.439 [2024-11-02 14:51:12.239913] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:35:20.439 14:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:20.439 14:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:35:20.439 14:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:20.439 14:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:20.696 14:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:20.696 14:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.696 14:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:20.696 14:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.696 14:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:20.696 14:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:21.262 nvme0n1 00:35:21.262 14:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:35:21.262 14:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:21.262 14:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:21.262 14:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:21.262 14:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:21.262 14:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:21.262 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:21.262 Zero copy mechanism will not be used. 00:35:21.262 Running I/O for 2 seconds... 
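For reference, the flow that host/digest.sh is exercising at this point can be replayed by hand. The sketch below is a minimal reconstruction assembled only from the RPC calls and paths visible in this trace (the SPDK workspace path, the /var/tmp/bperf.sock socket, the 10.0.0.2:4420 target and the nqn.2016-06.io.spdk:cnode1 subsystem are all taken from the log above); it is not the test script itself, and the exact meaning of the accel_error_inject_error and retry-count arguments is left as used in the trace.
#!/usr/bin/env bash
# Sketch of the nvmf_digest_error sequence, assembled from the RPC calls
# traced above. Paths and addresses are assumptions copied from this log.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SOCK=/var/tmp/bperf.sock
RPC="$SPDK_DIR/scripts/rpc.py -s $SOCK"

# 1. Start bdevperf in wait-for-tests mode (-z): 128 KiB random reads, QD 16, 2 s runtime.
"$SPDK_DIR/build/examples/bdevperf" -m 2 -r "$SOCK" -w randread -o 131072 -t 2 -q 16 -z &

# 2. Enable per-controller NVMe error counters and set the bdev retry count
#    with the same arguments the test uses, so injected digest failures show
#    up in the iostat error statistics.
$RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# 3. Attach the NVMe/TCP controller with data digest (--ddgst) enabled, then
#    arm crc32c corruption in the accel layer (arguments as in the trace).
$RPC accel_error_inject_error -o crc32c -t disable
$RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0
$RPC accel_error_inject_error -o crc32c -t corrupt -i 32

# 4. Run the I/O, then count how many commands completed with
#    COMMAND TRANSIENT TRANSPORT ERROR (00/22), as digest.sh does below.
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests
$RPC bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'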
00:35:21.262 [2024-11-02 14:51:13.261700] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390)
00:35:21.262 [2024-11-02 14:51:13.261767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:21.262 [2024-11-02 14:51:13.261791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:21.262 - 00:35:22.041 [2024-11-02 14:51:13.271007 - 14:51:14.024425] (the same triplet repeats for every completed READ on qid:1 of tqpair=(0x1f41390); in these entries cid stays 15, len is 32, lba varies, and sqhd cycles through 0001/0021/0041/0061)
00:35:22.041 [2024-11-02 14:51:14.034345] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390)
00:35:22.041 [2024-11-02 14:51:14.034384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ
sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.041 [2024-11-02 14:51:14.034403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:22.041 [2024-11-02 14:51:14.043820] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.041 [2024-11-02 14:51:14.043870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.041 [2024-11-02 14:51:14.043888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.041 [2024-11-02 14:51:14.053276] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.041 [2024-11-02 14:51:14.053326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.041 [2024-11-02 14:51:14.053343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:22.041 [2024-11-02 14:51:14.063525] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.041 [2024-11-02 14:51:14.063573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.041 [2024-11-02 14:51:14.063591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:22.041 [2024-11-02 14:51:14.073418] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.041 [2024-11-02 14:51:14.073450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.041 [2024-11-02 14:51:14.073467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:22.041 [2024-11-02 14:51:14.083113] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.041 [2024-11-02 14:51:14.083151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.041 [2024-11-02 14:51:14.083173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.041 [2024-11-02 14:51:14.093233] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.041 [2024-11-02 14:51:14.093281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.041 [2024-11-02 14:51:14.093317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:22.300 [2024-11-02 14:51:14.103403] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.300 [2024-11-02 14:51:14.103450] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.300 [2024-11-02 14:51:14.103468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:22.300 [2024-11-02 14:51:14.113198] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.300 [2024-11-02 14:51:14.113235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.300 [2024-11-02 14:51:14.113267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:22.300 [2024-11-02 14:51:14.123008] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.300 [2024-11-02 14:51:14.123046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.300 [2024-11-02 14:51:14.123066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.300 [2024-11-02 14:51:14.132859] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.300 [2024-11-02 14:51:14.132897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.300 [2024-11-02 14:51:14.132918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:22.300 [2024-11-02 14:51:14.142945] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.300 [2024-11-02 14:51:14.142983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.300 [2024-11-02 14:51:14.143004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:22.300 [2024-11-02 14:51:14.152743] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.300 [2024-11-02 14:51:14.152781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.300 [2024-11-02 14:51:14.152801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:22.300 [2024-11-02 14:51:14.162664] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.300 [2024-11-02 14:51:14.162701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.300 [2024-11-02 14:51:14.162722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.300 [2024-11-02 14:51:14.172788] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 
00:35:22.300 [2024-11-02 14:51:14.172827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.300 [2024-11-02 14:51:14.172848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:22.300 [2024-11-02 14:51:14.182426] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.300 [2024-11-02 14:51:14.182457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.300 [2024-11-02 14:51:14.182474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:22.300 [2024-11-02 14:51:14.192344] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.300 [2024-11-02 14:51:14.192376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.300 [2024-11-02 14:51:14.192392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:22.300 [2024-11-02 14:51:14.202216] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.300 [2024-11-02 14:51:14.202263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.300 [2024-11-02 14:51:14.202309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.300 [2024-11-02 14:51:14.211981] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.300 [2024-11-02 14:51:14.212019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.300 [2024-11-02 14:51:14.212040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:22.300 [2024-11-02 14:51:14.221852] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.300 [2024-11-02 14:51:14.221890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.300 [2024-11-02 14:51:14.221911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:22.300 [2024-11-02 14:51:14.231428] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.300 [2024-11-02 14:51:14.231458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.300 [2024-11-02 14:51:14.231475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:22.300 [2024-11-02 14:51:14.240997] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.300 [2024-11-02 14:51:14.241035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.300 [2024-11-02 14:51:14.241056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.300 3242.00 IOPS, 405.25 MiB/s [2024-11-02T13:51:14.355Z] [2024-11-02 14:51:14.252415] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.300 [2024-11-02 14:51:14.252448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.300 [2024-11-02 14:51:14.252482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:22.300 [2024-11-02 14:51:14.261972] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.300 [2024-11-02 14:51:14.262023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.300 [2024-11-02 14:51:14.262044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:22.300 [2024-11-02 14:51:14.271480] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.300 [2024-11-02 14:51:14.271511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.300 [2024-11-02 14:51:14.271529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:22.300 [2024-11-02 14:51:14.281207] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.300 [2024-11-02 14:51:14.281265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.300 [2024-11-02 14:51:14.281303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.300 [2024-11-02 14:51:14.290175] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.300 [2024-11-02 14:51:14.290212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.300 [2024-11-02 14:51:14.290242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:22.300 [2024-11-02 14:51:14.300426] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.300 [2024-11-02 14:51:14.300479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.300 [2024-11-02 14:51:14.300497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:35:22.300 [2024-11-02 14:51:14.310026] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.300 [2024-11-02 14:51:14.310063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.300 [2024-11-02 14:51:14.310084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:22.300 [2024-11-02 14:51:14.319370] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.300 [2024-11-02 14:51:14.319402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.300 [2024-11-02 14:51:14.319436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.300 [2024-11-02 14:51:14.328582] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.300 [2024-11-02 14:51:14.328620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.300 [2024-11-02 14:51:14.328640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:22.300 [2024-11-02 14:51:14.337926] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.301 [2024-11-02 14:51:14.337964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.301 [2024-11-02 14:51:14.337984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:22.301 [2024-11-02 14:51:14.347097] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.301 [2024-11-02 14:51:14.347134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.301 [2024-11-02 14:51:14.347154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:22.559 [2024-11-02 14:51:14.356519] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.559 [2024-11-02 14:51:14.356571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.559 [2024-11-02 14:51:14.356589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.559 [2024-11-02 14:51:14.365995] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.559 [2024-11-02 14:51:14.366032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.559 [2024-11-02 14:51:14.366058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:22.559 [2024-11-02 14:51:14.375574] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.559 [2024-11-02 14:51:14.375612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.559 [2024-11-02 14:51:14.375632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:22.559 [2024-11-02 14:51:14.385466] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.559 [2024-11-02 14:51:14.385496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.559 [2024-11-02 14:51:14.385515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:22.559 [2024-11-02 14:51:14.395518] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.559 [2024-11-02 14:51:14.395567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.559 [2024-11-02 14:51:14.395585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.559 [2024-11-02 14:51:14.405410] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.559 [2024-11-02 14:51:14.405442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.559 [2024-11-02 14:51:14.405460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:22.559 [2024-11-02 14:51:14.415011] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.559 [2024-11-02 14:51:14.415048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.559 [2024-11-02 14:51:14.415068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:22.559 [2024-11-02 14:51:14.424307] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.559 [2024-11-02 14:51:14.424354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.559 [2024-11-02 14:51:14.424372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:22.559 [2024-11-02 14:51:14.433599] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.559 [2024-11-02 14:51:14.433636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.559 [2024-11-02 14:51:14.433659] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.559 [2024-11-02 14:51:14.442962] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.559 [2024-11-02 14:51:14.442998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.559 [2024-11-02 14:51:14.443023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:22.560 [2024-11-02 14:51:14.452348] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.560 [2024-11-02 14:51:14.452385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.560 [2024-11-02 14:51:14.452403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:22.560 [2024-11-02 14:51:14.461610] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.560 [2024-11-02 14:51:14.461647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.560 [2024-11-02 14:51:14.461668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:22.560 [2024-11-02 14:51:14.471086] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.560 [2024-11-02 14:51:14.471123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.560 [2024-11-02 14:51:14.471147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.560 [2024-11-02 14:51:14.480869] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.560 [2024-11-02 14:51:14.480907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.560 [2024-11-02 14:51:14.480935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:22.560 [2024-11-02 14:51:14.490848] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.560 [2024-11-02 14:51:14.490886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.560 [2024-11-02 14:51:14.490907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:22.560 [2024-11-02 14:51:14.500951] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.560 [2024-11-02 14:51:14.500988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.560 [2024-11-02 14:51:14.501010] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:22.560 [2024-11-02 14:51:14.510166] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.560 [2024-11-02 14:51:14.510204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.560 [2024-11-02 14:51:14.510227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.560 [2024-11-02 14:51:14.519416] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.560 [2024-11-02 14:51:14.519463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.560 [2024-11-02 14:51:14.519479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:22.560 [2024-11-02 14:51:14.528693] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.560 [2024-11-02 14:51:14.528731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.560 [2024-11-02 14:51:14.528762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:22.560 [2024-11-02 14:51:14.537815] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.560 [2024-11-02 14:51:14.537846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.560 [2024-11-02 14:51:14.537880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:22.560 [2024-11-02 14:51:14.547111] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.560 [2024-11-02 14:51:14.547149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.560 [2024-11-02 14:51:14.547180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.560 [2024-11-02 14:51:14.556386] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.560 [2024-11-02 14:51:14.556420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.560 [2024-11-02 14:51:14.556438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:22.560 [2024-11-02 14:51:14.565667] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.560 [2024-11-02 14:51:14.565705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:22.560 [2024-11-02 14:51:14.565725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:22.560 [2024-11-02 14:51:14.575159] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.560 [2024-11-02 14:51:14.575196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.560 [2024-11-02 14:51:14.575216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:22.560 [2024-11-02 14:51:14.585334] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.560 [2024-11-02 14:51:14.585382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.560 [2024-11-02 14:51:14.585400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.560 [2024-11-02 14:51:14.595207] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.560 [2024-11-02 14:51:14.595265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.560 [2024-11-02 14:51:14.595301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:22.560 [2024-11-02 14:51:14.604828] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.560 [2024-11-02 14:51:14.604867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.560 [2024-11-02 14:51:14.604893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:22.818 [2024-11-02 14:51:14.614398] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.818 [2024-11-02 14:51:14.614431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.818 [2024-11-02 14:51:14.614469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:22.818 [2024-11-02 14:51:14.623705] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.818 [2024-11-02 14:51:14.623744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.818 [2024-11-02 14:51:14.623775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.818 [2024-11-02 14:51:14.633015] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.819 [2024-11-02 14:51:14.633054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.819 [2024-11-02 14:51:14.633074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:22.819 [2024-11-02 14:51:14.641881] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.819 [2024-11-02 14:51:14.641919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.819 [2024-11-02 14:51:14.641940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:22.819 [2024-11-02 14:51:14.651121] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.819 [2024-11-02 14:51:14.651158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.819 [2024-11-02 14:51:14.651179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:22.819 [2024-11-02 14:51:14.660545] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.819 [2024-11-02 14:51:14.660595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.819 [2024-11-02 14:51:14.660616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.819 [2024-11-02 14:51:14.670430] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.819 [2024-11-02 14:51:14.670478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.819 [2024-11-02 14:51:14.670496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:22.819 [2024-11-02 14:51:14.679875] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.819 [2024-11-02 14:51:14.679913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.819 [2024-11-02 14:51:14.679938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:22.819 [2024-11-02 14:51:14.689657] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.819 [2024-11-02 14:51:14.689695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.819 [2024-11-02 14:51:14.689719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:22.819 [2024-11-02 14:51:14.699725] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.819 [2024-11-02 14:51:14.699763] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.819 [2024-11-02 14:51:14.699783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.819 [2024-11-02 14:51:14.708960] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.819 [2024-11-02 14:51:14.708997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.819 [2024-11-02 14:51:14.709017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:22.819 [2024-11-02 14:51:14.718291] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.819 [2024-11-02 14:51:14.718341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.819 [2024-11-02 14:51:14.718360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:22.819 [2024-11-02 14:51:14.727541] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.819 [2024-11-02 14:51:14.727574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.819 [2024-11-02 14:51:14.727608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:22.819 [2024-11-02 14:51:14.736892] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.819 [2024-11-02 14:51:14.736930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.819 [2024-11-02 14:51:14.736951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.819 [2024-11-02 14:51:14.746088] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.819 [2024-11-02 14:51:14.746125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.819 [2024-11-02 14:51:14.746145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:22.819 [2024-11-02 14:51:14.755370] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.819 [2024-11-02 14:51:14.755402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.819 [2024-11-02 14:51:14.755435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:22.819 [2024-11-02 14:51:14.765054] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.819 [2024-11-02 14:51:14.765091] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.819 [2024-11-02 14:51:14.765111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:22.819 [2024-11-02 14:51:14.775828] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.819 [2024-11-02 14:51:14.775865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.819 [2024-11-02 14:51:14.775892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.819 [2024-11-02 14:51:14.785558] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.819 [2024-11-02 14:51:14.785609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.819 [2024-11-02 14:51:14.785630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:22.819 [2024-11-02 14:51:14.795267] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.819 [2024-11-02 14:51:14.795317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.819 [2024-11-02 14:51:14.795335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:22.819 [2024-11-02 14:51:14.805058] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.819 [2024-11-02 14:51:14.805095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.819 [2024-11-02 14:51:14.805117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:22.819 [2024-11-02 14:51:14.814852] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.819 [2024-11-02 14:51:14.814890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.819 [2024-11-02 14:51:14.814911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.819 [2024-11-02 14:51:14.824639] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.819 [2024-11-02 14:51:14.824683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.819 [2024-11-02 14:51:14.824717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:22.819 [2024-11-02 14:51:14.834718] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 
00:35:22.819 [2024-11-02 14:51:14.834756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.819 [2024-11-02 14:51:14.834787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:22.819 [2024-11-02 14:51:14.844475] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.820 [2024-11-02 14:51:14.844524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.820 [2024-11-02 14:51:14.844542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:22.820 [2024-11-02 14:51:14.854235] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.820 [2024-11-02 14:51:14.854290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.820 [2024-11-02 14:51:14.854323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.820 [2024-11-02 14:51:14.863712] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.820 [2024-11-02 14:51:14.863752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.820 [2024-11-02 14:51:14.863787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:22.820 [2024-11-02 14:51:14.872615] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:22.820 [2024-11-02 14:51:14.872649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.820 [2024-11-02 14:51:14.872668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.078 [2024-11-02 14:51:14.882830] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:23.078 [2024-11-02 14:51:14.882867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.078 [2024-11-02 14:51:14.882888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.078 [2024-11-02 14:51:14.892760] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:23.078 [2024-11-02 14:51:14.892798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.078 [2024-11-02 14:51:14.892820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.078 [2024-11-02 14:51:14.903167] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1f41390) 00:35:23.078 [2024-11-02 14:51:14.903204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.078 [2024-11-02 14:51:14.903224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.078 [2024-11-02 14:51:14.912922] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:23.078 [2024-11-02 14:51:14.912960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.078 [2024-11-02 14:51:14.912981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.078 [2024-11-02 14:51:14.923145] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:23.078 [2024-11-02 14:51:14.923183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.079 [2024-11-02 14:51:14.923203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.079 [2024-11-02 14:51:14.932777] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:23.079 [2024-11-02 14:51:14.932815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.079 [2024-11-02 14:51:14.932835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.079 [2024-11-02 14:51:14.942849] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:23.079 [2024-11-02 14:51:14.942887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.079 [2024-11-02 14:51:14.942911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.079 [2024-11-02 14:51:14.953651] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:23.079 [2024-11-02 14:51:14.953688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.079 [2024-11-02 14:51:14.953709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.079 [2024-11-02 14:51:14.963314] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:23.079 [2024-11-02 14:51:14.963346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.079 [2024-11-02 14:51:14.963379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.079 [2024-11-02 14:51:14.973166] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:23.079 [2024-11-02 14:51:14.973203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.079 [2024-11-02 14:51:14.973223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.079 [2024-11-02 14:51:14.982775] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:23.079 [2024-11-02 14:51:14.982812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.079 [2024-11-02 14:51:14.982832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.079 [2024-11-02 14:51:14.993106] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:23.079 [2024-11-02 14:51:14.993144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.079 [2024-11-02 14:51:14.993165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.079 [2024-11-02 14:51:15.002907] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:23.079 [2024-11-02 14:51:15.002945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.079 [2024-11-02 14:51:15.002964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.079 [2024-11-02 14:51:15.012733] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:23.079 [2024-11-02 14:51:15.012781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.079 [2024-11-02 14:51:15.012802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.079 [2024-11-02 14:51:15.022480] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:23.079 [2024-11-02 14:51:15.022513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.079 [2024-11-02 14:51:15.022546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.079 [2024-11-02 14:51:15.032643] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:23.079 [2024-11-02 14:51:15.032681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.079 [2024-11-02 14:51:15.032709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:35:23.079 [2024-11-02 14:51:15.042534] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:23.079 [2024-11-02 14:51:15.042565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.079 [2024-11-02 14:51:15.042599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.079 [2024-11-02 14:51:15.052411] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:23.079 [2024-11-02 14:51:15.052458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.079 [2024-11-02 14:51:15.052476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.079 [2024-11-02 14:51:15.062502] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:23.079 [2024-11-02 14:51:15.062551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.079 [2024-11-02 14:51:15.062572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.079 [2024-11-02 14:51:15.072737] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:23.079 [2024-11-02 14:51:15.072774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.079 [2024-11-02 14:51:15.072796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.079 [2024-11-02 14:51:15.082470] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:23.079 [2024-11-02 14:51:15.082502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.079 [2024-11-02 14:51:15.082535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.079 [2024-11-02 14:51:15.092243] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:23.079 [2024-11-02 14:51:15.092289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.079 [2024-11-02 14:51:15.092310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.079 [2024-11-02 14:51:15.102392] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:23.079 [2024-11-02 14:51:15.102425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.079 [2024-11-02 14:51:15.102445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.079 [2024-11-02 14:51:15.112312] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:23.079 [2024-11-02 14:51:15.112371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.079 [2024-11-02 14:51:15.112388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.079 [2024-11-02 14:51:15.122438] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:23.079 [2024-11-02 14:51:15.122492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.079 [2024-11-02 14:51:15.122510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.079 [2024-11-02 14:51:15.131735] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:23.079 [2024-11-02 14:51:15.131773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.079 [2024-11-02 14:51:15.131793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.338 [2024-11-02 14:51:15.141182] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:23.338 [2024-11-02 14:51:15.141219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.338 [2024-11-02 14:51:15.141239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.338 [2024-11-02 14:51:15.150375] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:23.338 [2024-11-02 14:51:15.150426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.338 [2024-11-02 14:51:15.150444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.338 [2024-11-02 14:51:15.159681] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:23.338 [2024-11-02 14:51:15.159718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.338 [2024-11-02 14:51:15.159744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.338 [2024-11-02 14:51:15.168957] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:23.338 [2024-11-02 14:51:15.168993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.338 [2024-11-02 14:51:15.169013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.338 [2024-11-02 14:51:15.178112] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:23.338 [2024-11-02 14:51:15.178148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.338 [2024-11-02 14:51:15.178168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.338 [2024-11-02 14:51:15.187346] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:23.338 [2024-11-02 14:51:15.187401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.338 [2024-11-02 14:51:15.187418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.338 [2024-11-02 14:51:15.196622] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:23.338 [2024-11-02 14:51:15.196659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.338 [2024-11-02 14:51:15.196679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.338 [2024-11-02 14:51:15.205848] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:23.338 [2024-11-02 14:51:15.205884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.338 [2024-11-02 14:51:15.205904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.338 [2024-11-02 14:51:15.215163] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:23.338 [2024-11-02 14:51:15.215200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.338 [2024-11-02 14:51:15.215219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.338 [2024-11-02 14:51:15.224529] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:23.338 [2024-11-02 14:51:15.224579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.338 [2024-11-02 14:51:15.224602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.338 [2024-11-02 14:51:15.233841] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:23.338 [2024-11-02 14:51:15.233879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.338 [2024-11-02 14:51:15.233900] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.339 [2024-11-02 14:51:15.243177] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:23.339 [2024-11-02 14:51:15.243214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.339 [2024-11-02 14:51:15.243234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.339 [2024-11-02 14:51:15.252508] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f41390) 00:35:23.339 [2024-11-02 14:51:15.252559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.339 [2024-11-02 14:51:15.252577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.339 3242.00 IOPS, 405.25 MiB/s 00:35:23.339 Latency(us) 00:35:23.339 [2024-11-02T13:51:15.394Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:23.339 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:35:23.339 nvme0n1 : 2.01 3241.46 405.18 0.00 0.00 4931.55 1529.17 11602.30 00:35:23.339 [2024-11-02T13:51:15.394Z] =================================================================================================================== 00:35:23.339 [2024-11-02T13:51:15.394Z] Total : 3241.46 405.18 0.00 0.00 4931.55 1529.17 11602.30 00:35:23.339 { 00:35:23.339 "results": [ 00:35:23.339 { 00:35:23.339 "job": "nvme0n1", 00:35:23.339 "core_mask": "0x2", 00:35:23.339 "workload": "randread", 00:35:23.339 "status": "finished", 00:35:23.339 "queue_depth": 16, 00:35:23.339 "io_size": 131072, 00:35:23.339 "runtime": 2.005577, 00:35:23.339 "iops": 3241.461185484277, 00:35:23.339 "mibps": 405.1826481855346, 00:35:23.339 "io_failed": 0, 00:35:23.339 "io_timeout": 0, 00:35:23.339 "avg_latency_us": 4931.546943319261, 00:35:23.339 "min_latency_us": 1529.1733333333334, 00:35:23.339 "max_latency_us": 11602.29925925926 00:35:23.339 } 00:35:23.339 ], 00:35:23.339 "core_count": 1 00:35:23.339 } 00:35:23.339 14:51:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:23.339 14:51:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:23.339 14:51:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:23.339 14:51:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:23.339 | .driver_specific 00:35:23.339 | .nvme_error 00:35:23.339 | .status_code 00:35:23.339 | .command_transient_transport_error' 00:35:23.597 14:51:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 209 > 0 )) 00:35:23.597 14:51:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1528360 00:35:23.597 14:51:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1528360 ']' 00:35:23.597 14:51:15 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1528360 00:35:23.597 14:51:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:35:23.597 14:51:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:23.597 14:51:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1528360 00:35:23.597 14:51:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:23.597 14:51:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:23.597 14:51:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1528360' 00:35:23.597 killing process with pid 1528360 00:35:23.597 14:51:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1528360 00:35:23.597 Received shutdown signal, test time was about 2.000000 seconds 00:35:23.597 00:35:23.597 Latency(us) 00:35:23.597 [2024-11-02T13:51:15.652Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:23.597 [2024-11-02T13:51:15.652Z] =================================================================================================================== 00:35:23.597 [2024-11-02T13:51:15.652Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:23.597 14:51:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1528360 00:35:23.854 14:51:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:35:23.854 14:51:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:23.854 14:51:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:35:23.854 14:51:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:35:23.854 14:51:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:35:23.854 14:51:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1528875 00:35:23.854 14:51:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:35:23.854 14:51:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1528875 /var/tmp/bperf.sock 00:35:23.854 14:51:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1528875 ']' 00:35:23.854 14:51:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:23.854 14:51:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:23.854 14:51:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:23.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
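The pass/fail check traced just above ("(( 209 > 0 ))") is host/digest.sh's get_transient_errcount: it reads the per-controller NVMe error counters that bdev_get_iostat exposes once bdev_nvme_set_options --nvme-error-stat is in effect, and pulls out the COMMAND TRANSIENT TRANSPORT ERROR count with jq. A minimal sketch of that query, assuming an SPDK checkout at $SPDK_DIR and a bdevperf instance already listening on /var/tmp/bperf.sock as in this run:

  #!/usr/bin/env bash
  # Sketch of the transient-error count check traced above; the paths are assumptions.
  SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
  BPERF_SOCK=/var/tmp/bperf.sock

  get_transient_errcount() {
      local bdev=$1
      # nvme_error statistics appear in bdev_get_iostat output only when the
      # bdev_nvme module was configured with --nvme-error-stat, as earlier in this test.
      "$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_get_iostat -b "$bdev" \
          | jq -r '.bdevs[0]
                   | .driver_specific
                   | .nvme_error
                   | .status_code
                   | .command_transient_transport_error'
  }

  # Pass if at least one injected digest error completed as a transient
  # transport error (this run counted 209 of them on nvme0n1).
  (( $(get_transient_errcount nvme0n1) > 0 ))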
00:35:23.854 14:51:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:23.854 14:51:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:23.854 [2024-11-02 14:51:15.863537] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:35:23.854 [2024-11-02 14:51:15.863655] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1528875 ] 00:35:24.111 [2024-11-02 14:51:15.926079] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:24.111 [2024-11-02 14:51:16.015979] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:35:24.111 14:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:24.111 14:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:35:24.111 14:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:24.111 14:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:24.676 14:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:24.676 14:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:24.676 14:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:24.676 14:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:24.676 14:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:24.676 14:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:24.935 nvme0n1 00:35:24.935 14:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:35:24.935 14:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:24.935 14:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:24.935 14:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:24.935 14:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:24.935 14:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:24.935 Running I/O for 2 seconds... 
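For reference, the setup just traced for the randwrite digest-error pass condenses to roughly the sketch below. This is not the literal test script: the flags, addresses and paths are taken from the trace, but which application rpc_cmd talks to for the error injection is not visible in this excerpt, so $TGT_SOCK stands in for that socket as an assumption.

  # Rough sketch of the randwrite digest-error setup traced above (assumptions noted inline).
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  BPERF_SOCK=/var/tmp/bperf.sock
  TGT_SOCK=/var/tmp/spdk.sock   # RPC socket used by rpc_cmd in the test (assumed)

  # 1. bdevperf in wait-for-RPC mode (-z): 4 KiB random writes, queue depth 128, 2 s run.
  "$SPDK_DIR/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" \
      -w randwrite -o 4096 -t 2 -q 128 -z &

  # 2. Keep per-controller NVMe error counters and retry failed I/O indefinitely.
  "$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options \
      --nvme-error-stat --bdev-retry-count -1

  # 3. Corrupt the next 256 crc32c operations in the accel framework, so the data
  #    digests carried by the TCP PDUs stop matching the payload.
  "$SPDK_DIR/scripts/rpc.py" -s "$TGT_SOCK" accel_error_inject_error \
      -o crc32c -t corrupt -i 256

  # 4. Attach the target with data digest enabled (--ddgst); the mismatches are then
  #    reported as the "Data digest error ... TRANSIENT TRANSPORT ERROR" completions below.
  "$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # 5. Start the I/O.
  "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests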
00:35:24.935 [2024-11-02 14:51:16.963231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:24.935 [2024-11-02 14:51:16.963537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:15491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.935 [2024-11-02 14:51:16.963574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:24.935 [2024-11-02 14:51:16.978932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:24.935 [2024-11-02 14:51:16.979205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:16855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.935 [2024-11-02 14:51:16.979239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:25.194 [2024-11-02 14:51:16.995067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:25.194 [2024-11-02 14:51:16.995353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:25.194 [2024-11-02 14:51:16.995382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:25.194 [2024-11-02 14:51:17.010617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:25.194 [2024-11-02 14:51:17.010882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:25.194 [2024-11-02 14:51:17.010916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:25.194 [2024-11-02 14:51:17.026232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:25.194 [2024-11-02 14:51:17.026515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:25.194 [2024-11-02 14:51:17.026558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:25.194 [2024-11-02 14:51:17.041819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:25.194 [2024-11-02 14:51:17.042081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:25.194 [2024-11-02 14:51:17.042113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:25.194 [2024-11-02 14:51:17.057202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:25.194 [2024-11-02 14:51:17.057492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:14967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:25.194 [2024-11-02 14:51:17.057520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 
sqhd:007d p:0 m:0 dnr:0 00:35:25.194 [2024-11-02 14:51:17.072720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:25.194 [2024-11-02 14:51:17.072982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:25.194 [2024-11-02 14:51:17.073014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:25.194 [2024-11-02 14:51:17.088022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:25.194 [2024-11-02 14:51:17.088283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:25.194 [2024-11-02 14:51:17.088327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:25.194 [2024-11-02 14:51:17.103401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:25.194 [2024-11-02 14:51:17.103672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:25.194 [2024-11-02 14:51:17.103704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:25.194 [2024-11-02 14:51:17.118856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:25.194 [2024-11-02 14:51:17.119115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:25.194 [2024-11-02 14:51:17.119153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:25.194 [2024-11-02 14:51:17.134335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:25.194 [2024-11-02 14:51:17.134609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:18138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:25.194 [2024-11-02 14:51:17.134641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:25.194 [2024-11-02 14:51:17.149688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:25.194 [2024-11-02 14:51:17.149951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:25.194 [2024-11-02 14:51:17.149982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:25.194 [2024-11-02 14:51:17.165008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:25.194 [2024-11-02 14:51:17.165280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:25.194 [2024-11-02 14:51:17.165325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 
cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:25.194 [2024-11-02 14:51:17.180388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:25.194 [2024-11-02 14:51:17.180642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:25.194 [2024-11-02 14:51:17.180675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:25.194 [2024-11-02 14:51:17.195734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:25.194 [2024-11-02 14:51:17.195996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:20664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:25.194 [2024-11-02 14:51:17.196038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:25.194 [2024-11-02 14:51:17.211065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:25.194 [2024-11-02 14:51:17.211335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:24113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:25.194 [2024-11-02 14:51:17.211363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:25.194 [2024-11-02 14:51:17.226460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:25.194 [2024-11-02 14:51:17.226728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:25.194 [2024-11-02 14:51:17.226770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:25.194 [2024-11-02 14:51:17.241868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:25.194 [2024-11-02 14:51:17.242133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:25.194 [2024-11-02 14:51:17.242165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:25.453 [2024-11-02 14:51:17.257761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:25.453 [2024-11-02 14:51:17.258040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:25.453 [2024-11-02 14:51:17.258073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:25.453 [2024-11-02 14:51:17.273146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:25.453 [2024-11-02 14:51:17.273416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:25.453 [2024-11-02 14:51:17.273443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:25.453 [2024-11-02 14:51:17.288417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:25.453 [2024-11-02 14:51:17.288683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:25.453 [2024-11-02 14:51:17.288716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:25.453 [2024-11-02 14:51:17.303767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:25.453 [2024-11-02 14:51:17.304028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:25.453 [2024-11-02 14:51:17.304061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:25.453 [2024-11-02 14:51:17.319287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:25.453 [2024-11-02 14:51:17.319562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:25.453 [2024-11-02 14:51:17.319594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:25.453 [2024-11-02 14:51:17.334745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:25.453 [2024-11-02 14:51:17.335004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:25.453 [2024-11-02 14:51:17.335036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:25.453 [2024-11-02 14:51:17.349985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:25.453 [2024-11-02 14:51:17.350245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:25.453 [2024-11-02 14:51:17.350301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:25.453 [2024-11-02 14:51:17.365476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:25.453 [2024-11-02 14:51:17.365747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:25.453 [2024-11-02 14:51:17.365778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:25.453 [2024-11-02 14:51:17.380804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:25.453 [2024-11-02 14:51:17.381061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:25.453 [2024-11-02 14:51:17.381093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:25.453 [2024-11-02 14:51:17.396188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:25.453 [2024-11-02 14:51:17.396467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:25.453 [2024-11-02 14:51:17.396496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:25.453 [2024-11-02 14:51:17.411508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:25.453 [2024-11-02 14:51:17.411792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:25.453 [2024-11-02 14:51:17.411825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:25.453 [2024-11-02 14:51:17.426834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:25.453 [2024-11-02 14:51:17.427094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:15423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:25.453 [2024-11-02 14:51:17.427126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:25.453 [2024-11-02 14:51:17.442122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:25.453 [2024-11-02 14:51:17.442409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:25.453 [2024-11-02 14:51:17.442439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:25.453 [2024-11-02 14:51:17.457338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:25.453 [2024-11-02 14:51:17.457611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:25.453 [2024-11-02 14:51:17.457644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:25.453 [2024-11-02 14:51:17.472625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:25.453 [2024-11-02 14:51:17.472886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:25.453 [2024-11-02 14:51:17.472919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:25.453 [2024-11-02 14:51:17.488016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:25.453 [2024-11-02 14:51:17.488296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:25.454 [2024-11-02 14:51:17.488326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:25.454 [2024-11-02 14:51:17.503404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:25.454 [2024-11-02 14:51:17.503721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:25.454 [2024-11-02 14:51:17.503755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:25.712 [2024-11-02 14:51:17.519301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:25.712 [2024-11-02 14:51:17.519568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:25.712 [2024-11-02 14:51:17.519623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:25.712 [2024-11-02 14:51:17.534579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:25.712 [2024-11-02 14:51:17.534837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:25.712 [2024-11-02 14:51:17.534869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:25.713 [2024-11-02 14:51:17.549802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:25.713 [2024-11-02 14:51:17.550062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:25.713 [2024-11-02 14:51:17.550095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:25.713 [2024-11-02 14:51:17.565096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:25.713 [2024-11-02 14:51:17.565388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:25.713 [2024-11-02 14:51:17.565416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:25.713 [2024-11-02 14:51:17.580367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:25.713 [2024-11-02 14:51:17.580624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:6126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:25.713 [2024-11-02 14:51:17.580651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:25.713 [2024-11-02 14:51:17.595757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:25.713 [2024-11-02 14:51:17.596022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:25.713 [2024-11-02 14:51:17.596056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:25.713 [2024-11-02 14:51:17.611212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:25.713 [2024-11-02 14:51:17.611489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:25.713 [2024-11-02 14:51:17.611520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:25.713 [2024-11-02 14:51:17.626670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:25.713 [2024-11-02 14:51:17.626930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:25.713 [2024-11-02 14:51:17.626962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:25.713 [2024-11-02 14:51:17.642024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:25.713 [2024-11-02 14:51:17.642283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:25.713 [2024-11-02 14:51:17.642326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:25.713 [2024-11-02 14:51:17.657344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:25.713 [2024-11-02 14:51:17.657616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:25.713 [2024-11-02 14:51:17.657649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:25.713 [2024-11-02 14:51:17.672727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:25.713 [2024-11-02 14:51:17.672989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:3327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:25.713 [2024-11-02 14:51:17.673022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:25.713 [2024-11-02 14:51:17.688119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:25.713 [2024-11-02 14:51:17.688414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:25.713 [2024-11-02 14:51:17.688443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:25.713 [2024-11-02 14:51:17.703415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:25.713 [2024-11-02 14:51:17.703690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:25.713 [2024-11-02 14:51:17.703722] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:25.713 [2024-11-02 14:51:17.718735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:25.713 [2024-11-02 14:51:17.718995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:25.713 [2024-11-02 14:51:17.719027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:25.713 [2024-11-02 14:51:17.734002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:25.713 [2024-11-02 14:51:17.734273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:25.713 [2024-11-02 14:51:17.734322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:25.713 [2024-11-02 14:51:17.749357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:25.713 [2024-11-02 14:51:17.749630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:14874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:25.713 [2024-11-02 14:51:17.749662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:25.713 [2024-11-02 14:51:17.764971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:25.713 [2024-11-02 14:51:17.765234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:25.713 [2024-11-02 14:51:17.765274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:25.972 [2024-11-02 14:51:17.780618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:25.972 [2024-11-02 14:51:17.780879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:25.972 [2024-11-02 14:51:17.780913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:25.972 [2024-11-02 14:51:17.795933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:25.972 [2024-11-02 14:51:17.796194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:25.972 [2024-11-02 14:51:17.796228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:25.972 [2024-11-02 14:51:17.811366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:25.972 [2024-11-02 14:51:17.811627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:25.972 [2024-11-02 14:51:17.811660] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:25.972 [2024-11-02 14:51:17.826731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:25.972 [2024-11-02 14:51:17.826993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:25.972 [2024-11-02 14:51:17.827026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:25.972 [2024-11-02 14:51:17.842096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:25.972 [2024-11-02 14:51:17.842371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:25.972 [2024-11-02 14:51:17.842399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:25.972 [2024-11-02 14:51:17.857508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:25.972 [2024-11-02 14:51:17.857775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:25.972 [2024-11-02 14:51:17.857809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:25.972 [2024-11-02 14:51:17.872873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:25.972 [2024-11-02 14:51:17.873139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:25.972 [2024-11-02 14:51:17.873172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:25.972 [2024-11-02 14:51:17.888696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:25.972 [2024-11-02 14:51:17.888955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:25.972 [2024-11-02 14:51:17.888988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:25.972 [2024-11-02 14:51:17.904090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:25.972 [2024-11-02 14:51:17.904375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:25.972 [2024-11-02 14:51:17.904405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:25.972 [2024-11-02 14:51:17.919412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:25.972 [2024-11-02 14:51:17.919686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:25.972 [2024-11-02 14:51:17.919718] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:25.972 [2024-11-02 14:51:17.934763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:25.972 [2024-11-02 14:51:17.935021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:25.972 [2024-11-02 14:51:17.935053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:25.972 16515.00 IOPS, 64.51 MiB/s [2024-11-02T13:51:18.027Z] [2024-11-02 14:51:17.949979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:25.972 [2024-11-02 14:51:17.950239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:25.972 [2024-11-02 14:51:17.950282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:25.972 [2024-11-02 14:51:17.965462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:25.972 [2024-11-02 14:51:17.965743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:25.972 [2024-11-02 14:51:17.965774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:25.972 [2024-11-02 14:51:17.980848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:25.972 [2024-11-02 14:51:17.981109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:8479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:25.972 [2024-11-02 14:51:17.981142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:25.972 [2024-11-02 14:51:17.996126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:25.972 [2024-11-02 14:51:17.996411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:25.972 [2024-11-02 14:51:17.996439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:25.972 [2024-11-02 14:51:18.011366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:25.972 [2024-11-02 14:51:18.011636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:25.972 [2024-11-02 14:51:18.011669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:26.231 [2024-11-02 14:51:18.026997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:26.231 [2024-11-02 14:51:18.027299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23653 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:35:26.231 [2024-11-02 14:51:18.027329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:26.231 [2024-11-02 14:51:18.042482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:26.231 [2024-11-02 14:51:18.042749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:18666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.231 [2024-11-02 14:51:18.042781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:26.231 [2024-11-02 14:51:18.057802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:26.231 [2024-11-02 14:51:18.058065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:18969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.231 [2024-11-02 14:51:18.058103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:26.231 [2024-11-02 14:51:18.073226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:26.231 [2024-11-02 14:51:18.073502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.231 [2024-11-02 14:51:18.073530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:26.231 [2024-11-02 14:51:18.088662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:26.231 [2024-11-02 14:51:18.088929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.231 [2024-11-02 14:51:18.088962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:26.231 [2024-11-02 14:51:18.103939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:26.231 [2024-11-02 14:51:18.104199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.231 [2024-11-02 14:51:18.104233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:26.231 [2024-11-02 14:51:18.119342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:26.231 [2024-11-02 14:51:18.119609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.231 [2024-11-02 14:51:18.119636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:26.231 [2024-11-02 14:51:18.134618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:26.231 [2024-11-02 14:51:18.134877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:11611 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.231 [2024-11-02 14:51:18.134910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:26.231 [2024-11-02 14:51:18.149935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:26.231 [2024-11-02 14:51:18.150200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.231 [2024-11-02 14:51:18.150233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:26.231 [2024-11-02 14:51:18.165236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:26.231 [2024-11-02 14:51:18.165503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.231 [2024-11-02 14:51:18.165532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:26.231 [2024-11-02 14:51:18.180507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:26.231 [2024-11-02 14:51:18.180772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.231 [2024-11-02 14:51:18.180805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:26.231 [2024-11-02 14:51:18.195815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:26.231 [2024-11-02 14:51:18.196081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.231 [2024-11-02 14:51:18.196114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:26.231 [2024-11-02 14:51:18.211132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:26.231 [2024-11-02 14:51:18.211417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.231 [2024-11-02 14:51:18.211446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:26.231 [2024-11-02 14:51:18.226425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:26.231 [2024-11-02 14:51:18.226700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.231 [2024-11-02 14:51:18.226733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:26.231 [2024-11-02 14:51:18.241670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:26.231 [2024-11-02 14:51:18.241930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11198 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.231 [2024-11-02 14:51:18.241963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:26.231 [2024-11-02 14:51:18.256967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:26.232 [2024-11-02 14:51:18.257227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.232 [2024-11-02 14:51:18.257267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:26.232 [2024-11-02 14:51:18.272345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:26.232 [2024-11-02 14:51:18.272616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.232 [2024-11-02 14:51:18.272649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:26.490 [2024-11-02 14:51:18.288047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:26.490 [2024-11-02 14:51:18.288323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:21815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.490 [2024-11-02 14:51:18.288355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:26.490 [2024-11-02 14:51:18.303407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:26.490 [2024-11-02 14:51:18.303668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.490 [2024-11-02 14:51:18.303702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:26.490 [2024-11-02 14:51:18.318778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:26.490 [2024-11-02 14:51:18.319038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.490 [2024-11-02 14:51:18.319071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:26.490 [2024-11-02 14:51:18.334227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:26.490 [2024-11-02 14:51:18.334527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.490 [2024-11-02 14:51:18.334554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:26.490 [2024-11-02 14:51:18.349521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:26.490 [2024-11-02 14:51:18.349802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 
lba:24048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.490 [2024-11-02 14:51:18.349834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:26.490 [2024-11-02 14:51:18.364807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:26.491 [2024-11-02 14:51:18.365069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.491 [2024-11-02 14:51:18.365101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:26.491 [2024-11-02 14:51:18.380147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:26.491 [2024-11-02 14:51:18.380431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.491 [2024-11-02 14:51:18.380460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:26.491 [2024-11-02 14:51:18.395411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:26.491 [2024-11-02 14:51:18.395683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.491 [2024-11-02 14:51:18.395714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:26.491 [2024-11-02 14:51:18.410645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:26.491 [2024-11-02 14:51:18.410920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.491 [2024-11-02 14:51:18.410952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:26.491 [2024-11-02 14:51:18.425816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:26.491 [2024-11-02 14:51:18.426075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.491 [2024-11-02 14:51:18.426106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:26.491 [2024-11-02 14:51:18.441172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:26.491 [2024-11-02 14:51:18.441440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:13413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.491 [2024-11-02 14:51:18.441468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:26.491 [2024-11-02 14:51:18.456518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:26.491 [2024-11-02 14:51:18.456797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:2 nsid:1 lba:2862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.491 [2024-11-02 14:51:18.456835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:26.491 [2024-11-02 14:51:18.471898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:26.491 [2024-11-02 14:51:18.472159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.491 [2024-11-02 14:51:18.472190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:26.491 [2024-11-02 14:51:18.487329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:26.491 [2024-11-02 14:51:18.487605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.491 [2024-11-02 14:51:18.487638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:26.491 [2024-11-02 14:51:18.502610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:26.491 [2024-11-02 14:51:18.502873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.491 [2024-11-02 14:51:18.502906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:26.491 [2024-11-02 14:51:18.517860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:26.491 [2024-11-02 14:51:18.518120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:12056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.491 [2024-11-02 14:51:18.518153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:26.491 [2024-11-02 14:51:18.533119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:26.491 [2024-11-02 14:51:18.533398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.491 [2024-11-02 14:51:18.533428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:26.749 [2024-11-02 14:51:18.548934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:26.749 [2024-11-02 14:51:18.549196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.749 [2024-11-02 14:51:18.549229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:26.749 [2024-11-02 14:51:18.564336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:26.749 [2024-11-02 14:51:18.564616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:5395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.749 [2024-11-02 14:51:18.564648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:26.749 [2024-11-02 14:51:18.579753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:26.749 [2024-11-02 14:51:18.580012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.749 [2024-11-02 14:51:18.580043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:26.749 [2024-11-02 14:51:18.595179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:26.749 [2024-11-02 14:51:18.595462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:11614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.749 [2024-11-02 14:51:18.595491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:26.749 [2024-11-02 14:51:18.610422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:26.749 [2024-11-02 14:51:18.610713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.749 [2024-11-02 14:51:18.610746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:26.749 [2024-11-02 14:51:18.625715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:26.749 [2024-11-02 14:51:18.625974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.749 [2024-11-02 14:51:18.626006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:26.749 [2024-11-02 14:51:18.640926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:26.750 [2024-11-02 14:51:18.641195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.750 [2024-11-02 14:51:18.641227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:26.750 [2024-11-02 14:51:18.656316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:26.750 [2024-11-02 14:51:18.656571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:18855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.750 [2024-11-02 14:51:18.656616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:26.750 [2024-11-02 14:51:18.671641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:26.750 [2024-11-02 14:51:18.671901] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.750 [2024-11-02 14:51:18.671933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:26.750 [2024-11-02 14:51:18.687070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:26.750 [2024-11-02 14:51:18.687341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.750 [2024-11-02 14:51:18.687368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:26.750 [2024-11-02 14:51:18.702323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:26.750 [2024-11-02 14:51:18.702604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.750 [2024-11-02 14:51:18.702637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:26.750 [2024-11-02 14:51:18.717686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:26.750 [2024-11-02 14:51:18.717950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.750 [2024-11-02 14:51:18.717982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:26.750 [2024-11-02 14:51:18.732983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:26.750 [2024-11-02 14:51:18.733242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.750 [2024-11-02 14:51:18.733282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:26.750 [2024-11-02 14:51:18.748273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:26.750 [2024-11-02 14:51:18.748527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:13267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.750 [2024-11-02 14:51:18.748569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:26.750 [2024-11-02 14:51:18.763621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:26.750 [2024-11-02 14:51:18.763879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.750 [2024-11-02 14:51:18.763909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:26.750 [2024-11-02 14:51:18.778903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:26.750 [2024-11-02 14:51:18.779166] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.750 [2024-11-02 14:51:18.779197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:26.750 [2024-11-02 14:51:18.794181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:26.750 [2024-11-02 14:51:18.794461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.750 [2024-11-02 14:51:18.794506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:27.010 [2024-11-02 14:51:18.809804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:27.010 [2024-11-02 14:51:18.810069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.010 [2024-11-02 14:51:18.810102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:27.010 [2024-11-02 14:51:18.825055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:27.010 [2024-11-02 14:51:18.825328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.010 [2024-11-02 14:51:18.825355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:27.010 [2024-11-02 14:51:18.840353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:27.010 [2024-11-02 14:51:18.840626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.010 [2024-11-02 14:51:18.840659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:27.010 [2024-11-02 14:51:18.855650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:27.010 [2024-11-02 14:51:18.855913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.010 [2024-11-02 14:51:18.855950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:27.010 [2024-11-02 14:51:18.870885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:27.010 [2024-11-02 14:51:18.871149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.010 [2024-11-02 14:51:18.871181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:27.010 [2024-11-02 14:51:18.886231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:27.010 [2024-11-02 14:51:18.886517] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.010 [2024-11-02 14:51:18.886545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:27.010 [2024-11-02 14:51:18.901666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:27.010 [2024-11-02 14:51:18.901927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.010 [2024-11-02 14:51:18.901959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:27.010 [2024-11-02 14:51:18.916935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:27.010 [2024-11-02 14:51:18.917195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.010 [2024-11-02 14:51:18.917228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:27.011 [2024-11-02 14:51:18.932315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:27.011 [2024-11-02 14:51:18.932567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.011 [2024-11-02 14:51:18.932600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:27.011 [2024-11-02 14:51:18.947719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xede790) with pdu=0x2000198fe2e8 00:35:27.011 [2024-11-02 14:51:18.947978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.011 [2024-11-02 14:51:18.948010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:27.011 16584.50 IOPS, 64.78 MiB/s 00:35:27.011 Latency(us) 00:35:27.011 [2024-11-02T13:51:19.066Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:27.011 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:27.011 nvme0n1 : 2.01 16588.60 64.80 0.00 0.00 7697.64 3543.80 16019.91 00:35:27.011 [2024-11-02T13:51:19.066Z] =================================================================================================================== 00:35:27.011 [2024-11-02T13:51:19.066Z] Total : 16588.60 64.80 0.00 0.00 7697.64 3543.80 16019.91 00:35:27.011 { 00:35:27.011 "results": [ 00:35:27.011 { 00:35:27.011 "job": "nvme0n1", 00:35:27.011 "core_mask": "0x2", 00:35:27.011 "workload": "randwrite", 00:35:27.011 "status": "finished", 00:35:27.011 "queue_depth": 128, 00:35:27.011 "io_size": 4096, 00:35:27.011 "runtime": 2.009151, 00:35:27.011 "iops": 16588.598865889126, 00:35:27.011 "mibps": 64.7992143198794, 00:35:27.011 "io_failed": 0, 00:35:27.011 "io_timeout": 0, 00:35:27.011 "avg_latency_us": 7697.644289135366, 00:35:27.011 "min_latency_us": 3543.7985185185184, 00:35:27.011 "max_latency_us": 16019.91111111111 00:35:27.011 } 
00:35:27.011 ], 00:35:27.011 "core_count": 1 00:35:27.011 } 00:35:27.011 14:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:27.011 14:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:27.011 14:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:27.011 14:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:27.011 | .driver_specific 00:35:27.011 | .nvme_error 00:35:27.011 | .status_code 00:35:27.012 | .command_transient_transport_error' 00:35:27.277 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 130 > 0 )) 00:35:27.277 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1528875 00:35:27.277 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1528875 ']' 00:35:27.277 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1528875 00:35:27.277 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:35:27.277 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:27.277 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1528875 00:35:27.277 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:27.277 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:27.277 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1528875' 00:35:27.277 killing process with pid 1528875 00:35:27.277 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1528875 00:35:27.277 Received shutdown signal, test time was about 2.000000 seconds 00:35:27.277 00:35:27.277 Latency(us) 00:35:27.277 [2024-11-02T13:51:19.332Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:27.277 [2024-11-02T13:51:19.332Z] =================================================================================================================== 00:35:27.277 [2024-11-02T13:51:19.332Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:27.277 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1528875 00:35:27.535 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:35:27.535 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:27.535 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:35:27.535 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:35:27.535 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:35:27.535 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1529284 00:35:27.535 14:51:19 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:35:27.535 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1529284 /var/tmp/bperf.sock 00:35:27.535 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1529284 ']' 00:35:27.535 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:27.535 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:27.535 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:27.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:27.535 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:27.535 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:27.535 [2024-11-02 14:51:19.542249] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:35:27.535 [2024-11-02 14:51:19.542361] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1529284 ] 00:35:27.535 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:27.535 Zero copy mechanism will not be used. 
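A few records above, the digest.sh trace (host/digest.sh@71, @27 and @28) reads the per-bdev NVMe error counters back over the bperf RPC socket and asserts that the first randwrite run produced at least one COMMAND TRANSIENT TRANSPORT ERROR completion (it reported 130) before killing that bperf instance. A minimal sketch of that check, reusing the rpc.py path, socket and jq filter exactly as they appear in the trace (copied from the log, not independently re-verified):

  # Hedged restatement of the traced error-count check; all names come from the log above.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( errcount > 0 ))   # the run above yields 130, so the assertion holds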
00:35:27.793 [2024-11-02 14:51:19.602740] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:27.793 [2024-11-02 14:51:19.694346] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:35:27.793 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:27.793 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:35:27.793 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:27.793 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:28.052 14:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:28.052 14:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:28.052 14:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:28.052 14:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:28.052 14:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:28.052 14:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:28.618 nvme0n1 00:35:28.618 14:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:35:28.618 14:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:28.618 14:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:28.618 14:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:28.618 14:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:28.618 14:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:28.876 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:28.876 Zero copy mechanism will not be used. 00:35:28.876 Running I/O for 2 seconds... 
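For the 131072-byte run whose I/O loop starts just below, the trace shows the full setup: bdevperf is relaunched with -z so the workload is only started later via the perform_tests RPC, per-NVMe error statistics are enabled, the crc32c error injection is disabled while the controller is attached with TCP data digest (--ddgst), the injection is re-armed in corrupt mode (with the -i 32 argument seen in the trace), and the workload is then kicked off over the bperf socket. Each corrupted digest is reported by tcp.c as a "Data digest error" and surfaces as the COMMAND TRANSIENT TRANSPORT ERROR completions that fill the records below. A condensed, hedged restatement of those traced commands (binaries, sockets and arguments copied from the log; rpc_cmd is the autotest helper used in the trace):

  # Sketch assembled from the digest.sh trace above; not re-verified here.
  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &

  $spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  rpc_cmd accel_error_inject_error -o crc32c -t disable            # injection off while attaching
  $spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32      # re-arm injection (-i 32, as traced)
  $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests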
00:35:28.876 [2024-11-02 14:51:20.699942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:28.876 [2024-11-02 14:51:20.700339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.876 [2024-11-02 14:51:20.700394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:28.876 [2024-11-02 14:51:20.709927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:28.876 [2024-11-02 14:51:20.710124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.877 [2024-11-02 14:51:20.710153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:28.877 [2024-11-02 14:51:20.719911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:28.877 [2024-11-02 14:51:20.720112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.877 [2024-11-02 14:51:20.720140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:28.877 [2024-11-02 14:51:20.730143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:28.877 [2024-11-02 14:51:20.730491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.877 [2024-11-02 14:51:20.730537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.877 [2024-11-02 14:51:20.740463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:28.877 [2024-11-02 14:51:20.740793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.877 [2024-11-02 14:51:20.740824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:28.877 [2024-11-02 14:51:20.751525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:28.877 [2024-11-02 14:51:20.751869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.877 [2024-11-02 14:51:20.751900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:28.877 [2024-11-02 14:51:20.762286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:28.877 [2024-11-02 14:51:20.762621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.877 [2024-11-02 14:51:20.762653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:28.877 [2024-11-02 14:51:20.772710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:28.877 [2024-11-02 14:51:20.773084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.877 [2024-11-02 14:51:20.773128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.877 [2024-11-02 14:51:20.783348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:28.877 [2024-11-02 14:51:20.783708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.877 [2024-11-02 14:51:20.783738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:28.877 [2024-11-02 14:51:20.794026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:28.877 [2024-11-02 14:51:20.794411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.877 [2024-11-02 14:51:20.794442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:28.877 [2024-11-02 14:51:20.804100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:28.877 [2024-11-02 14:51:20.804471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.877 [2024-11-02 14:51:20.804517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:28.877 [2024-11-02 14:51:20.815513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:28.877 [2024-11-02 14:51:20.815871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.877 [2024-11-02 14:51:20.815900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.877 [2024-11-02 14:51:20.825834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:28.877 [2024-11-02 14:51:20.826173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.877 [2024-11-02 14:51:20.826222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:28.877 [2024-11-02 14:51:20.836370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:28.877 [2024-11-02 14:51:20.836726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.877 [2024-11-02 14:51:20.836755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:28.877 [2024-11-02 14:51:20.847002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:28.877 [2024-11-02 14:51:20.847351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.877 [2024-11-02 14:51:20.847380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:28.877 [2024-11-02 14:51:20.856660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:28.877 [2024-11-02 14:51:20.857027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.877 [2024-11-02 14:51:20.857057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.877 [2024-11-02 14:51:20.866804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:28.877 [2024-11-02 14:51:20.867040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.877 [2024-11-02 14:51:20.867070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:28.877 [2024-11-02 14:51:20.876249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:28.877 [2024-11-02 14:51:20.876671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.877 [2024-11-02 14:51:20.876702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:28.877 [2024-11-02 14:51:20.885234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:28.877 [2024-11-02 14:51:20.885658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.877 [2024-11-02 14:51:20.885695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:28.877 [2024-11-02 14:51:20.894658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:28.877 [2024-11-02 14:51:20.895049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.877 [2024-11-02 14:51:20.895081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.877 [2024-11-02 14:51:20.903471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:28.877 [2024-11-02 14:51:20.903846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.877 [2024-11-02 14:51:20.903877] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:28.877 [2024-11-02 14:51:20.912495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:28.877 [2024-11-02 14:51:20.912842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.877 [2024-11-02 14:51:20.912872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:28.877 [2024-11-02 14:51:20.920922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:28.877 [2024-11-02 14:51:20.921272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.877 [2024-11-02 14:51:20.921308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:28.877 [2024-11-02 14:51:20.930028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:28.877 [2024-11-02 14:51:20.930342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.877 [2024-11-02 14:51:20.930372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.136 [2024-11-02 14:51:20.939286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.136 [2024-11-02 14:51:20.939696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.136 [2024-11-02 14:51:20.939727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:29.136 [2024-11-02 14:51:20.949167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.136 [2024-11-02 14:51:20.949566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.136 [2024-11-02 14:51:20.949597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:29.136 [2024-11-02 14:51:20.958040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.136 [2024-11-02 14:51:20.958487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.136 [2024-11-02 14:51:20.958531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:29.136 [2024-11-02 14:51:20.967681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.136 [2024-11-02 14:51:20.968036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.136 [2024-11-02 
14:51:20.968066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.136 [2024-11-02 14:51:20.976879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.136 [2024-11-02 14:51:20.977232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.136 [2024-11-02 14:51:20.977271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:29.136 [2024-11-02 14:51:20.985778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.136 [2024-11-02 14:51:20.986115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.136 [2024-11-02 14:51:20.986146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:29.136 [2024-11-02 14:51:20.994958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.136 [2024-11-02 14:51:20.995335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.136 [2024-11-02 14:51:20.995366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:29.136 [2024-11-02 14:51:21.003391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.136 [2024-11-02 14:51:21.003754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.137 [2024-11-02 14:51:21.003784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.137 [2024-11-02 14:51:21.012228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.137 [2024-11-02 14:51:21.012514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.137 [2024-11-02 14:51:21.012544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:29.137 [2024-11-02 14:51:21.020766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.137 [2024-11-02 14:51:21.021126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.137 [2024-11-02 14:51:21.021156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:29.137 [2024-11-02 14:51:21.029108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.137 [2024-11-02 14:51:21.029508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:35:29.137 [2024-11-02 14:51:21.029539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:29.137 [2024-11-02 14:51:21.037955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.137 [2024-11-02 14:51:21.038355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.137 [2024-11-02 14:51:21.038386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.137 [2024-11-02 14:51:21.046549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.137 [2024-11-02 14:51:21.046924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.137 [2024-11-02 14:51:21.046954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:29.137 [2024-11-02 14:51:21.055199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.137 [2024-11-02 14:51:21.055485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.137 [2024-11-02 14:51:21.055530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:29.137 [2024-11-02 14:51:21.064005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.137 [2024-11-02 14:51:21.064320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.137 [2024-11-02 14:51:21.064351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:29.137 [2024-11-02 14:51:21.072597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.137 [2024-11-02 14:51:21.072922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.137 [2024-11-02 14:51:21.072953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.137 [2024-11-02 14:51:21.080733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.137 [2024-11-02 14:51:21.080994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.137 [2024-11-02 14:51:21.081024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:29.137 [2024-11-02 14:51:21.088262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.137 [2024-11-02 14:51:21.088581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.137 [2024-11-02 14:51:21.088611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:29.137 [2024-11-02 14:51:21.096432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.137 [2024-11-02 14:51:21.096767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.137 [2024-11-02 14:51:21.096811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:29.137 [2024-11-02 14:51:21.105297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.137 [2024-11-02 14:51:21.105671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.137 [2024-11-02 14:51:21.105701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.137 [2024-11-02 14:51:21.114700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.137 [2024-11-02 14:51:21.115191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.137 [2024-11-02 14:51:21.115227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:29.137 [2024-11-02 14:51:21.124564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.137 [2024-11-02 14:51:21.124966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.137 [2024-11-02 14:51:21.124996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:29.137 [2024-11-02 14:51:21.133230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.137 [2024-11-02 14:51:21.133552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.137 [2024-11-02 14:51:21.133591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:29.137 [2024-11-02 14:51:21.142845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.137 [2024-11-02 14:51:21.143229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.137 [2024-11-02 14:51:21.143266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.137 [2024-11-02 14:51:21.152166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.137 [2024-11-02 14:51:21.152471] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.137 [2024-11-02 14:51:21.152502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:29.137 [2024-11-02 14:51:21.161901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.137 [2024-11-02 14:51:21.162307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.137 [2024-11-02 14:51:21.162337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:29.137 [2024-11-02 14:51:21.171833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.137 [2024-11-02 14:51:21.172219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.137 [2024-11-02 14:51:21.172270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:29.137 [2024-11-02 14:51:21.182063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.137 [2024-11-02 14:51:21.182424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.137 [2024-11-02 14:51:21.182455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.396 [2024-11-02 14:51:21.190947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.396 [2024-11-02 14:51:21.191232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.396 [2024-11-02 14:51:21.191273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:29.396 [2024-11-02 14:51:21.199582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.396 [2024-11-02 14:51:21.199975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.396 [2024-11-02 14:51:21.200020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:29.396 [2024-11-02 14:51:21.208118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.396 [2024-11-02 14:51:21.208471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.396 [2024-11-02 14:51:21.208501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:29.396 [2024-11-02 14:51:21.217494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.396 [2024-11-02 14:51:21.217804] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.396 [2024-11-02 14:51:21.217834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.396 [2024-11-02 14:51:21.227406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.396 [2024-11-02 14:51:21.227803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.396 [2024-11-02 14:51:21.227833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:29.396 [2024-11-02 14:51:21.237023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.396 [2024-11-02 14:51:21.237396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.396 [2024-11-02 14:51:21.237427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:29.396 [2024-11-02 14:51:21.246567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.396 [2024-11-02 14:51:21.246935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.396 [2024-11-02 14:51:21.246966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:29.396 [2024-11-02 14:51:21.255584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.396 [2024-11-02 14:51:21.255943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.396 [2024-11-02 14:51:21.255973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.396 [2024-11-02 14:51:21.264627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.396 [2024-11-02 14:51:21.264987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.396 [2024-11-02 14:51:21.265017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:29.396 [2024-11-02 14:51:21.272873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.396 [2024-11-02 14:51:21.273235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.396 [2024-11-02 14:51:21.273272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:29.397 [2024-11-02 14:51:21.281868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 
00:35:29.397 [2024-11-02 14:51:21.282139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.397 [2024-11-02 14:51:21.282169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:29.397 [2024-11-02 14:51:21.290347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.397 [2024-11-02 14:51:21.290604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.397 [2024-11-02 14:51:21.290634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.397 [2024-11-02 14:51:21.298974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.397 [2024-11-02 14:51:21.299280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.397 [2024-11-02 14:51:21.299312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:29.397 [2024-11-02 14:51:21.307304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.397 [2024-11-02 14:51:21.307587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.397 [2024-11-02 14:51:21.307617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:29.397 [2024-11-02 14:51:21.315252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.397 [2024-11-02 14:51:21.315554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.397 [2024-11-02 14:51:21.315585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:29.397 [2024-11-02 14:51:21.323714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.397 [2024-11-02 14:51:21.324051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.397 [2024-11-02 14:51:21.324082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.397 [2024-11-02 14:51:21.332842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.397 [2024-11-02 14:51:21.333183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.397 [2024-11-02 14:51:21.333213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:29.397 [2024-11-02 14:51:21.341984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.397 [2024-11-02 14:51:21.342330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.397 [2024-11-02 14:51:21.342362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:29.397 [2024-11-02 14:51:21.350763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.397 [2024-11-02 14:51:21.351106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.397 [2024-11-02 14:51:21.351142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:29.397 [2024-11-02 14:51:21.359335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.397 [2024-11-02 14:51:21.359706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.397 [2024-11-02 14:51:21.359735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.397 [2024-11-02 14:51:21.368732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.397 [2024-11-02 14:51:21.369053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.397 [2024-11-02 14:51:21.369083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:29.397 [2024-11-02 14:51:21.378141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.397 [2024-11-02 14:51:21.378428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.397 [2024-11-02 14:51:21.378457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:29.397 [2024-11-02 14:51:21.387725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.397 [2024-11-02 14:51:21.388102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.397 [2024-11-02 14:51:21.388132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:29.397 [2024-11-02 14:51:21.396833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.397 [2024-11-02 14:51:21.397156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.397 [2024-11-02 14:51:21.397186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.397 [2024-11-02 14:51:21.406384] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.397 [2024-11-02 14:51:21.406651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.397 [2024-11-02 14:51:21.406682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:29.397 [2024-11-02 14:51:21.415715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.397 [2024-11-02 14:51:21.416048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.397 [2024-11-02 14:51:21.416078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:29.397 [2024-11-02 14:51:21.425176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.397 [2024-11-02 14:51:21.425512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.397 [2024-11-02 14:51:21.425542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:29.397 [2024-11-02 14:51:21.434540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.397 [2024-11-02 14:51:21.434903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.397 [2024-11-02 14:51:21.434934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.397 [2024-11-02 14:51:21.443208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.397 [2024-11-02 14:51:21.443480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.397 [2024-11-02 14:51:21.443511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:29.656 [2024-11-02 14:51:21.451779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.656 [2024-11-02 14:51:21.452135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.656 [2024-11-02 14:51:21.452167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:29.656 [2024-11-02 14:51:21.460768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.656 [2024-11-02 14:51:21.461136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.656 [2024-11-02 14:51:21.461181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
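The repeated data_crc32_calc_done errors above are the digest-error-injection pass of this test: NVMe/TCP data PDUs can carry an optional CRC32C data digest (negotiated at connect time), and a receiver that computes a different CRC32C over the payload rejects the PDU, so the WRITE is completed back with a transport error instead of reaching the namespace. As a minimal sketch only (crc32c_sw is a made-up name, not SPDK's helper; the reflected Castagnoli polynomial 0x82F63B78 and the 0xFFFFFFFF init/final XOR are the standard CRC32C parameters), the digest over a payload buffer can be computed bitwise like this:

#include <stddef.h>
#include <stdint.h>

/* Bitwise CRC32C (Castagnoli) over a buffer -- the same checksum the
 * NVMe/TCP data digest uses. Production code would normally use a
 * table-driven or hardware-accelerated (SSE4.2 / ARMv8 CRC) variant. */
static uint32_t
crc32c_sw(const void *buf, size_t len)
{
        const uint8_t *p = buf;
        uint32_t crc = 0xFFFFFFFFu;

        while (len--) {
                crc ^= *p++;
                for (int i = 0; i < 8; i++) {
                        crc = (crc & 1) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
                }
        }
        return crc ^ 0xFFFFFFFFu;
}

A mismatch between this value and the digest carried in the PDU is exactly what tcp.c reports above as "Data digest error".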
00:35:29.656 [2024-11-02 14:51:21.469823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.656 [2024-11-02 14:51:21.470176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.656 [2024-11-02 14:51:21.470206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.656 [2024-11-02 14:51:21.477921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.656 [2024-11-02 14:51:21.478328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.656 [2024-11-02 14:51:21.478358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:29.656 [2024-11-02 14:51:21.486706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.656 [2024-11-02 14:51:21.486977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.656 [2024-11-02 14:51:21.487008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:29.656 [2024-11-02 14:51:21.495960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.656 [2024-11-02 14:51:21.496293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.656 [2024-11-02 14:51:21.496324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:29.656 [2024-11-02 14:51:21.505123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.657 [2024-11-02 14:51:21.505529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.657 [2024-11-02 14:51:21.505566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.657 [2024-11-02 14:51:21.514269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.657 [2024-11-02 14:51:21.514688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.657 [2024-11-02 14:51:21.514718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:29.657 [2024-11-02 14:51:21.523776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.657 [2024-11-02 14:51:21.524150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.657 [2024-11-02 14:51:21.524180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:29.657 [2024-11-02 14:51:21.533620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.657 [2024-11-02 14:51:21.534052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.657 [2024-11-02 14:51:21.534082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:29.657 [2024-11-02 14:51:21.543390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.657 [2024-11-02 14:51:21.543727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.657 [2024-11-02 14:51:21.543757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.657 [2024-11-02 14:51:21.553170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.657 [2024-11-02 14:51:21.553560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.657 [2024-11-02 14:51:21.553605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:29.657 [2024-11-02 14:51:21.562924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.657 [2024-11-02 14:51:21.563343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.657 [2024-11-02 14:51:21.563373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:29.657 [2024-11-02 14:51:21.572290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.657 [2024-11-02 14:51:21.572684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.657 [2024-11-02 14:51:21.572714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:29.657 [2024-11-02 14:51:21.582349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.657 [2024-11-02 14:51:21.582740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.657 [2024-11-02 14:51:21.582771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.657 [2024-11-02 14:51:21.591588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.657 [2024-11-02 14:51:21.591884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.657 [2024-11-02 14:51:21.591916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:29.657 [2024-11-02 14:51:21.601585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.657 [2024-11-02 14:51:21.601790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.657 [2024-11-02 14:51:21.601819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:29.657 [2024-11-02 14:51:21.611368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.657 [2024-11-02 14:51:21.611737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.657 [2024-11-02 14:51:21.611782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:29.657 [2024-11-02 14:51:21.621040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.657 [2024-11-02 14:51:21.621364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.657 [2024-11-02 14:51:21.621395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.657 [2024-11-02 14:51:21.630798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.657 [2024-11-02 14:51:21.631192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.657 [2024-11-02 14:51:21.631222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:29.657 [2024-11-02 14:51:21.640247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.657 [2024-11-02 14:51:21.640588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.657 [2024-11-02 14:51:21.640618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:29.657 [2024-11-02 14:51:21.650125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.657 [2024-11-02 14:51:21.650470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.657 [2024-11-02 14:51:21.650500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:29.657 [2024-11-02 14:51:21.660137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.657 [2024-11-02 14:51:21.660447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.657 [2024-11-02 14:51:21.660478] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.657 [2024-11-02 14:51:21.668758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.657 [2024-11-02 14:51:21.669053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.657 [2024-11-02 14:51:21.669083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:29.657 [2024-11-02 14:51:21.677163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.657 [2024-11-02 14:51:21.677474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.657 [2024-11-02 14:51:21.677505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:29.657 [2024-11-02 14:51:21.686784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.657 [2024-11-02 14:51:21.687070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.657 [2024-11-02 14:51:21.687100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:29.657 3328.00 IOPS, 416.00 MiB/s [2024-11-02T13:51:21.712Z] [2024-11-02 14:51:21.696561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.657 [2024-11-02 14:51:21.696827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.657 [2024-11-02 14:51:21.696859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.657 [2024-11-02 14:51:21.705471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.657 [2024-11-02 14:51:21.705891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.657 [2024-11-02 14:51:21.705920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:29.929 [2024-11-02 14:51:21.714964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.929 [2024-11-02 14:51:21.715304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.929 [2024-11-02 14:51:21.715337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:29.929 [2024-11-02 14:51:21.724080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.929 [2024-11-02 14:51:21.724491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.929 [2024-11-02 14:51:21.724537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:29.929 [2024-11-02 14:51:21.732971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.929 [2024-11-02 14:51:21.733375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.929 [2024-11-02 14:51:21.733406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.929 [2024-11-02 14:51:21.742452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.929 [2024-11-02 14:51:21.742830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.929 [2024-11-02 14:51:21.742860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:29.929 [2024-11-02 14:51:21.751829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.929 [2024-11-02 14:51:21.752130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.929 [2024-11-02 14:51:21.752168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:29.929 [2024-11-02 14:51:21.761712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.930 [2024-11-02 14:51:21.761997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.930 [2024-11-02 14:51:21.762028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:29.930 [2024-11-02 14:51:21.770907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.930 [2024-11-02 14:51:21.771145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.930 [2024-11-02 14:51:21.771175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.930 [2024-11-02 14:51:21.780182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.930 [2024-11-02 14:51:21.780475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.930 [2024-11-02 14:51:21.780507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:29.930 [2024-11-02 14:51:21.788693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.930 [2024-11-02 14:51:21.789022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.930 [2024-11-02 14:51:21.789052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:29.930 [2024-11-02 14:51:21.797654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.930 [2024-11-02 14:51:21.797997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.930 [2024-11-02 14:51:21.798028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:29.930 [2024-11-02 14:51:21.806346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.930 [2024-11-02 14:51:21.806779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.930 [2024-11-02 14:51:21.806811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.930 [2024-11-02 14:51:21.815553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.930 [2024-11-02 14:51:21.815861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.930 [2024-11-02 14:51:21.815891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:29.930 [2024-11-02 14:51:21.824274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.930 [2024-11-02 14:51:21.824663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.930 [2024-11-02 14:51:21.824708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:29.930 [2024-11-02 14:51:21.832737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.930 [2024-11-02 14:51:21.833013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.930 [2024-11-02 14:51:21.833043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:29.930 [2024-11-02 14:51:21.840761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.930 [2024-11-02 14:51:21.841114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.930 [2024-11-02 14:51:21.841143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.930 [2024-11-02 14:51:21.848618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.930 [2024-11-02 14:51:21.848915] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.930 [2024-11-02 14:51:21.848946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:29.930 [2024-11-02 14:51:21.857206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.930 [2024-11-02 14:51:21.857521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.930 [2024-11-02 14:51:21.857552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:29.930 [2024-11-02 14:51:21.865609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.931 [2024-11-02 14:51:21.865915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.931 [2024-11-02 14:51:21.865946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:29.931 [2024-11-02 14:51:21.874217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.931 [2024-11-02 14:51:21.874545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.931 [2024-11-02 14:51:21.874576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.931 [2024-11-02 14:51:21.882801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.931 [2024-11-02 14:51:21.883138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.931 [2024-11-02 14:51:21.883168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:29.931 [2024-11-02 14:51:21.892178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.931 [2024-11-02 14:51:21.892566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.931 [2024-11-02 14:51:21.892611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:29.931 [2024-11-02 14:51:21.902375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.931 [2024-11-02 14:51:21.902716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.931 [2024-11-02 14:51:21.902747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:29.931 [2024-11-02 14:51:21.911706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.931 
[2024-11-02 14:51:21.912038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.931 [2024-11-02 14:51:21.912069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.931 [2024-11-02 14:51:21.921292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.932 [2024-11-02 14:51:21.921619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.932 [2024-11-02 14:51:21.921649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:29.932 [2024-11-02 14:51:21.930900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.932 [2024-11-02 14:51:21.931252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.932 [2024-11-02 14:51:21.931291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:29.932 [2024-11-02 14:51:21.939838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.932 [2024-11-02 14:51:21.940138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.932 [2024-11-02 14:51:21.940169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:29.932 [2024-11-02 14:51:21.948264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.932 [2024-11-02 14:51:21.948624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.932 [2024-11-02 14:51:21.948653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.932 [2024-11-02 14:51:21.957714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.932 [2024-11-02 14:51:21.958101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.932 [2024-11-02 14:51:21.958131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:29.932 [2024-11-02 14:51:21.967364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:29.932 [2024-11-02 14:51:21.967730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.932 [2024-11-02 14:51:21.967771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:29.932 [2024-11-02 14:51:21.977519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) 
with pdu=0x2000198fef90 00:35:29.932 [2024-11-02 14:51:21.977845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.933 [2024-11-02 14:51:21.977875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:30.193 [2024-11-02 14:51:21.987010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.193 [2024-11-02 14:51:21.987331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.193 [2024-11-02 14:51:21.987370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:30.193 [2024-11-02 14:51:21.996967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.193 [2024-11-02 14:51:21.997276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.193 [2024-11-02 14:51:21.997316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:30.193 [2024-11-02 14:51:22.005655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.193 [2024-11-02 14:51:22.005992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.193 [2024-11-02 14:51:22.006022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:30.193 [2024-11-02 14:51:22.015141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.193 [2024-11-02 14:51:22.015447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.193 [2024-11-02 14:51:22.015477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:30.193 [2024-11-02 14:51:22.024770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.193 [2024-11-02 14:51:22.025173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.193 [2024-11-02 14:51:22.025203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:30.193 [2024-11-02 14:51:22.033478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.193 [2024-11-02 14:51:22.033858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.193 [2024-11-02 14:51:22.033887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:30.193 [2024-11-02 14:51:22.042736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.193 [2024-11-02 14:51:22.043017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.193 [2024-11-02 14:51:22.043047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:30.193 [2024-11-02 14:51:22.051655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.193 [2024-11-02 14:51:22.051966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.193 [2024-11-02 14:51:22.052012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:30.193 [2024-11-02 14:51:22.061104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.193 [2024-11-02 14:51:22.061453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.193 [2024-11-02 14:51:22.061484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:30.193 [2024-11-02 14:51:22.070423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.193 [2024-11-02 14:51:22.070724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.193 [2024-11-02 14:51:22.070755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:30.193 [2024-11-02 14:51:22.080215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.193 [2024-11-02 14:51:22.080527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.193 [2024-11-02 14:51:22.080558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:30.193 [2024-11-02 14:51:22.088942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.193 [2024-11-02 14:51:22.089291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.193 [2024-11-02 14:51:22.089322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:30.193 [2024-11-02 14:51:22.099563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.193 [2024-11-02 14:51:22.099990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.193 [2024-11-02 14:51:22.100020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:30.193 [2024-11-02 14:51:22.109669] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.193 [2024-11-02 14:51:22.109946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.193 [2024-11-02 14:51:22.109978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:30.193 [2024-11-02 14:51:22.119568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.193 [2024-11-02 14:51:22.119906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.193 [2024-11-02 14:51:22.119937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:30.193 [2024-11-02 14:51:22.129328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.193 [2024-11-02 14:51:22.129607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.193 [2024-11-02 14:51:22.129638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:30.193 [2024-11-02 14:51:22.139302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.193 [2024-11-02 14:51:22.139608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.193 [2024-11-02 14:51:22.139638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:30.194 [2024-11-02 14:51:22.149057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.194 [2024-11-02 14:51:22.149439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.194 [2024-11-02 14:51:22.149469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:30.194 [2024-11-02 14:51:22.159224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.194 [2024-11-02 14:51:22.159609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.194 [2024-11-02 14:51:22.159640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:30.194 [2024-11-02 14:51:22.168950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.194 [2024-11-02 14:51:22.169306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.194 [2024-11-02 14:51:22.169336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
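Each injected digest failure is surfaced to the host as COMMAND TRANSIENT TRANSPORT ERROR (00/22), i.e. status code type 0x0 (generic command status) with status code 0x22, and dnr:0, so the command is eligible for retry rather than being failed up the stack. As a sketch of how those printed fields map onto the 16-bit status in completion dword 3 (struct cpl_status and decode_status are illustrative names, not SPDK types; the bit positions follow the NVMe base specification):

#include <stdint.h>
#include <stdio.h>

/* Illustrative decode of the NVMe completion status (CQE DW3 bits 31:16,
 * phase bit included) into the fields printed in the log. */
struct cpl_status {
        uint8_t p;    /* phase tag */
        uint8_t sc;   /* status code: 0x22 = Transient Transport Error */
        uint8_t sct;  /* status code type: 0x0 = generic command status */
        uint8_t crd;  /* command retry delay index */
        uint8_t m;    /* more information available in a log page */
        uint8_t dnr;  /* do not retry; 0 means the host may resubmit */
};

static struct cpl_status
decode_status(uint16_t status)
{
        struct cpl_status s = {
                .p   = status & 0x1,
                .sc  = (status >> 1) & 0xFF,
                .sct = (status >> 9) & 0x7,
                .crd = (status >> 12) & 0x3,
                .m   = (status >> 14) & 0x1,
                .dnr = (status >> 15) & 0x1,
        };
        return s;
}

int
main(void)
{
        /* 0x0044 encodes sct:0x0 sc:0x22 p:0 m:0 dnr:0, matching the
         * completions above. */
        struct cpl_status s = decode_status(0x0044);
        printf("sct:%#x sc:%#x dnr:%u\n", s.sct, s.sc, s.dnr);
        return 0;
}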
00:35:30.194 [2024-11-02 14:51:22.178117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.194 [2024-11-02 14:51:22.178427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.194 [2024-11-02 14:51:22.178457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:30.194 [2024-11-02 14:51:22.187488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.194 [2024-11-02 14:51:22.187830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.194 [2024-11-02 14:51:22.187859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:30.194 [2024-11-02 14:51:22.197713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.194 [2024-11-02 14:51:22.198085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.194 [2024-11-02 14:51:22.198115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:30.194 [2024-11-02 14:51:22.207845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.194 [2024-11-02 14:51:22.208213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.194 [2024-11-02 14:51:22.208243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:30.194 [2024-11-02 14:51:22.217657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.194 [2024-11-02 14:51:22.217983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.194 [2024-11-02 14:51:22.218013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:30.194 [2024-11-02 14:51:22.227500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.194 [2024-11-02 14:51:22.227831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.194 [2024-11-02 14:51:22.227861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:30.194 [2024-11-02 14:51:22.237232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.194 [2024-11-02 14:51:22.237536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.194 [2024-11-02 14:51:22.237571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:30.194 [2024-11-02 14:51:22.247018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.194 [2024-11-02 14:51:22.247365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.194 [2024-11-02 14:51:22.247396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:30.452 [2024-11-02 14:51:22.256548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.452 [2024-11-02 14:51:22.256944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.452 [2024-11-02 14:51:22.256974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:30.452 [2024-11-02 14:51:22.266516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.452 [2024-11-02 14:51:22.266803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.452 [2024-11-02 14:51:22.266832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:30.452 [2024-11-02 14:51:22.276557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.452 [2024-11-02 14:51:22.276896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.452 [2024-11-02 14:51:22.276925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:30.452 [2024-11-02 14:51:22.285909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.452 [2024-11-02 14:51:22.286169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.452 [2024-11-02 14:51:22.286199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:30.452 [2024-11-02 14:51:22.295502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.452 [2024-11-02 14:51:22.295875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.452 [2024-11-02 14:51:22.295907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:30.452 [2024-11-02 14:51:22.304968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.452 [2024-11-02 14:51:22.305224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.452 [2024-11-02 14:51:22.305254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:30.452 [2024-11-02 14:51:22.315088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.452 [2024-11-02 14:51:22.315398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.452 [2024-11-02 14:51:22.315427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:30.452 [2024-11-02 14:51:22.325613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.452 [2024-11-02 14:51:22.325976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.452 [2024-11-02 14:51:22.326005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:30.452 [2024-11-02 14:51:22.335863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.452 [2024-11-02 14:51:22.336224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.452 [2024-11-02 14:51:22.336278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:30.452 [2024-11-02 14:51:22.346017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.452 [2024-11-02 14:51:22.346365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.452 [2024-11-02 14:51:22.346395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:30.452 [2024-11-02 14:51:22.356500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.452 [2024-11-02 14:51:22.356881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.452 [2024-11-02 14:51:22.356911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:30.452 [2024-11-02 14:51:22.367003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.452 [2024-11-02 14:51:22.367421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.452 [2024-11-02 14:51:22.367450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:30.452 [2024-11-02 14:51:22.377562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.452 [2024-11-02 14:51:22.377919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.452 [2024-11-02 14:51:22.377948] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:30.452 [2024-11-02 14:51:22.387418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.452 [2024-11-02 14:51:22.387881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.452 [2024-11-02 14:51:22.387910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:30.452 [2024-11-02 14:51:22.396923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.452 [2024-11-02 14:51:22.397207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.452 [2024-11-02 14:51:22.397238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:30.452 [2024-11-02 14:51:22.406518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.452 [2024-11-02 14:51:22.406842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.452 [2024-11-02 14:51:22.406871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:30.453 [2024-11-02 14:51:22.416420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.453 [2024-11-02 14:51:22.416828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.453 [2024-11-02 14:51:22.416857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:30.453 [2024-11-02 14:51:22.426074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.453 [2024-11-02 14:51:22.426394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.453 [2024-11-02 14:51:22.426425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:30.453 [2024-11-02 14:51:22.435103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.453 [2024-11-02 14:51:22.435316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.453 [2024-11-02 14:51:22.435344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:30.453 [2024-11-02 14:51:22.444923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.453 [2024-11-02 14:51:22.445200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.453 
[2024-11-02 14:51:22.445230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:30.453 [2024-11-02 14:51:22.453798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.453 [2024-11-02 14:51:22.454181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.453 [2024-11-02 14:51:22.454212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:30.453 [2024-11-02 14:51:22.463499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.453 [2024-11-02 14:51:22.463864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.453 [2024-11-02 14:51:22.463894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:30.453 [2024-11-02 14:51:22.473680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.453 [2024-11-02 14:51:22.474055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.453 [2024-11-02 14:51:22.474086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:30.453 [2024-11-02 14:51:22.484352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.453 [2024-11-02 14:51:22.484753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.453 [2024-11-02 14:51:22.484798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:30.453 [2024-11-02 14:51:22.494096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.453 [2024-11-02 14:51:22.494466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.453 [2024-11-02 14:51:22.494510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:30.453 [2024-11-02 14:51:22.503644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.453 [2024-11-02 14:51:22.503887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.453 [2024-11-02 14:51:22.503918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:30.712 [2024-11-02 14:51:22.512385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.712 [2024-11-02 14:51:22.512685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:35:30.712 [2024-11-02 14:51:22.512714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:30.712 [2024-11-02 14:51:22.522468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.712 [2024-11-02 14:51:22.522775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.712 [2024-11-02 14:51:22.522804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:30.712 [2024-11-02 14:51:22.531442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.712 [2024-11-02 14:51:22.531772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.712 [2024-11-02 14:51:22.531802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:30.712 [2024-11-02 14:51:22.541219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.712 [2024-11-02 14:51:22.541601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.712 [2024-11-02 14:51:22.541631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:30.712 [2024-11-02 14:51:22.551149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.712 [2024-11-02 14:51:22.551496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.712 [2024-11-02 14:51:22.551525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:30.712 [2024-11-02 14:51:22.559501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.712 [2024-11-02 14:51:22.559787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.712 [2024-11-02 14:51:22.559818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:30.712 [2024-11-02 14:51:22.568597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.712 [2024-11-02 14:51:22.568899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.712 [2024-11-02 14:51:22.568929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:30.712 [2024-11-02 14:51:22.577948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.712 [2024-11-02 14:51:22.578308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.712 [2024-11-02 14:51:22.578346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:30.712 [2024-11-02 14:51:22.586762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.712 [2024-11-02 14:51:22.587098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.712 [2024-11-02 14:51:22.587127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:30.712 [2024-11-02 14:51:22.596358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.712 [2024-11-02 14:51:22.596695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.712 [2024-11-02 14:51:22.596725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:30.712 [2024-11-02 14:51:22.605455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.712 [2024-11-02 14:51:22.605826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.712 [2024-11-02 14:51:22.605856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:30.712 [2024-11-02 14:51:22.615386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.712 [2024-11-02 14:51:22.615773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.712 [2024-11-02 14:51:22.615802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:30.712 [2024-11-02 14:51:22.623652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.712 [2024-11-02 14:51:22.623924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.712 [2024-11-02 14:51:22.623953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:30.712 [2024-11-02 14:51:22.633507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.712 [2024-11-02 14:51:22.633864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.712 [2024-11-02 14:51:22.633893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:30.712 [2024-11-02 14:51:22.642128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.712 [2024-11-02 14:51:22.642425] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.712 [2024-11-02 14:51:22.642455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:30.712 [2024-11-02 14:51:22.651592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.712 [2024-11-02 14:51:22.651902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.712 [2024-11-02 14:51:22.651932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:30.712 [2024-11-02 14:51:22.660431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.712 [2024-11-02 14:51:22.660764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.712 [2024-11-02 14:51:22.660794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:30.712 [2024-11-02 14:51:22.668849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.712 [2024-11-02 14:51:22.669114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.712 [2024-11-02 14:51:22.669143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:30.712 [2024-11-02 14:51:22.677955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.712 [2024-11-02 14:51:22.678217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.712 [2024-11-02 14:51:22.678247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:30.712 [2024-11-02 14:51:22.687002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.712 [2024-11-02 14:51:22.687379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.712 [2024-11-02 14:51:22.687409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:30.712 3311.00 IOPS, 413.88 MiB/s [2024-11-02T13:51:22.767Z] [2024-11-02 14:51:22.696894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xedead0) with pdu=0x2000198fef90 00:35:30.712 [2024-11-02 14:51:22.697216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.712 [2024-11-02 14:51:22.697247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:30.712 00:35:30.712 Latency(us) 00:35:30.712 [2024-11-02T13:51:22.767Z] Device Information : runtime(s) IOPS 
MiB/s Fail/s TO/s Average min max 00:35:30.712 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:35:30.712 nvme0n1 : 2.01 3310.90 413.86 0.00 0.00 4821.62 2475.80 11213.94 00:35:30.712 [2024-11-02T13:51:22.767Z] =================================================================================================================== 00:35:30.712 [2024-11-02T13:51:22.767Z] Total : 3310.90 413.86 0.00 0.00 4821.62 2475.80 11213.94 00:35:30.712 { 00:35:30.712 "results": [ 00:35:30.712 { 00:35:30.712 "job": "nvme0n1", 00:35:30.712 "core_mask": "0x2", 00:35:30.712 "workload": "randwrite", 00:35:30.712 "status": "finished", 00:35:30.712 "queue_depth": 16, 00:35:30.712 "io_size": 131072, 00:35:30.712 "runtime": 2.006104, 00:35:30.712 "iops": 3310.8951480082787, 00:35:30.712 "mibps": 413.86189350103484, 00:35:30.712 "io_failed": 0, 00:35:30.712 "io_timeout": 0, 00:35:30.712 "avg_latency_us": 4821.617863428017, 00:35:30.712 "min_latency_us": 2475.8044444444445, 00:35:30.712 "max_latency_us": 11213.937777777777 00:35:30.712 } 00:35:30.712 ], 00:35:30.712 "core_count": 1 00:35:30.712 } 00:35:30.712 14:51:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:30.712 14:51:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:30.713 14:51:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:30.713 | .driver_specific 00:35:30.713 | .nvme_error 00:35:30.713 | .status_code 00:35:30.713 | .command_transient_transport_error' 00:35:30.713 14:51:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:30.970 14:51:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 214 > 0 )) 00:35:30.970 14:51:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1529284 00:35:30.970 14:51:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1529284 ']' 00:35:30.970 14:51:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1529284 00:35:30.971 14:51:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:35:30.971 14:51:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:30.971 14:51:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1529284 00:35:31.229 14:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:31.229 14:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:31.229 14:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1529284' 00:35:31.229 killing process with pid 1529284 00:35:31.229 14:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1529284 00:35:31.229 Received shutdown signal, test time was about 2.000000 seconds 00:35:31.229 00:35:31.229 Latency(us) 00:35:31.229 [2024-11-02T13:51:23.284Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:31.229 
[2024-11-02T13:51:23.284Z] =================================================================================================================== 00:35:31.229 [2024-11-02T13:51:23.284Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:31.229 14:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1529284 00:35:31.229 14:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1527710 00:35:31.229 14:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1527710 ']' 00:35:31.229 14:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1527710 00:35:31.229 14:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:35:31.229 14:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:31.229 14:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1527710 00:35:31.488 14:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:31.488 14:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:31.488 14:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1527710' 00:35:31.488 killing process with pid 1527710 00:35:31.488 14:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1527710 00:35:31.488 14:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1527710 00:35:31.746 00:35:31.746 real 0m15.748s 00:35:31.746 user 0m31.348s 00:35:31.746 sys 0m4.333s 00:35:31.746 14:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:31.746 14:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:31.746 ************************************ 00:35:31.746 END TEST nvmf_digest_error 00:35:31.746 ************************************ 00:35:31.746 14:51:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:35:31.746 14:51:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:35:31.746 14:51:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@512 -- # nvmfcleanup 00:35:31.746 14:51:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:35:31.746 14:51:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:31.746 14:51:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:35:31.746 14:51:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:31.746 14:51:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:31.746 rmmod nvme_tcp 00:35:31.746 rmmod nvme_fabrics 00:35:31.746 rmmod nvme_keyring 00:35:31.746 14:51:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:31.746 14:51:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:35:31.746 14:51:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:35:31.746 14:51:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@513 -- # '[' -n 1527710 ']' 00:35:31.746 14:51:23 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@514 -- # killprocess 1527710 00:35:31.746 14:51:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 1527710 ']' 00:35:31.746 14:51:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 1527710 00:35:31.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1527710) - No such process 00:35:31.746 14:51:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 1527710 is not found' 00:35:31.746 Process with pid 1527710 is not found 00:35:31.746 14:51:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:35:31.746 14:51:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:35:31.746 14:51:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:35:31.746 14:51:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:35:31.746 14:51:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # iptables-save 00:35:31.746 14:51:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:35:31.746 14:51:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # iptables-restore 00:35:31.746 14:51:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:31.746 14:51:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:31.746 14:51:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:31.746 14:51:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:31.746 14:51:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:33.648 14:51:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:33.648 00:35:33.648 real 0m36.008s 00:35:33.648 user 1m3.430s 00:35:33.648 sys 0m10.209s 00:35:33.648 14:51:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:33.648 14:51:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:33.648 ************************************ 00:35:33.648 END TEST nvmf_digest 00:35:33.648 ************************************ 00:35:33.907 14:51:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:35:33.907 14:51:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:35:33.907 14:51:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:35:33.907 14:51:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:35:33.907 14:51:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:35:33.907 14:51:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:33.907 14:51:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.907 ************************************ 00:35:33.907 START TEST nvmf_bdevperf 00:35:33.907 ************************************ 00:35:33.907 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:35:33.907 * Looking for test storage... 
00:35:33.907 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:33.907 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:35:33.907 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # lcov --version 00:35:33.907 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:35:33.907 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:35:33.907 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:33.907 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:33.907 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:33.907 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:35:33.907 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:35:33.907 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:35:33.907 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:35:33.907 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:35:33.907 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:35:33.907 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:35:33.907 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:33.907 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:35:33.907 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:35:33.907 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:33.907 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:33.907 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:35:33.907 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:35:33.907 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:33.907 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:35:33.907 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:35:33.907 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:35:33.907 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:35:33.907 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:33.907 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:35:33.907 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:35:33.907 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:33.907 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:33.907 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:35:33.907 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:33.907 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:35:33.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:33.907 --rc genhtml_branch_coverage=1 00:35:33.907 --rc genhtml_function_coverage=1 00:35:33.907 --rc genhtml_legend=1 00:35:33.907 --rc geninfo_all_blocks=1 00:35:33.907 --rc geninfo_unexecuted_blocks=1 00:35:33.907 00:35:33.907 ' 00:35:33.907 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:35:33.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:33.907 --rc genhtml_branch_coverage=1 00:35:33.907 --rc genhtml_function_coverage=1 00:35:33.907 --rc genhtml_legend=1 00:35:33.907 --rc geninfo_all_blocks=1 00:35:33.907 --rc geninfo_unexecuted_blocks=1 00:35:33.907 00:35:33.907 ' 00:35:33.907 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:35:33.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:33.907 --rc genhtml_branch_coverage=1 00:35:33.907 --rc genhtml_function_coverage=1 00:35:33.907 --rc genhtml_legend=1 00:35:33.907 --rc geninfo_all_blocks=1 00:35:33.907 --rc geninfo_unexecuted_blocks=1 00:35:33.907 00:35:33.907 ' 00:35:33.907 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:35:33.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:33.907 --rc genhtml_branch_coverage=1 00:35:33.907 --rc genhtml_function_coverage=1 00:35:33.907 --rc genhtml_legend=1 00:35:33.907 --rc geninfo_all_blocks=1 00:35:33.907 --rc geninfo_unexecuted_blocks=1 00:35:33.907 00:35:33.907 ' 00:35:33.907 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:33.907 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:35:33.907 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:33.907 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:33.907 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:33.907 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:33.907 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:33.907 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:33.907 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:33.907 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:33.907 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:33.907 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:33.907 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:33.907 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:33.907 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:33.907 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:33.907 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:33.907 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:33.907 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:33.907 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:35:33.907 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:33.907 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:33.907 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:33.907 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:33.907 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:33.907 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:33.908 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:35:33.908 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:33.908 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:35:33.908 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:33.908 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:33.908 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:33.908 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:33.908 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:33.908 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:33.908 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:33.908 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:33.908 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:33.908 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:33.908 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:33.908 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:33.908 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:35:33.908 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@465 -- # '[' -z tcp ']' 00:35:33.908 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:33.908 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@472 -- # prepare_net_devs 00:35:33.908 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:35:33.908 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:35:33.908 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:33.908 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:33.908 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:33.908 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:35:33.908 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:35:33.908 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:35:33.908 14:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:36.440 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:36.440 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:35:36.440 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:36.440 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:36.440 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:36.440 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:36.440 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:36.440 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:35:36.440 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:36.440 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:35:36.440 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:35:36.440 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:35:36.440 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:35:36.440 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:35:36.440 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:35:36.440 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:36.440 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:36.440 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:36.440 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:36.440 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:36.440 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:36.440 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:36.440 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:36.440 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:36.440 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:36.440 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:36.440 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:35:36.440 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:35:36.440 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:35:36.441 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:35:36.441 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:35:36.441 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:35:36.441 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:35:36.441 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:36.441 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:36.441 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:35:36.441 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:35:36.441 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:36.441 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:36.441 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:35:36.441 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:35:36.441 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:36.441 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:36.441 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:35:36.441 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:35:36.441 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:36.441 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:36.441 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:35:36.441 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:35:36.441 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:35:36.441 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:35:36.441 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:35:36.441 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:36.441 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:35:36.441 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:36.441 
14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:35:36.441 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:35:36.441 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:36.441 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:36.441 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:36.441 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:35:36.441 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:35:36.441 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:36.441 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:35:36.441 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:36.441 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:35:36.441 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:35:36.441 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:36.441 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:36.441 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:36.441 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:35:36.441 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:35:36.441 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # is_hw=yes 00:35:36.441 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:35:36.441 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:35:36.441 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:35:36.441 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:36.441 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:36.441 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:36.441 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:36.441 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:36.441 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:36.441 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:36.441 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:36.441 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:36.441 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:36.441 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:36.441 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:36.441 14:51:27 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:36.441 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:36.441 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:36.441 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:36.441 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:36.441 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:36.441 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:36.441 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:36.441 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:36.441 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:36.441 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:36.441 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:36.441 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:35:36.441 00:35:36.441 --- 10.0.0.2 ping statistics --- 00:35:36.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:36.441 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:35:36.441 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:36.441 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:36.441 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:35:36.441 00:35:36.441 --- 10.0.0.1 ping statistics --- 00:35:36.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:36.441 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:35:36.441 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:36.441 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # return 0 00:35:36.441 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:35:36.441 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:36.441 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:35:36.441 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:35:36.441 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:36.441 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:35:36.441 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:35:36.441 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:35:36.441 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:35:36.441 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:35:36.441 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:36.441 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:36.441 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # nvmfpid=1531684 00:35:36.441 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:35:36.441 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # waitforlisten 1531684 00:35:36.441 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 1531684 ']' 00:35:36.441 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:36.441 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:36.441 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:36.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:36.441 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:36.441 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:36.441 [2024-11-02 14:51:28.349197] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
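The trace above assembles the single-host TCP test bed before the target starts: the first detected e810 port (cvl_0_0) is moved into the network namespace cvl_0_0_ns_spdk and addressed as 10.0.0.2 (target side), the second port (cvl_0_1) stays in the default namespace as 10.0.0.1 (initiator side), an iptables rule opens TCP port 4420, and both directions are ping-checked before nvmf_tgt is launched inside the namespace. Condensed into a standalone sketch, using the interface names and addresses this particular run detected (the nvmf_tgt path is shortened to the in-tree build, and the trailing '&' stands in for the harness's own waitforlisten handling):

# target-side port gets its own namespace; initiator-side port stays in the root namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # accept NVMe/TCP on the initiator port
ping -c 1 10.0.0.2                                                 # initiator -> target reachability check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator reachability check
# the target application then runs inside the namespace:
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &

Everything else in the sketch is lifted directly from the trace; the iptables comment tag added by the harness's ipts wrapper and the initial address flushes are omitted for brevity.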
00:35:36.441 [2024-11-02 14:51:28.349325] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:36.441 [2024-11-02 14:51:28.423435] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:36.700 [2024-11-02 14:51:28.510010] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:36.700 [2024-11-02 14:51:28.510077] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:36.700 [2024-11-02 14:51:28.510101] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:36.700 [2024-11-02 14:51:28.510112] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:36.700 [2024-11-02 14:51:28.510121] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:36.700 [2024-11-02 14:51:28.510171] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:35:36.700 [2024-11-02 14:51:28.510226] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:35:36.700 [2024-11-02 14:51:28.510229] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:35:36.700 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:36.700 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:35:36.700 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:35:36.700 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:36.700 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:36.700 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:36.700 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:36.700 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.700 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:36.700 [2024-11-02 14:51:28.641226] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:36.700 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.700 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:36.700 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.700 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:36.700 Malloc0 00:35:36.700 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.700 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:36.700 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.700 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:36.700 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
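With the target listening on /var/tmp/spdk.sock, the harness configures it over RPC: a TCP transport (nvmf_create_transport -t tcp -o -u 8192), a 64 MB malloc bdev with 512-byte blocks, and subsystem nqn.2016-06.io.spdk:cnode1; the namespace and listener calls follow in the next stretch of the trace. Issued directly with scripts/rpc.py instead of the harness's rpc_cmd wrapper (a sketch, assuming the default /var/tmp/spdk.sock RPC socket), the whole target configuration amounts to:

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The method names and arguments are taken verbatim from the rpc_cmd calls in the trace; only the invocation style differs.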
00:35:36.700 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:36.700 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.700 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:36.700 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.700 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:36.700 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.700 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:36.700 [2024-11-02 14:51:28.699498] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:36.700 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.700 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:35:36.700 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:35:36.700 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # config=() 00:35:36.700 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # local subsystem config 00:35:36.700 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:35:36.700 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:35:36.700 { 00:35:36.700 "params": { 00:35:36.700 "name": "Nvme$subsystem", 00:35:36.700 "trtype": "$TEST_TRANSPORT", 00:35:36.700 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:36.700 "adrfam": "ipv4", 00:35:36.700 "trsvcid": "$NVMF_PORT", 00:35:36.700 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:36.700 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:36.700 "hdgst": ${hdgst:-false}, 00:35:36.700 "ddgst": ${ddgst:-false} 00:35:36.700 }, 00:35:36.700 "method": "bdev_nvme_attach_controller" 00:35:36.700 } 00:35:36.700 EOF 00:35:36.700 )") 00:35:36.700 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@578 -- # cat 00:35:36.700 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # jq . 00:35:36.700 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@581 -- # IFS=, 00:35:36.700 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:35:36.700 "params": { 00:35:36.700 "name": "Nvme1", 00:35:36.700 "trtype": "tcp", 00:35:36.700 "traddr": "10.0.0.2", 00:35:36.700 "adrfam": "ipv4", 00:35:36.700 "trsvcid": "4420", 00:35:36.700 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:36.700 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:36.700 "hdgst": false, 00:35:36.700 "ddgst": false 00:35:36.700 }, 00:35:36.700 "method": "bdev_nvme_attach_controller" 00:35:36.700 }' 00:35:36.700 [2024-11-02 14:51:28.745896] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:35:36.700 [2024-11-02 14:51:28.745981] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1531789 ] 00:35:36.959 [2024-11-02 14:51:28.807456] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:36.959 [2024-11-02 14:51:28.893237] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:35:37.218 Running I/O for 1 seconds... 00:35:38.227 8060.00 IOPS, 31.48 MiB/s 00:35:38.227 Latency(us) 00:35:38.227 [2024-11-02T13:51:30.282Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:38.227 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:38.227 Verification LBA range: start 0x0 length 0x4000 00:35:38.227 Nvme1n1 : 1.01 8081.01 31.57 0.00 0.00 15775.72 3737.98 15340.28 00:35:38.227 [2024-11-02T13:51:30.282Z] =================================================================================================================== 00:35:38.227 [2024-11-02T13:51:30.282Z] Total : 8081.01 31.57 0.00 0.00 15775.72 3737.98 15340.28 00:35:38.486 14:51:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1531943 00:35:38.486 14:51:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:35:38.486 14:51:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:35:38.486 14:51:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:35:38.486 14:51:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # config=() 00:35:38.486 14:51:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # local subsystem config 00:35:38.486 14:51:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:35:38.486 14:51:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:35:38.486 { 00:35:38.486 "params": { 00:35:38.486 "name": "Nvme$subsystem", 00:35:38.486 "trtype": "$TEST_TRANSPORT", 00:35:38.486 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:38.486 "adrfam": "ipv4", 00:35:38.486 "trsvcid": "$NVMF_PORT", 00:35:38.486 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:38.486 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:38.486 "hdgst": ${hdgst:-false}, 00:35:38.486 "ddgst": ${ddgst:-false} 00:35:38.486 }, 00:35:38.486 "method": "bdev_nvme_attach_controller" 00:35:38.486 } 00:35:38.486 EOF 00:35:38.486 )") 00:35:38.486 14:51:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@578 -- # cat 00:35:38.486 14:51:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # jq . 
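On the initiator side, bdevperf is fed a JSON config through a /dev/fd descriptor; gen_nvmf_target_json emits one bdev_nvme_attach_controller entry per subsystem, and the object printed in the trace is exactly that entry for Nvme1. The first run used -q 128 -o 4096 -w verify -t 1 (queue depth 128, 4 KiB I/Os, verify workload, 1 second) and reported ~8081 IOPS; the second run, launched next with -t 15 -f, keeps I/O going while the target is killed out from under it (see the kill -9 further down). Written out as an ordinary file rather than a pipe, an equivalent standalone invocation looks roughly like this; the surrounding "subsystems"/"bdev" envelope is the usual SPDK JSON-config layout and is assumed here rather than shown verbatim in the trace, and the bdevperf path is shortened to the in-tree build:

cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
./build/examples/bdevperf --json /tmp/bdevperf_nvme.json -q 128 -o 4096 -w verify -t 1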
00:35:38.486 14:51:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@581 -- # IFS=, 00:35:38.486 14:51:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:35:38.486 "params": { 00:35:38.486 "name": "Nvme1", 00:35:38.486 "trtype": "tcp", 00:35:38.486 "traddr": "10.0.0.2", 00:35:38.486 "adrfam": "ipv4", 00:35:38.486 "trsvcid": "4420", 00:35:38.486 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:38.486 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:38.486 "hdgst": false, 00:35:38.486 "ddgst": false 00:35:38.486 }, 00:35:38.486 "method": "bdev_nvme_attach_controller" 00:35:38.486 }' 00:35:38.486 [2024-11-02 14:51:30.479336] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:35:38.486 [2024-11-02 14:51:30.479433] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1531943 ] 00:35:38.744 [2024-11-02 14:51:30.543372] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:38.744 [2024-11-02 14:51:30.630619] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:35:39.002 Running I/O for 15 seconds... 00:35:41.310 8368.00 IOPS, 32.69 MiB/s [2024-11-02T13:51:33.626Z] 8416.50 IOPS, 32.88 MiB/s [2024-11-02T13:51:33.626Z] 14:51:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1531684 00:35:41.571 14:51:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:35:41.571 [2024-11-02 14:51:33.450021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:38544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.571 [2024-11-02 14:51:33.450075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.571 [2024-11-02 14:51:33.450110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:38552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.571 [2024-11-02 14:51:33.450130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.571 [2024-11-02 14:51:33.450149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.571 [2024-11-02 14:51:33.450166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.571 [2024-11-02 14:51:33.450186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:38568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.571 [2024-11-02 14:51:33.450203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.571 [2024-11-02 14:51:33.450224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:38576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.571 [2024-11-02 14:51:33.450242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.571 [2024-11-02 14:51:33.450269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:38584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.571 [2024-11-02 
14:51:33.450315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.571 [2024-11-02 14:51:33.450332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:38592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.571 [2024-11-02 14:51:33.450347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.571 [2024-11-02 14:51:33.450364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.571 [2024-11-02 14:51:33.450381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.571 [2024-11-02 14:51:33.450410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:38608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.571 [2024-11-02 14:51:33.450428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.571 [2024-11-02 14:51:33.450445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:38616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.571 [2024-11-02 14:51:33.450459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.571 [2024-11-02 14:51:33.450477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:38624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.571 [2024-11-02 14:51:33.450492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.571 [2024-11-02 14:51:33.450515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:38632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.571 [2024-11-02 14:51:33.450530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.571 [2024-11-02 14:51:33.450563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.571 [2024-11-02 14:51:33.450581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.571 [2024-11-02 14:51:33.450599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:38648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.571 [2024-11-02 14:51:33.450617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.571 [2024-11-02 14:51:33.450636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:38656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.571 [2024-11-02 14:51:33.450654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.571 [2024-11-02 14:51:33.450673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:38664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.571 [2024-11-02 14:51:33.450690] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.571 [2024-11-02 14:51:33.450709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:38672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.571 [2024-11-02 14:51:33.450725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.571 [2024-11-02 14:51:33.450743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:38680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.571 [2024-11-02 14:51:33.450759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.571 [2024-11-02 14:51:33.450776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:38688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.571 [2024-11-02 14:51:33.450791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.571 [2024-11-02 14:51:33.450808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.571 [2024-11-02 14:51:33.450823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.571 [2024-11-02 14:51:33.450840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:38704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.571 [2024-11-02 14:51:33.450861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.571 [2024-11-02 14:51:33.450878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:38712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.571 [2024-11-02 14:51:33.450893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.571 [2024-11-02 14:51:33.450910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:38720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.571 [2024-11-02 14:51:33.450925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.571 [2024-11-02 14:51:33.450941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:38728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.571 [2024-11-02 14:51:33.450957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.571 [2024-11-02 14:51:33.450974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:38096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.571 [2024-11-02 14:51:33.450989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.571 [2024-11-02 14:51:33.451006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:38104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.571 [2024-11-02 14:51:33.451021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.571 [2024-11-02 14:51:33.451037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:38112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.571 [2024-11-02 14:51:33.451053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.571 [2024-11-02 14:51:33.451071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:38120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.571 [2024-11-02 14:51:33.451086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.571 [2024-11-02 14:51:33.451103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:38128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.571 [2024-11-02 14:51:33.451117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.571 [2024-11-02 14:51:33.451134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:38136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.571 [2024-11-02 14:51:33.451149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.571 [2024-11-02 14:51:33.451165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:38144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.572 [2024-11-02 14:51:33.451180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.572 [2024-11-02 14:51:33.451197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:38152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.572 [2024-11-02 14:51:33.451211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.572 [2024-11-02 14:51:33.451228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:38736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.572 [2024-11-02 14:51:33.451242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.572 [2024-11-02 14:51:33.451274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:38744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.572 [2024-11-02 14:51:33.451317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.572 [2024-11-02 14:51:33.451333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:38752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.572 [2024-11-02 14:51:33.451348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.572 [2024-11-02 14:51:33.451363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.572 [2024-11-02 14:51:33.451376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:35:41.572 [2024-11-02 14:51:33.451392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:38768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.572 [2024-11-02 14:51:33.451405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.572 [2024-11-02 14:51:33.451420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:38776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.572 [2024-11-02 14:51:33.451434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.572 [2024-11-02 14:51:33.451449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:38784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.572 [2024-11-02 14:51:33.451462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.572 [2024-11-02 14:51:33.451477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:38792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.572 [2024-11-02 14:51:33.451490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.572 [2024-11-02 14:51:33.451515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:38800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.572 [2024-11-02 14:51:33.451529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.572 [2024-11-02 14:51:33.451561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.572 [2024-11-02 14:51:33.451576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.572 [2024-11-02 14:51:33.451593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:38816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.572 [2024-11-02 14:51:33.451608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.572 [2024-11-02 14:51:33.451624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:38824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.572 [2024-11-02 14:51:33.451639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.572 [2024-11-02 14:51:33.451655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:38832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.572 [2024-11-02 14:51:33.451670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.572 [2024-11-02 14:51:33.451687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:38840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.572 [2024-11-02 14:51:33.451702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.572 [2024-11-02 
14:51:33.451722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:38848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.572 [2024-11-02 14:51:33.451738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.572 [2024-11-02 14:51:33.451756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.572 [2024-11-02 14:51:33.451772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.572 [2024-11-02 14:51:33.451788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:38864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.572 [2024-11-02 14:51:33.451803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.572 [2024-11-02 14:51:33.451820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:38160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.572 [2024-11-02 14:51:33.451835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.572 [2024-11-02 14:51:33.451853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:38168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.572 [2024-11-02 14:51:33.451868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.572 [2024-11-02 14:51:33.451884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:38176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.572 [2024-11-02 14:51:33.451899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.572 [2024-11-02 14:51:33.451916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:38184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.572 [2024-11-02 14:51:33.451932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.572 [2024-11-02 14:51:33.451948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:38192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.572 [2024-11-02 14:51:33.451963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.572 [2024-11-02 14:51:33.451980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:38200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.572 [2024-11-02 14:51:33.451994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.572 [2024-11-02 14:51:33.452011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:38208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.572 [2024-11-02 14:51:33.452026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.572 [2024-11-02 14:51:33.452043] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:38216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.572 [2024-11-02 14:51:33.452058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.572 [2024-11-02 14:51:33.452075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:38224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.572 [2024-11-02 14:51:33.452090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.572 [2024-11-02 14:51:33.452107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:38232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.572 [2024-11-02 14:51:33.452127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.572 [2024-11-02 14:51:33.452144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:38240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.572 [2024-11-02 14:51:33.452159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.572 [2024-11-02 14:51:33.452176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:38248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.572 [2024-11-02 14:51:33.452191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.572 [2024-11-02 14:51:33.452208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:38256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.572 [2024-11-02 14:51:33.452223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.572 [2024-11-02 14:51:33.452239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:38264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.572 [2024-11-02 14:51:33.452254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.572 [2024-11-02 14:51:33.452282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:38272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.572 [2024-11-02 14:51:33.452311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.572 [2024-11-02 14:51:33.452327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:38280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.572 [2024-11-02 14:51:33.452341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.572 [2024-11-02 14:51:33.452356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:38288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.572 [2024-11-02 14:51:33.452370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.572 [2024-11-02 14:51:33.452385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:4 nsid:1 lba:38296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.572 [2024-11-02 14:51:33.452399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.572 [2024-11-02 14:51:33.452414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:38304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.572 [2024-11-02 14:51:33.452428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.572 [2024-11-02 14:51:33.452443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:38312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.572 [2024-11-02 14:51:33.452457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.572 [2024-11-02 14:51:33.452472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:38320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.572 [2024-11-02 14:51:33.452486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.572 [2024-11-02 14:51:33.452502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:38328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.573 [2024-11-02 14:51:33.452515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.573 [2024-11-02 14:51:33.452555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:38336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.573 [2024-11-02 14:51:33.452583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.573 [2024-11-02 14:51:33.452601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:38344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.573 [2024-11-02 14:51:33.452616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.573 [2024-11-02 14:51:33.452633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:38872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.573 [2024-11-02 14:51:33.452648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.573 [2024-11-02 14:51:33.452665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:38880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.573 [2024-11-02 14:51:33.452680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.573 [2024-11-02 14:51:33.452697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:38888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.573 [2024-11-02 14:51:33.452712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.573 [2024-11-02 14:51:33.452729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38896 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.573 [2024-11-02 14:51:33.452745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.573 [2024-11-02 14:51:33.452762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:38904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.573 [2024-11-02 14:51:33.452776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.573 [2024-11-02 14:51:33.452793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:38912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.573 [2024-11-02 14:51:33.452808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.573 [2024-11-02 14:51:33.452824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:38920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.573 [2024-11-02 14:51:33.452840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.573 [2024-11-02 14:51:33.452857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:38928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.573 [2024-11-02 14:51:33.452872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.573 [2024-11-02 14:51:33.452889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:38936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.573 [2024-11-02 14:51:33.452904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.573 [2024-11-02 14:51:33.452921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:38944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.573 [2024-11-02 14:51:33.452936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.573 [2024-11-02 14:51:33.452953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:38952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.573 [2024-11-02 14:51:33.452972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.573 [2024-11-02 14:51:33.452990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:38960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.573 [2024-11-02 14:51:33.453005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.573 [2024-11-02 14:51:33.453021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:38968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.573 [2024-11-02 14:51:33.453037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.573 [2024-11-02 14:51:33.453053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:38976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.573 
[2024-11-02 14:51:33.453069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.573 [2024-11-02 14:51:33.453086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:38984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.573 [2024-11-02 14:51:33.453101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.573 [2024-11-02 14:51:33.453118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:38992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.573 [2024-11-02 14:51:33.453133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.573 [2024-11-02 14:51:33.453149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:39000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.573 [2024-11-02 14:51:33.453165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.573 [2024-11-02 14:51:33.453181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:39008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.573 [2024-11-02 14:51:33.453196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.573 [2024-11-02 14:51:33.453213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:39016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.573 [2024-11-02 14:51:33.453228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.573 [2024-11-02 14:51:33.453245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:39024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.573 [2024-11-02 14:51:33.453271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.573 [2024-11-02 14:51:33.453323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:39032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.573 [2024-11-02 14:51:33.453340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.573 [2024-11-02 14:51:33.453355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:39040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.573 [2024-11-02 14:51:33.453369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.573 [2024-11-02 14:51:33.453385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:39048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.573 [2024-11-02 14:51:33.453398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.573 [2024-11-02 14:51:33.453417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:39056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.573 [2024-11-02 14:51:33.453431] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.573 [2024-11-02 14:51:33.453447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:39064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.573 [2024-11-02 14:51:33.453461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.573 [2024-11-02 14:51:33.453476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:39072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.573 [2024-11-02 14:51:33.453489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.573 [2024-11-02 14:51:33.453504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:39080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.573 [2024-11-02 14:51:33.453527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.573 [2024-11-02 14:51:33.453557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:39088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.573 [2024-11-02 14:51:33.453573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.573 [2024-11-02 14:51:33.453590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:39096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.573 [2024-11-02 14:51:33.453605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.573 [2024-11-02 14:51:33.453621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:39104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.573 [2024-11-02 14:51:33.453637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.573 [2024-11-02 14:51:33.453653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.573 [2024-11-02 14:51:33.453668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.573 [2024-11-02 14:51:33.453684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:38352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.573 [2024-11-02 14:51:33.453699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.573 [2024-11-02 14:51:33.453717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:38360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.573 [2024-11-02 14:51:33.453732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.573 [2024-11-02 14:51:33.453749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:38368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.573 [2024-11-02 14:51:33.453764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.573 [2024-11-02 14:51:33.453781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:38376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.573 [2024-11-02 14:51:33.453796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.573 [2024-11-02 14:51:33.453812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:38384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.573 [2024-11-02 14:51:33.453827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.573 [2024-11-02 14:51:33.453847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:38392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.573 [2024-11-02 14:51:33.453863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.573 [2024-11-02 14:51:33.453880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:38400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.574 [2024-11-02 14:51:33.453895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.574 [2024-11-02 14:51:33.453911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:38408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.574 [2024-11-02 14:51:33.453926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.574 [2024-11-02 14:51:33.453943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:38416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.574 [2024-11-02 14:51:33.453958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.574 [2024-11-02 14:51:33.453982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:38424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.574 [2024-11-02 14:51:33.453999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.574 [2024-11-02 14:51:33.454016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:38432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.574 [2024-11-02 14:51:33.454031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.574 [2024-11-02 14:51:33.454048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:38440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.574 [2024-11-02 14:51:33.454064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.574 [2024-11-02 14:51:33.454080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:38448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.574 [2024-11-02 14:51:33.454095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:35:41.574 [2024-11-02 14:51:33.454112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:38456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.574 [2024-11-02 14:51:33.454127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.574 [2024-11-02 14:51:33.454143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:38464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.574 [2024-11-02 14:51:33.454158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.574 [2024-11-02 14:51:33.454175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:38472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.574 [2024-11-02 14:51:33.454190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.574 [2024-11-02 14:51:33.454206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:38480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.574 [2024-11-02 14:51:33.454221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.574 [2024-11-02 14:51:33.454238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:38488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.574 [2024-11-02 14:51:33.454265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.574 [2024-11-02 14:51:33.454310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:38496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.574 [2024-11-02 14:51:33.454324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.574 [2024-11-02 14:51:33.454340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:38504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.574 [2024-11-02 14:51:33.454354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.574 [2024-11-02 14:51:33.454369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:38512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.574 [2024-11-02 14:51:33.454383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.574 [2024-11-02 14:51:33.454398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:38520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.574 [2024-11-02 14:51:33.454411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.574 [2024-11-02 14:51:33.454427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:38528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.574 [2024-11-02 14:51:33.454440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.574 
[2024-11-02 14:51:33.454455] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1422980 is same with the state(6) to be set 00:35:41.574 [2024-11-02 14:51:33.454472] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:41.574 [2024-11-02 14:51:33.454484] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:41.574 [2024-11-02 14:51:33.454496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:38536 len:8 PRP1 0x0 PRP2 0x0 00:35:41.574 [2024-11-02 14:51:33.454514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.574 [2024-11-02 14:51:33.454596] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1422980 was disconnected and freed. reset controller. 00:35:41.574 [2024-11-02 14:51:33.454683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:41.574 [2024-11-02 14:51:33.454707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.574 [2024-11-02 14:51:33.454725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:41.574 [2024-11-02 14:51:33.454740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.574 [2024-11-02 14:51:33.454756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:41.574 [2024-11-02 14:51:33.454770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.574 [2024-11-02 14:51:33.454785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:41.574 [2024-11-02 14:51:33.454799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.574 [2024-11-02 14:51:33.454813] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:41.574 [2024-11-02 14:51:33.458668] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.574 [2024-11-02 14:51:33.458711] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:41.574 [2024-11-02 14:51:33.459390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.574 [2024-11-02 14:51:33.459419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:41.574 [2024-11-02 14:51:33.459436] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:41.574 [2024-11-02 14:51:33.459681] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:41.574 [2024-11-02 14:51:33.459923] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.574 [2024-11-02 
14:51:33.459946] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.574 [2024-11-02 14:51:33.459965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.574 [2024-11-02 14:51:33.463511] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:41.574 [2024-11-02 14:51:33.472738] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.574 [2024-11-02 14:51:33.473175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.574 [2024-11-02 14:51:33.473209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:41.574 [2024-11-02 14:51:33.473228] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:41.574 [2024-11-02 14:51:33.473479] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:41.574 [2024-11-02 14:51:33.473722] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.574 [2024-11-02 14:51:33.473746] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.574 [2024-11-02 14:51:33.473761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.574 [2024-11-02 14:51:33.477316] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:41.574 [2024-11-02 14:51:33.486735] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.574 [2024-11-02 14:51:33.487246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.574 [2024-11-02 14:51:33.487286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:41.574 [2024-11-02 14:51:33.487321] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:41.574 [2024-11-02 14:51:33.487570] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:41.574 [2024-11-02 14:51:33.487827] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.574 [2024-11-02 14:51:33.487852] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.574 [2024-11-02 14:51:33.487868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.574 [2024-11-02 14:51:33.491433] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
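Reader's note on the completions dumped above: every aborted WRITE/READ on qid:1 reports status "(00/08)", i.e. status code type 0x0 (generic command status) and status code 0x08 (Command Aborted due to SQ Deletion), which is what outstanding commands return once the submission queue is torn down during the disconnect; dnr:0 means the "do not retry" bit is clear, so the host is allowed to retry after the reset. As a hedged, illustrative aid for reading these columns (the struct and function names below are mine, not SPDK's), this small C snippet decodes the 16-bit completion status word the same way the "(sct/sc) ... p:.. m:.. dnr:.." fields are printed:

```c
/* Minimal decoder for the NVMe completion status word (CQE DW3 bits 31:16).
 * Illustrative only; names are made up, field layout per the NVMe base spec. */
#include <stdint.h>
#include <stdio.h>

struct nvme_status_fields {
    uint8_t p;    /* phase tag        (bit 0)     */
    uint8_t sc;   /* status code      (bits 8:1)  */
    uint8_t sct;  /* status code type (bits 11:9) */
    uint8_t m;    /* more             (bit 14)    */
    uint8_t dnr;  /* do not retry     (bit 15)    */
};

static struct nvme_status_fields decode_status(uint16_t status)
{
    struct nvme_status_fields f = {
        .p   = status & 0x1,
        .sc  = (status >> 1) & 0xff,
        .sct = (status >> 9) & 0x7,
        .m   = (status >> 14) & 0x1,
        .dnr = (status >> 15) & 0x1,
    };
    return f;
}

int main(void)
{
    /* SCT=0x0 (generic), SC=0x08 (Command Aborted due to SQ Deletion):
     * the "(00/08) ... p:0 m:0 dnr:0" case seen throughout the dump. */
    uint16_t status = (0x0 << 9) | (0x08 << 1);
    struct nvme_status_fields f = decode_status(status);
    printf("(%02x/%02x) p:%u m:%u dnr:%u\n", f.sct, f.sc, f.p, f.m, f.dnr);
    return 0;
}
```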
00:35:41.574 [2024-11-02 14:51:33.500646] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.574 [2024-11-02 14:51:33.501091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.574 [2024-11-02 14:51:33.501130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:41.574 [2024-11-02 14:51:33.501149] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:41.574 [2024-11-02 14:51:33.501403] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:41.574 [2024-11-02 14:51:33.501645] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.574 [2024-11-02 14:51:33.501670] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.574 [2024-11-02 14:51:33.501686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.574 [2024-11-02 14:51:33.505232] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:41.574 [2024-11-02 14:51:33.514661] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.575 [2024-11-02 14:51:33.515099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.575 [2024-11-02 14:51:33.515132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:41.575 [2024-11-02 14:51:33.515150] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:41.575 [2024-11-02 14:51:33.515401] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:41.575 [2024-11-02 14:51:33.515643] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.575 [2024-11-02 14:51:33.515668] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.575 [2024-11-02 14:51:33.515684] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.575 [2024-11-02 14:51:33.519229] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
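Reader's note on the connect failures: each reconnect attempt dies in posix_sock_create with "connect() failed, errno = 111"; on Linux errno 111 is ECONNREFUSED, meaning nothing is accepting on 10.0.0.2:4420 while the target side is down, so nvme_tcp_qpair_connect_sock never gets a TCP socket to begin with. A standalone probe like the hedged sketch below (plain POSIX sockets, not SPDK code; the host and port are taken from the log) reproduces the same errno when the listener is absent:

```c
/* Hypothetical standalone probe: try a TCP connect to the address the test
 * is dialing and report errno. Not part of the SPDK test itself. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* When the port actively refuses (no NVMe-oF/TCP target listening),
         * this typically prints: connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    } else {
        printf("connected; a listener is back on 10.0.0.2:4420\n");
    }
    close(fd);
    return 0;
}
```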
00:35:41.575 [2024-11-02 14:51:33.528657] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.575 [2024-11-02 14:51:33.529072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.575 [2024-11-02 14:51:33.529105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:41.575 [2024-11-02 14:51:33.529124] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:41.575 [2024-11-02 14:51:33.529375] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:41.575 [2024-11-02 14:51:33.529617] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.575 [2024-11-02 14:51:33.529642] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.575 [2024-11-02 14:51:33.529658] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.575 [2024-11-02 14:51:33.533208] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:41.575 [2024-11-02 14:51:33.542658] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.575 [2024-11-02 14:51:33.543074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.575 [2024-11-02 14:51:33.543107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:41.575 [2024-11-02 14:51:33.543126] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:41.575 [2024-11-02 14:51:33.543379] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:41.575 [2024-11-02 14:51:33.543627] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.575 [2024-11-02 14:51:33.543653] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.575 [2024-11-02 14:51:33.543669] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.575 [2024-11-02 14:51:33.547218] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:41.575 [2024-11-02 14:51:33.556641] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.575 [2024-11-02 14:51:33.557046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.575 [2024-11-02 14:51:33.557079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:41.575 [2024-11-02 14:51:33.557096] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:41.575 [2024-11-02 14:51:33.557350] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:41.575 [2024-11-02 14:51:33.557605] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.575 [2024-11-02 14:51:33.557631] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.575 [2024-11-02 14:51:33.557648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.575 [2024-11-02 14:51:33.561198] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:41.575 [2024-11-02 14:51:33.570624] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.575 [2024-11-02 14:51:33.571059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.575 [2024-11-02 14:51:33.571092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:41.575 [2024-11-02 14:51:33.571111] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:41.575 [2024-11-02 14:51:33.571363] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:41.575 [2024-11-02 14:51:33.571605] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.575 [2024-11-02 14:51:33.571630] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.575 [2024-11-02 14:51:33.571647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.575 [2024-11-02 14:51:33.575194] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:41.575 [2024-11-02 14:51:33.584619] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.575 [2024-11-02 14:51:33.585056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.575 [2024-11-02 14:51:33.585088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:41.575 [2024-11-02 14:51:33.585107] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:41.575 [2024-11-02 14:51:33.585359] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:41.575 [2024-11-02 14:51:33.585600] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.575 [2024-11-02 14:51:33.585626] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.575 [2024-11-02 14:51:33.585642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.575 [2024-11-02 14:51:33.589187] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:41.575 [2024-11-02 14:51:33.598631] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.575 [2024-11-02 14:51:33.599046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.575 [2024-11-02 14:51:33.599080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:41.575 [2024-11-02 14:51:33.599098] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:41.575 [2024-11-02 14:51:33.599348] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:41.575 [2024-11-02 14:51:33.599590] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.575 [2024-11-02 14:51:33.599615] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.575 [2024-11-02 14:51:33.599631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.575 [2024-11-02 14:51:33.603177] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:41.575 [2024-11-02 14:51:33.612607] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.575 [2024-11-02 14:51:33.613035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.575 [2024-11-02 14:51:33.613067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:41.575 [2024-11-02 14:51:33.613085] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:41.575 [2024-11-02 14:51:33.613339] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:41.575 [2024-11-02 14:51:33.613581] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.575 [2024-11-02 14:51:33.613606] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.575 [2024-11-02 14:51:33.613623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.575 [2024-11-02 14:51:33.617167] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:41.834 [2024-11-02 14:51:33.626661] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.834 [2024-11-02 14:51:33.627098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.834 [2024-11-02 14:51:33.627133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:41.834 [2024-11-02 14:51:33.627152] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:41.834 [2024-11-02 14:51:33.627405] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:41.834 [2024-11-02 14:51:33.627647] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.834 [2024-11-02 14:51:33.627672] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.834 [2024-11-02 14:51:33.627689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.834 [2024-11-02 14:51:33.631240] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:41.834 [2024-11-02 14:51:33.640553] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.834 [2024-11-02 14:51:33.641010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.834 [2024-11-02 14:51:33.641043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:41.834 [2024-11-02 14:51:33.641068] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:41.835 [2024-11-02 14:51:33.641322] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:41.835 [2024-11-02 14:51:33.641564] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.835 [2024-11-02 14:51:33.641590] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.835 [2024-11-02 14:51:33.641606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.835 [2024-11-02 14:51:33.645177] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:41.835 [2024-11-02 14:51:33.654408] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.835 [2024-11-02 14:51:33.654825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.835 [2024-11-02 14:51:33.654859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:41.835 [2024-11-02 14:51:33.654878] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:41.835 [2024-11-02 14:51:33.655117] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:41.835 [2024-11-02 14:51:33.655376] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.835 [2024-11-02 14:51:33.655402] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.835 [2024-11-02 14:51:33.655419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.835 [2024-11-02 14:51:33.658966] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:41.835 [2024-11-02 14:51:33.668396] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.835 [2024-11-02 14:51:33.668803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.835 [2024-11-02 14:51:33.668837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:41.835 [2024-11-02 14:51:33.668856] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:41.835 [2024-11-02 14:51:33.669094] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:41.835 [2024-11-02 14:51:33.669351] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.835 [2024-11-02 14:51:33.669378] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.835 [2024-11-02 14:51:33.669394] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.835 [2024-11-02 14:51:33.672944] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:41.835 [2024-11-02 14:51:33.682371] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.835 [2024-11-02 14:51:33.682818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.835 [2024-11-02 14:51:33.682850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:41.835 [2024-11-02 14:51:33.682869] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:41.835 [2024-11-02 14:51:33.683106] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:41.835 [2024-11-02 14:51:33.683364] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.835 [2024-11-02 14:51:33.683396] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.835 [2024-11-02 14:51:33.683413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.835 [2024-11-02 14:51:33.686961] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:41.835 [2024-11-02 14:51:33.696184] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.835 [2024-11-02 14:51:33.696623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.835 [2024-11-02 14:51:33.696656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:41.835 [2024-11-02 14:51:33.696675] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:41.835 [2024-11-02 14:51:33.696913] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:41.835 [2024-11-02 14:51:33.697155] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.835 [2024-11-02 14:51:33.697180] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.835 [2024-11-02 14:51:33.697196] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.835 [2024-11-02 14:51:33.700761] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:41.835 [2024-11-02 14:51:33.710190] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.835 [2024-11-02 14:51:33.710625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.835 [2024-11-02 14:51:33.710658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:41.835 [2024-11-02 14:51:33.710678] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:41.835 [2024-11-02 14:51:33.710916] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:41.835 [2024-11-02 14:51:33.711158] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.835 [2024-11-02 14:51:33.711183] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.835 [2024-11-02 14:51:33.711199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.835 [2024-11-02 14:51:33.714760] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:41.835 [2024-11-02 14:51:33.724184] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.835 [2024-11-02 14:51:33.724619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.835 [2024-11-02 14:51:33.724652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:41.835 [2024-11-02 14:51:33.724670] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:41.835 [2024-11-02 14:51:33.724908] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:41.835 [2024-11-02 14:51:33.725149] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.835 [2024-11-02 14:51:33.725174] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.835 [2024-11-02 14:51:33.725190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.835 [2024-11-02 14:51:33.728752] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:41.835 [2024-11-02 14:51:33.738178] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.835 [2024-11-02 14:51:33.738628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.835 [2024-11-02 14:51:33.738660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:41.835 [2024-11-02 14:51:33.738678] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:41.835 [2024-11-02 14:51:33.738916] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:41.835 [2024-11-02 14:51:33.739158] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.835 [2024-11-02 14:51:33.739182] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.835 [2024-11-02 14:51:33.739198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.835 [2024-11-02 14:51:33.742774] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:41.835 [2024-11-02 14:51:33.751998] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.835 [2024-11-02 14:51:33.752408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.835 [2024-11-02 14:51:33.752441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:41.835 [2024-11-02 14:51:33.752459] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:41.835 [2024-11-02 14:51:33.752697] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:41.835 [2024-11-02 14:51:33.752940] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.835 [2024-11-02 14:51:33.752963] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.835 [2024-11-02 14:51:33.752980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.835 [2024-11-02 14:51:33.756544] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:41.835 [2024-11-02 14:51:33.765986] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.835 [2024-11-02 14:51:33.766439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.835 [2024-11-02 14:51:33.766472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:41.835 [2024-11-02 14:51:33.766491] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:41.835 [2024-11-02 14:51:33.766729] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:41.835 [2024-11-02 14:51:33.766971] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.835 [2024-11-02 14:51:33.766995] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.835 [2024-11-02 14:51:33.767011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.835 [2024-11-02 14:51:33.770573] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:41.835 [2024-11-02 14:51:33.780000] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.835 [2024-11-02 14:51:33.780450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.835 [2024-11-02 14:51:33.780482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:41.835 [2024-11-02 14:51:33.780501] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:41.835 [2024-11-02 14:51:33.780745] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:41.836 [2024-11-02 14:51:33.780988] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.836 [2024-11-02 14:51:33.781011] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.836 [2024-11-02 14:51:33.781027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.836 [2024-11-02 14:51:33.784581] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:41.836 [2024-11-02 14:51:33.793992] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.836 [2024-11-02 14:51:33.794428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.836 [2024-11-02 14:51:33.794461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:41.836 [2024-11-02 14:51:33.794480] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:41.836 [2024-11-02 14:51:33.794717] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:41.836 [2024-11-02 14:51:33.794961] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.836 [2024-11-02 14:51:33.794985] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.836 [2024-11-02 14:51:33.795000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.836 [2024-11-02 14:51:33.798555] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:41.836 [2024-11-02 14:51:33.807961] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.836 [2024-11-02 14:51:33.808396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.836 [2024-11-02 14:51:33.808429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:41.836 [2024-11-02 14:51:33.808448] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:41.836 [2024-11-02 14:51:33.808686] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:41.836 [2024-11-02 14:51:33.808927] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.836 [2024-11-02 14:51:33.808952] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.836 [2024-11-02 14:51:33.808968] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.836 [2024-11-02 14:51:33.812529] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:41.836 [2024-11-02 14:51:33.821957] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.836 [2024-11-02 14:51:33.822383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.836 [2024-11-02 14:51:33.822415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:41.836 [2024-11-02 14:51:33.822434] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:41.836 [2024-11-02 14:51:33.822671] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:41.836 [2024-11-02 14:51:33.822912] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.836 [2024-11-02 14:51:33.822937] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.836 [2024-11-02 14:51:33.822959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.836 [2024-11-02 14:51:33.826520] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:41.836 [2024-11-02 14:51:33.835957] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.836 [2024-11-02 14:51:33.836406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.836 [2024-11-02 14:51:33.836439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:41.836 [2024-11-02 14:51:33.836458] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:41.836 [2024-11-02 14:51:33.836696] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:41.836 [2024-11-02 14:51:33.836937] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.836 [2024-11-02 14:51:33.836962] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.836 [2024-11-02 14:51:33.836978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.836 [2024-11-02 14:51:33.840532] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:41.836 [2024-11-02 14:51:33.849969] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.836 [2024-11-02 14:51:33.850425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.836 [2024-11-02 14:51:33.850458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:41.836 [2024-11-02 14:51:33.850477] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:41.836 [2024-11-02 14:51:33.850714] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:41.836 [2024-11-02 14:51:33.850955] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.836 [2024-11-02 14:51:33.850981] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.836 [2024-11-02 14:51:33.850997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.836 [2024-11-02 14:51:33.854565] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:41.836 [2024-11-02 14:51:33.863987] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.836 [2024-11-02 14:51:33.864426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.836 [2024-11-02 14:51:33.864461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:41.836 [2024-11-02 14:51:33.864479] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:41.836 [2024-11-02 14:51:33.864717] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:41.836 [2024-11-02 14:51:33.864958] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.836 [2024-11-02 14:51:33.864983] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.836 [2024-11-02 14:51:33.864999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.836 [2024-11-02 14:51:33.868562] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:41.836 [2024-11-02 14:51:33.877984] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.836 [2024-11-02 14:51:33.878398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.836 [2024-11-02 14:51:33.878437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:41.836 [2024-11-02 14:51:33.878456] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:41.836 [2024-11-02 14:51:33.878694] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:41.836 [2024-11-02 14:51:33.878937] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.836 [2024-11-02 14:51:33.878962] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.836 [2024-11-02 14:51:33.878979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.836 [2024-11-02 14:51:33.882543] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.095 [2024-11-02 14:51:33.892002] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.095 [2024-11-02 14:51:33.892429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.095 [2024-11-02 14:51:33.892466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.095 [2024-11-02 14:51:33.892486] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.095 [2024-11-02 14:51:33.892724] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.095 [2024-11-02 14:51:33.892966] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.095 [2024-11-02 14:51:33.892991] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.095 [2024-11-02 14:51:33.893007] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.095 [2024-11-02 14:51:33.896624] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.095 [2024-11-02 14:51:33.905840] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.095 [2024-11-02 14:51:33.906330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.095 [2024-11-02 14:51:33.906363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.095 [2024-11-02 14:51:33.906381] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.095 [2024-11-02 14:51:33.906620] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.095 [2024-11-02 14:51:33.906861] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.095 [2024-11-02 14:51:33.906886] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.095 [2024-11-02 14:51:33.906902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.095 [2024-11-02 14:51:33.910461] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.095 [2024-11-02 14:51:33.919670] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.095 [2024-11-02 14:51:33.920107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.096 [2024-11-02 14:51:33.920140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.096 [2024-11-02 14:51:33.920159] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.096 [2024-11-02 14:51:33.920410] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.096 [2024-11-02 14:51:33.920658] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.096 [2024-11-02 14:51:33.920684] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.096 [2024-11-02 14:51:33.920700] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.096 [2024-11-02 14:51:33.924247] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.096 [2024-11-02 14:51:33.933668] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.096 [2024-11-02 14:51:33.934072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.096 [2024-11-02 14:51:33.934104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.096 [2024-11-02 14:51:33.934122] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.096 [2024-11-02 14:51:33.934375] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.096 [2024-11-02 14:51:33.934617] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.096 [2024-11-02 14:51:33.934643] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.096 [2024-11-02 14:51:33.934660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.096 [2024-11-02 14:51:33.938209] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.096 7048.67 IOPS, 27.53 MiB/s [2024-11-02T13:51:34.151Z] [2024-11-02 14:51:33.949438] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.096 [2024-11-02 14:51:33.949875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.096 [2024-11-02 14:51:33.949908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.096 [2024-11-02 14:51:33.949927] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.096 [2024-11-02 14:51:33.950164] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.096 [2024-11-02 14:51:33.950418] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.096 [2024-11-02 14:51:33.950443] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.096 [2024-11-02 14:51:33.950460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.096 [2024-11-02 14:51:33.954022] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.096 [2024-11-02 14:51:33.963250] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.096 [2024-11-02 14:51:33.963702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.096 [2024-11-02 14:51:33.963734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.096 [2024-11-02 14:51:33.963753] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.096 [2024-11-02 14:51:33.963991] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.096 [2024-11-02 14:51:33.964235] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.096 [2024-11-02 14:51:33.964268] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.096 [2024-11-02 14:51:33.964298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.096 [2024-11-02 14:51:33.967856] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.096 [2024-11-02 14:51:33.977122] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.096 [2024-11-02 14:51:33.977530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.096 [2024-11-02 14:51:33.977563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.096 [2024-11-02 14:51:33.977582] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.096 [2024-11-02 14:51:33.977819] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.096 [2024-11-02 14:51:33.978062] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.096 [2024-11-02 14:51:33.978086] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.096 [2024-11-02 14:51:33.978101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.096 [2024-11-02 14:51:33.981674] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.096 [2024-11-02 14:51:33.991111] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.096 [2024-11-02 14:51:33.991545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.096 [2024-11-02 14:51:33.991577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.096 [2024-11-02 14:51:33.991595] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.096 [2024-11-02 14:51:33.991833] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.096 [2024-11-02 14:51:33.992076] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.096 [2024-11-02 14:51:33.992112] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.096 [2024-11-02 14:51:33.992128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.096 [2024-11-02 14:51:33.995685] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.096 [2024-11-02 14:51:34.005111] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.096 [2024-11-02 14:51:34.005519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.096 [2024-11-02 14:51:34.005552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.096 [2024-11-02 14:51:34.005570] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.096 [2024-11-02 14:51:34.005808] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.096 [2024-11-02 14:51:34.006050] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.096 [2024-11-02 14:51:34.006075] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.096 [2024-11-02 14:51:34.006091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.096 [2024-11-02 14:51:34.009647] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.096 [2024-11-02 14:51:34.018928] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.096 [2024-11-02 14:51:34.019369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.096 [2024-11-02 14:51:34.019409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.096 [2024-11-02 14:51:34.019428] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.096 [2024-11-02 14:51:34.019666] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.096 [2024-11-02 14:51:34.019909] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.096 [2024-11-02 14:51:34.019933] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.096 [2024-11-02 14:51:34.019949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.096 [2024-11-02 14:51:34.023507] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.096 [2024-11-02 14:51:34.032940] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.096 [2024-11-02 14:51:34.033375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.096 [2024-11-02 14:51:34.033409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.096 [2024-11-02 14:51:34.033428] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.096 [2024-11-02 14:51:34.033666] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.096 [2024-11-02 14:51:34.033908] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.096 [2024-11-02 14:51:34.033932] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.096 [2024-11-02 14:51:34.033948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.096 [2024-11-02 14:51:34.037502] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.096 [2024-11-02 14:51:34.046942] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.096 [2024-11-02 14:51:34.047376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.096 [2024-11-02 14:51:34.047410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.096 [2024-11-02 14:51:34.047428] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.096 [2024-11-02 14:51:34.047667] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.096 [2024-11-02 14:51:34.047910] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.096 [2024-11-02 14:51:34.047936] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.096 [2024-11-02 14:51:34.047952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.096 [2024-11-02 14:51:34.051508] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.096 [2024-11-02 14:51:34.060925] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.096 [2024-11-02 14:51:34.061352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.096 [2024-11-02 14:51:34.061395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.097 [2024-11-02 14:51:34.061413] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.097 [2024-11-02 14:51:34.061651] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.097 [2024-11-02 14:51:34.061898] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.097 [2024-11-02 14:51:34.061924] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.097 [2024-11-02 14:51:34.061940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.097 [2024-11-02 14:51:34.065498] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.097 [2024-11-02 14:51:34.074926] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.097 [2024-11-02 14:51:34.075347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.097 [2024-11-02 14:51:34.075384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.097 [2024-11-02 14:51:34.075403] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.097 [2024-11-02 14:51:34.075640] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.097 [2024-11-02 14:51:34.075881] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.097 [2024-11-02 14:51:34.075906] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.097 [2024-11-02 14:51:34.075923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.097 [2024-11-02 14:51:34.079562] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.097 [2024-11-02 14:51:34.088812] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.097 [2024-11-02 14:51:34.089242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.097 [2024-11-02 14:51:34.089291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.097 [2024-11-02 14:51:34.089311] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.097 [2024-11-02 14:51:34.089549] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.097 [2024-11-02 14:51:34.089793] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.097 [2024-11-02 14:51:34.089817] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.097 [2024-11-02 14:51:34.089833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.097 [2024-11-02 14:51:34.093393] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.097 [2024-11-02 14:51:34.102829] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.097 [2024-11-02 14:51:34.103273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.097 [2024-11-02 14:51:34.103318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.097 [2024-11-02 14:51:34.103337] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.097 [2024-11-02 14:51:34.103574] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.097 [2024-11-02 14:51:34.103817] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.097 [2024-11-02 14:51:34.103848] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.097 [2024-11-02 14:51:34.103863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.097 [2024-11-02 14:51:34.107429] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.097 [2024-11-02 14:51:34.116676] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.097 [2024-11-02 14:51:34.117116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.097 [2024-11-02 14:51:34.117149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.097 [2024-11-02 14:51:34.117168] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.097 [2024-11-02 14:51:34.117418] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.097 [2024-11-02 14:51:34.117663] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.097 [2024-11-02 14:51:34.117688] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.097 [2024-11-02 14:51:34.117704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.097 [2024-11-02 14:51:34.121252] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.097 [2024-11-02 14:51:34.130695] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.097 [2024-11-02 14:51:34.131134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.097 [2024-11-02 14:51:34.131166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.097 [2024-11-02 14:51:34.131184] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.097 [2024-11-02 14:51:34.131433] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.097 [2024-11-02 14:51:34.131675] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.097 [2024-11-02 14:51:34.131701] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.097 [2024-11-02 14:51:34.131716] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.097 [2024-11-02 14:51:34.135285] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.097 [2024-11-02 14:51:34.144528] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.097 [2024-11-02 14:51:34.144963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.097 [2024-11-02 14:51:34.144996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.097 [2024-11-02 14:51:34.145015] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.097 [2024-11-02 14:51:34.145292] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.097 [2024-11-02 14:51:34.145557] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.097 [2024-11-02 14:51:34.145583] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.097 [2024-11-02 14:51:34.145600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.097 [2024-11-02 14:51:34.149276] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.356 [2024-11-02 14:51:34.158416] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.356 [2024-11-02 14:51:34.158824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.356 [2024-11-02 14:51:34.158858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.357 [2024-11-02 14:51:34.158884] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.357 [2024-11-02 14:51:34.159123] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.357 [2024-11-02 14:51:34.159381] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.357 [2024-11-02 14:51:34.159408] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.357 [2024-11-02 14:51:34.159424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.357 [2024-11-02 14:51:34.162971] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.357 [2024-11-02 14:51:34.172398] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.357 [2024-11-02 14:51:34.172805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.357 [2024-11-02 14:51:34.172840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.357 [2024-11-02 14:51:34.172860] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.357 [2024-11-02 14:51:34.173099] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.357 [2024-11-02 14:51:34.173357] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.357 [2024-11-02 14:51:34.173383] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.357 [2024-11-02 14:51:34.173399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.357 [2024-11-02 14:51:34.176946] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.357 [2024-11-02 14:51:34.186376] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.357 [2024-11-02 14:51:34.186808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.357 [2024-11-02 14:51:34.186841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.357 [2024-11-02 14:51:34.186860] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.357 [2024-11-02 14:51:34.187099] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.357 [2024-11-02 14:51:34.187355] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.357 [2024-11-02 14:51:34.187381] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.357 [2024-11-02 14:51:34.187397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.357 [2024-11-02 14:51:34.190954] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.357 [2024-11-02 14:51:34.200382] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.357 [2024-11-02 14:51:34.200820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.357 [2024-11-02 14:51:34.200852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.357 [2024-11-02 14:51:34.200870] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.357 [2024-11-02 14:51:34.201108] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.357 [2024-11-02 14:51:34.201364] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.357 [2024-11-02 14:51:34.201396] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.357 [2024-11-02 14:51:34.201414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.357 [2024-11-02 14:51:34.204961] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.357 [2024-11-02 14:51:34.214209] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.357 [2024-11-02 14:51:34.214653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.357 [2024-11-02 14:51:34.214686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.357 [2024-11-02 14:51:34.214705] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.357 [2024-11-02 14:51:34.214943] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.357 [2024-11-02 14:51:34.215185] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.357 [2024-11-02 14:51:34.215212] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.357 [2024-11-02 14:51:34.215230] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.357 [2024-11-02 14:51:34.218796] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.357 [2024-11-02 14:51:34.228028] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.357 [2024-11-02 14:51:34.228493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.357 [2024-11-02 14:51:34.228536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.357 [2024-11-02 14:51:34.228556] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.357 [2024-11-02 14:51:34.228796] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.357 [2024-11-02 14:51:34.229039] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.357 [2024-11-02 14:51:34.229063] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.357 [2024-11-02 14:51:34.229080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.357 [2024-11-02 14:51:34.232652] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.357 [2024-11-02 14:51:34.241881] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.357 [2024-11-02 14:51:34.242330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.357 [2024-11-02 14:51:34.242363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.357 [2024-11-02 14:51:34.242381] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.357 [2024-11-02 14:51:34.242624] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.357 [2024-11-02 14:51:34.242866] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.357 [2024-11-02 14:51:34.242891] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.357 [2024-11-02 14:51:34.242907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.357 [2024-11-02 14:51:34.246496] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.357 [2024-11-02 14:51:34.255725] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.357 [2024-11-02 14:51:34.256167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.357 [2024-11-02 14:51:34.256201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.357 [2024-11-02 14:51:34.256219] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.357 [2024-11-02 14:51:34.256469] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.357 [2024-11-02 14:51:34.256711] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.357 [2024-11-02 14:51:34.256737] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.357 [2024-11-02 14:51:34.256753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.357 [2024-11-02 14:51:34.260308] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.357 [2024-11-02 14:51:34.269725] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.357 [2024-11-02 14:51:34.270136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.357 [2024-11-02 14:51:34.270169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.357 [2024-11-02 14:51:34.270188] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.357 [2024-11-02 14:51:34.270438] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.357 [2024-11-02 14:51:34.270679] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.357 [2024-11-02 14:51:34.270704] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.357 [2024-11-02 14:51:34.270721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.357 [2024-11-02 14:51:34.274274] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.357 [2024-11-02 14:51:34.283689] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.357 [2024-11-02 14:51:34.284099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.357 [2024-11-02 14:51:34.284134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.357 [2024-11-02 14:51:34.284154] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.357 [2024-11-02 14:51:34.284405] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.357 [2024-11-02 14:51:34.284649] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.357 [2024-11-02 14:51:34.284674] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.357 [2024-11-02 14:51:34.284691] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.357 [2024-11-02 14:51:34.288238] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.357 [2024-11-02 14:51:34.297683] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.357 [2024-11-02 14:51:34.298089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.357 [2024-11-02 14:51:34.298122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.357 [2024-11-02 14:51:34.298141] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.358 [2024-11-02 14:51:34.298399] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.358 [2024-11-02 14:51:34.298643] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.358 [2024-11-02 14:51:34.298669] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.358 [2024-11-02 14:51:34.298685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.358 [2024-11-02 14:51:34.302233] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.358 [2024-11-02 14:51:34.311657] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.358 [2024-11-02 14:51:34.312094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.358 [2024-11-02 14:51:34.312127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.358 [2024-11-02 14:51:34.312146] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.358 [2024-11-02 14:51:34.312397] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.358 [2024-11-02 14:51:34.312641] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.358 [2024-11-02 14:51:34.312666] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.358 [2024-11-02 14:51:34.312682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.358 [2024-11-02 14:51:34.316228] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.358 [2024-11-02 14:51:34.325648] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.358 [2024-11-02 14:51:34.326061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.358 [2024-11-02 14:51:34.326095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.358 [2024-11-02 14:51:34.326113] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.358 [2024-11-02 14:51:34.326364] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.358 [2024-11-02 14:51:34.326607] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.358 [2024-11-02 14:51:34.326633] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.358 [2024-11-02 14:51:34.326649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.358 [2024-11-02 14:51:34.330192] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.358 [2024-11-02 14:51:34.339612] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.358 [2024-11-02 14:51:34.339996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.358 [2024-11-02 14:51:34.340029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.358 [2024-11-02 14:51:34.340047] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.358 [2024-11-02 14:51:34.340297] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.358 [2024-11-02 14:51:34.340539] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.358 [2024-11-02 14:51:34.340564] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.358 [2024-11-02 14:51:34.340587] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.358 [2024-11-02 14:51:34.344134] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.358 [2024-11-02 14:51:34.353576] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.358 [2024-11-02 14:51:34.354019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.358 [2024-11-02 14:51:34.354052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.358 [2024-11-02 14:51:34.354071] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.358 [2024-11-02 14:51:34.354322] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.358 [2024-11-02 14:51:34.354564] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.358 [2024-11-02 14:51:34.354589] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.358 [2024-11-02 14:51:34.354605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.358 [2024-11-02 14:51:34.358150] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.358 [2024-11-02 14:51:34.367568] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.358 [2024-11-02 14:51:34.367975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.358 [2024-11-02 14:51:34.368007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.358 [2024-11-02 14:51:34.368026] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.358 [2024-11-02 14:51:34.368277] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.358 [2024-11-02 14:51:34.368519] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.358 [2024-11-02 14:51:34.368544] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.358 [2024-11-02 14:51:34.368560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.358 [2024-11-02 14:51:34.372104] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.358 [2024-11-02 14:51:34.381520] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.358 [2024-11-02 14:51:34.381960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.358 [2024-11-02 14:51:34.381993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.358 [2024-11-02 14:51:34.382011] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.358 [2024-11-02 14:51:34.382248] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.358 [2024-11-02 14:51:34.382502] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.358 [2024-11-02 14:51:34.382528] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.358 [2024-11-02 14:51:34.382544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.358 [2024-11-02 14:51:34.386090] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.358 [2024-11-02 14:51:34.395514] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.358 [2024-11-02 14:51:34.395973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.358 [2024-11-02 14:51:34.396006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.358 [2024-11-02 14:51:34.396025] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.358 [2024-11-02 14:51:34.396273] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.358 [2024-11-02 14:51:34.396515] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.358 [2024-11-02 14:51:34.396540] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.358 [2024-11-02 14:51:34.396556] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.358 [2024-11-02 14:51:34.400100] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.358 [2024-11-02 14:51:34.409638] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.358 [2024-11-02 14:51:34.410046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.358 [2024-11-02 14:51:34.410079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.358 [2024-11-02 14:51:34.410097] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.358 [2024-11-02 14:51:34.410372] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.618 [2024-11-02 14:51:34.410616] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.618 [2024-11-02 14:51:34.410641] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.618 [2024-11-02 14:51:34.410657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.618 [2024-11-02 14:51:34.414250] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.618 [2024-11-02 14:51:34.423552] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.618 [2024-11-02 14:51:34.423981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.618 [2024-11-02 14:51:34.424015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.618 [2024-11-02 14:51:34.424033] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.618 [2024-11-02 14:51:34.424283] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.618 [2024-11-02 14:51:34.424525] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.618 [2024-11-02 14:51:34.424550] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.618 [2024-11-02 14:51:34.424567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.618 [2024-11-02 14:51:34.428112] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.618 [2024-11-02 14:51:34.437536] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.618 [2024-11-02 14:51:34.437964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.618 [2024-11-02 14:51:34.437997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.618 [2024-11-02 14:51:34.438015] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.618 [2024-11-02 14:51:34.438252] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.618 [2024-11-02 14:51:34.438511] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.618 [2024-11-02 14:51:34.438537] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.618 [2024-11-02 14:51:34.438554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.618 [2024-11-02 14:51:34.442098] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.618 [2024-11-02 14:51:34.451537] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.618 [2024-11-02 14:51:34.451964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.618 [2024-11-02 14:51:34.451998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.618 [2024-11-02 14:51:34.452019] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.618 [2024-11-02 14:51:34.452269] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.618 [2024-11-02 14:51:34.452511] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.618 [2024-11-02 14:51:34.452536] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.618 [2024-11-02 14:51:34.452552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.618 [2024-11-02 14:51:34.456096] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.618 [2024-11-02 14:51:34.465535] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.618 [2024-11-02 14:51:34.465947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.618 [2024-11-02 14:51:34.465980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.618 [2024-11-02 14:51:34.465999] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.618 [2024-11-02 14:51:34.466239] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.618 [2024-11-02 14:51:34.466493] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.618 [2024-11-02 14:51:34.466518] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.618 [2024-11-02 14:51:34.466536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.618 [2024-11-02 14:51:34.470083] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.618 [2024-11-02 14:51:34.479350] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.618 [2024-11-02 14:51:34.479798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.618 [2024-11-02 14:51:34.479832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.618 [2024-11-02 14:51:34.479851] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.618 [2024-11-02 14:51:34.480090] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.618 [2024-11-02 14:51:34.480344] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.618 [2024-11-02 14:51:34.480370] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.618 [2024-11-02 14:51:34.480387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.618 [2024-11-02 14:51:34.483940] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.618 [2024-11-02 14:51:34.493149] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.618 [2024-11-02 14:51:34.493604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.618 [2024-11-02 14:51:34.493638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.618 [2024-11-02 14:51:34.493656] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.618 [2024-11-02 14:51:34.493895] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.618 [2024-11-02 14:51:34.494138] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.618 [2024-11-02 14:51:34.494164] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.618 [2024-11-02 14:51:34.494179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.618 [2024-11-02 14:51:34.497735] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.618 [2024-11-02 14:51:34.507149] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.618 [2024-11-02 14:51:34.507569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.618 [2024-11-02 14:51:34.507603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.618 [2024-11-02 14:51:34.507621] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.618 [2024-11-02 14:51:34.507859] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.618 [2024-11-02 14:51:34.508102] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.618 [2024-11-02 14:51:34.508127] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.618 [2024-11-02 14:51:34.508144] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.619 [2024-11-02 14:51:34.511703] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.619 [2024-11-02 14:51:34.521115] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.619 [2024-11-02 14:51:34.521550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.619 [2024-11-02 14:51:34.521583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.619 [2024-11-02 14:51:34.521602] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.619 [2024-11-02 14:51:34.521839] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.619 [2024-11-02 14:51:34.522080] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.619 [2024-11-02 14:51:34.522105] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.619 [2024-11-02 14:51:34.522120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.619 [2024-11-02 14:51:34.525683] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.619 [2024-11-02 14:51:34.535101] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.619 [2024-11-02 14:51:34.535518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.619 [2024-11-02 14:51:34.535551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.619 [2024-11-02 14:51:34.535575] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.619 [2024-11-02 14:51:34.535812] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.619 [2024-11-02 14:51:34.536054] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.619 [2024-11-02 14:51:34.536078] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.619 [2024-11-02 14:51:34.536095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.619 [2024-11-02 14:51:34.539650] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.619 [2024-11-02 14:51:34.549078] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.619 [2024-11-02 14:51:34.549494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.619 [2024-11-02 14:51:34.549527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.619 [2024-11-02 14:51:34.549546] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.619 [2024-11-02 14:51:34.549783] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.619 [2024-11-02 14:51:34.550025] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.619 [2024-11-02 14:51:34.550050] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.619 [2024-11-02 14:51:34.550066] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.619 [2024-11-02 14:51:34.553624] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.619 [2024-11-02 14:51:34.563033] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.619 [2024-11-02 14:51:34.563467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.619 [2024-11-02 14:51:34.563500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.619 [2024-11-02 14:51:34.563518] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.619 [2024-11-02 14:51:34.563756] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.619 [2024-11-02 14:51:34.563997] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.619 [2024-11-02 14:51:34.564022] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.619 [2024-11-02 14:51:34.564038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.619 [2024-11-02 14:51:34.567592] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.619 [2024-11-02 14:51:34.577004] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.619 [2024-11-02 14:51:34.577443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.619 [2024-11-02 14:51:34.577476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.619 [2024-11-02 14:51:34.577494] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.619 [2024-11-02 14:51:34.577731] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.619 [2024-11-02 14:51:34.577973] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.619 [2024-11-02 14:51:34.578004] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.619 [2024-11-02 14:51:34.578021] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.619 [2024-11-02 14:51:34.581576] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.619 [2024-11-02 14:51:34.591022] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.619 [2024-11-02 14:51:34.591438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.619 [2024-11-02 14:51:34.591472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.619 [2024-11-02 14:51:34.591491] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.619 [2024-11-02 14:51:34.591729] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.619 [2024-11-02 14:51:34.591971] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.619 [2024-11-02 14:51:34.591996] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.619 [2024-11-02 14:51:34.592013] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.619 [2024-11-02 14:51:34.595570] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.619 [2024-11-02 14:51:34.604982] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.619 [2024-11-02 14:51:34.605420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.619 [2024-11-02 14:51:34.605452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.619 [2024-11-02 14:51:34.605471] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.619 [2024-11-02 14:51:34.605708] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.619 [2024-11-02 14:51:34.605950] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.619 [2024-11-02 14:51:34.605974] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.619 [2024-11-02 14:51:34.605991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.619 [2024-11-02 14:51:34.609548] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.619 [2024-11-02 14:51:34.618956] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.619 [2024-11-02 14:51:34.619379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.619 [2024-11-02 14:51:34.619413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.619 [2024-11-02 14:51:34.619432] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.619 [2024-11-02 14:51:34.619670] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.619 [2024-11-02 14:51:34.619911] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.619 [2024-11-02 14:51:34.619937] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.619 [2024-11-02 14:51:34.619952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.619 [2024-11-02 14:51:34.623512] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.619 [2024-11-02 14:51:34.632930] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.619 [2024-11-02 14:51:34.633351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.619 [2024-11-02 14:51:34.633384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.619 [2024-11-02 14:51:34.633403] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.619 [2024-11-02 14:51:34.633640] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.619 [2024-11-02 14:51:34.633881] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.619 [2024-11-02 14:51:34.633906] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.619 [2024-11-02 14:51:34.633922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.619 [2024-11-02 14:51:34.637479] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.619 [2024-11-02 14:51:34.646907] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.619 [2024-11-02 14:51:34.647345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.619 [2024-11-02 14:51:34.647379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.619 [2024-11-02 14:51:34.647397] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.619 [2024-11-02 14:51:34.647635] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.619 [2024-11-02 14:51:34.647877] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.619 [2024-11-02 14:51:34.647902] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.619 [2024-11-02 14:51:34.647919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.619 [2024-11-02 14:51:34.651473] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.619 [2024-11-02 14:51:34.660911] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.620 [2024-11-02 14:51:34.661339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.620 [2024-11-02 14:51:34.661372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.620 [2024-11-02 14:51:34.661391] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.620 [2024-11-02 14:51:34.661629] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.620 [2024-11-02 14:51:34.661870] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.620 [2024-11-02 14:51:34.661895] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.620 [2024-11-02 14:51:34.661911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.620 [2024-11-02 14:51:34.665467] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.879 [2024-11-02 14:51:34.674878] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.879 [2024-11-02 14:51:34.675309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.879 [2024-11-02 14:51:34.675343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.879 [2024-11-02 14:51:34.675367] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.879 [2024-11-02 14:51:34.675607] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.879 [2024-11-02 14:51:34.675874] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.879 [2024-11-02 14:51:34.675901] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.879 [2024-11-02 14:51:34.675917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.879 [2024-11-02 14:51:34.679532] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.879 [2024-11-02 14:51:34.688736] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.879 [2024-11-02 14:51:34.689161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.879 [2024-11-02 14:51:34.689195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.879 [2024-11-02 14:51:34.689215] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.879 [2024-11-02 14:51:34.689470] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.879 [2024-11-02 14:51:34.689714] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.879 [2024-11-02 14:51:34.689739] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.879 [2024-11-02 14:51:34.689755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.879 [2024-11-02 14:51:34.693309] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.879 [2024-11-02 14:51:34.702721] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.879 [2024-11-02 14:51:34.703165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.879 [2024-11-02 14:51:34.703199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.879 [2024-11-02 14:51:34.703218] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.879 [2024-11-02 14:51:34.703468] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.879 [2024-11-02 14:51:34.703710] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.879 [2024-11-02 14:51:34.703735] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.879 [2024-11-02 14:51:34.703752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.879 [2024-11-02 14:51:34.707302] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.879 [2024-11-02 14:51:34.716715] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.879 [2024-11-02 14:51:34.717148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.879 [2024-11-02 14:51:34.717181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.879 [2024-11-02 14:51:34.717202] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.879 [2024-11-02 14:51:34.717451] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.879 [2024-11-02 14:51:34.717695] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.879 [2024-11-02 14:51:34.717727] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.879 [2024-11-02 14:51:34.717745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.879 [2024-11-02 14:51:34.721296] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.879 [2024-11-02 14:51:34.730712] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.879 [2024-11-02 14:51:34.731141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.879 [2024-11-02 14:51:34.731174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.879 [2024-11-02 14:51:34.731193] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.879 [2024-11-02 14:51:34.731440] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.879 [2024-11-02 14:51:34.731684] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.879 [2024-11-02 14:51:34.731708] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.879 [2024-11-02 14:51:34.731725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.879 [2024-11-02 14:51:34.735277] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.879 [2024-11-02 14:51:34.744698] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.879 [2024-11-02 14:51:34.745104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.879 [2024-11-02 14:51:34.745136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.879 [2024-11-02 14:51:34.745155] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.879 [2024-11-02 14:51:34.745404] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.879 [2024-11-02 14:51:34.745659] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.879 [2024-11-02 14:51:34.745684] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.879 [2024-11-02 14:51:34.745700] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.879 [2024-11-02 14:51:34.749247] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.879 [2024-11-02 14:51:34.758663] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.879 [2024-11-02 14:51:34.759098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.879 [2024-11-02 14:51:34.759132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.879 [2024-11-02 14:51:34.759150] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.879 [2024-11-02 14:51:34.759401] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.879 [2024-11-02 14:51:34.759642] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.879 [2024-11-02 14:51:34.759667] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.879 [2024-11-02 14:51:34.759683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.879 [2024-11-02 14:51:34.763228] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.879 [2024-11-02 14:51:34.772645] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.879 [2024-11-02 14:51:34.773056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.879 [2024-11-02 14:51:34.773089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.879 [2024-11-02 14:51:34.773108] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.879 [2024-11-02 14:51:34.773359] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.880 [2024-11-02 14:51:34.773602] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.880 [2024-11-02 14:51:34.773627] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.880 [2024-11-02 14:51:34.773643] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.880 [2024-11-02 14:51:34.777186] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.880 [2024-11-02 14:51:34.786605] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.880 [2024-11-02 14:51:34.787038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.880 [2024-11-02 14:51:34.787070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.880 [2024-11-02 14:51:34.787089] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.880 [2024-11-02 14:51:34.787338] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.880 [2024-11-02 14:51:34.787580] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.880 [2024-11-02 14:51:34.787606] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.880 [2024-11-02 14:51:34.787622] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.880 [2024-11-02 14:51:34.791171] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.880 [2024-11-02 14:51:34.800587] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.880 [2024-11-02 14:51:34.800995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.880 [2024-11-02 14:51:34.801027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.880 [2024-11-02 14:51:34.801045] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.880 [2024-11-02 14:51:34.801296] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.880 [2024-11-02 14:51:34.801539] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.880 [2024-11-02 14:51:34.801564] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.880 [2024-11-02 14:51:34.801580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.880 [2024-11-02 14:51:34.805125] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.880 [2024-11-02 14:51:34.814542] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.880 [2024-11-02 14:51:34.814969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.880 [2024-11-02 14:51:34.815002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.880 [2024-11-02 14:51:34.815020] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.880 [2024-11-02 14:51:34.815280] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.880 [2024-11-02 14:51:34.815522] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.880 [2024-11-02 14:51:34.815547] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.880 [2024-11-02 14:51:34.815563] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.880 [2024-11-02 14:51:34.819108] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.880 [2024-11-02 14:51:34.828526] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.880 [2024-11-02 14:51:34.828954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.880 [2024-11-02 14:51:34.828986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.880 [2024-11-02 14:51:34.829005] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.880 [2024-11-02 14:51:34.829242] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.880 [2024-11-02 14:51:34.829495] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.880 [2024-11-02 14:51:34.829521] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.880 [2024-11-02 14:51:34.829536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.880 [2024-11-02 14:51:34.833078] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.880 [2024-11-02 14:51:34.842499] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.880 [2024-11-02 14:51:34.842936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.880 [2024-11-02 14:51:34.842976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.880 [2024-11-02 14:51:34.842995] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.880 [2024-11-02 14:51:34.843232] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.880 [2024-11-02 14:51:34.843484] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.880 [2024-11-02 14:51:34.843510] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.880 [2024-11-02 14:51:34.843526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.880 [2024-11-02 14:51:34.847088] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.880 [2024-11-02 14:51:34.856513] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.880 [2024-11-02 14:51:34.856918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.880 [2024-11-02 14:51:34.856950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.880 [2024-11-02 14:51:34.856969] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.880 [2024-11-02 14:51:34.857208] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.880 [2024-11-02 14:51:34.857459] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.880 [2024-11-02 14:51:34.857484] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.880 [2024-11-02 14:51:34.857507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.880 [2024-11-02 14:51:34.861054] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.880 [2024-11-02 14:51:34.870481] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.880 [2024-11-02 14:51:34.870891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.880 [2024-11-02 14:51:34.870925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.880 [2024-11-02 14:51:34.870944] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.880 [2024-11-02 14:51:34.871182] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.880 [2024-11-02 14:51:34.871436] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.880 [2024-11-02 14:51:34.871461] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.880 [2024-11-02 14:51:34.871476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.880 [2024-11-02 14:51:34.875019] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.880 [2024-11-02 14:51:34.884443] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.880 [2024-11-02 14:51:34.884854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.880 [2024-11-02 14:51:34.884887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.880 [2024-11-02 14:51:34.884906] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.880 [2024-11-02 14:51:34.885144] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.880 [2024-11-02 14:51:34.885397] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.880 [2024-11-02 14:51:34.885421] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.880 [2024-11-02 14:51:34.885436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.880 [2024-11-02 14:51:34.888983] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.880 [2024-11-02 14:51:34.898426] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.880 [2024-11-02 14:51:34.898852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.880 [2024-11-02 14:51:34.898885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.880 [2024-11-02 14:51:34.898904] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.880 [2024-11-02 14:51:34.899141] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.880 [2024-11-02 14:51:34.899395] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.880 [2024-11-02 14:51:34.899421] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.880 [2024-11-02 14:51:34.899437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.880 [2024-11-02 14:51:34.902984] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.880 [2024-11-02 14:51:34.912436] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.880 [2024-11-02 14:51:34.912885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.880 [2024-11-02 14:51:34.912923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.880 [2024-11-02 14:51:34.912942] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.880 [2024-11-02 14:51:34.913180] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.880 [2024-11-02 14:51:34.913432] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.881 [2024-11-02 14:51:34.913457] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.881 [2024-11-02 14:51:34.913473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.881 [2024-11-02 14:51:34.917020] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.881 [2024-11-02 14:51:34.926441] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.881 [2024-11-02 14:51:34.926844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.881 [2024-11-02 14:51:34.926877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:42.881 [2024-11-02 14:51:34.926896] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:42.881 [2024-11-02 14:51:34.927134] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:42.881 [2024-11-02 14:51:34.927389] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.881 [2024-11-02 14:51:34.927415] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.881 [2024-11-02 14:51:34.927432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.881 [2024-11-02 14:51:34.931076] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.140 [2024-11-02 14:51:34.940499] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.140 [2024-11-02 14:51:34.940937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.140 [2024-11-02 14:51:34.940970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.140 [2024-11-02 14:51:34.940989] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.140 [2024-11-02 14:51:34.941227] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.140 [2024-11-02 14:51:34.941480] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.140 [2024-11-02 14:51:34.941505] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.140 [2024-11-02 14:51:34.941521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.140 [2024-11-02 14:51:34.945066] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.140 5286.50 IOPS, 20.65 MiB/s [2024-11-02T13:51:35.195Z] [2024-11-02 14:51:34.955393] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.140 [2024-11-02 14:51:34.955797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.140 [2024-11-02 14:51:34.955831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.140 [2024-11-02 14:51:34.955849] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.140 [2024-11-02 14:51:34.956087] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.140 [2024-11-02 14:51:34.956345] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.140 [2024-11-02 14:51:34.956370] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.140 [2024-11-02 14:51:34.956386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.140 [2024-11-02 14:51:34.959936] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.140 [2024-11-02 14:51:34.969362] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.140 [2024-11-02 14:51:34.969794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.140 [2024-11-02 14:51:34.969827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.140 [2024-11-02 14:51:34.969846] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.140 [2024-11-02 14:51:34.970085] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.140 [2024-11-02 14:51:34.970337] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.140 [2024-11-02 14:51:34.970364] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.140 [2024-11-02 14:51:34.970382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.140 [2024-11-02 14:51:34.973926] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
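Editor's note: the entries above repeat one failure cycle: the host resets the controller, the TCP connect to 10.0.0.2:4420 is refused with errno = 111, and the reset completes with "Resetting controller failed". As a hedged illustration only (plain POSIX sockets, not SPDK's posix.c/nvme_tcp.c code; the 10.0.0.2:4420 target is simply copied from the log), the following minimal sketch shows how a refused TCP connection surfaces as errno 111 (ECONNREFUSED):

    /* Illustrative only: a minimal POSIX TCP connect attempt against the
     * address/port seen in the log. With no listener on the target, the
     * connect() call fails and errno is 111 (Connection refused). */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                /* NVMe/TCP port used by the test target */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            /* Mirrors the "connect() failed, errno = 111" lines in the log. */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }

        close(fd);
        return 0;
    }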
00:35:43.140 [2024-11-02 14:51:34.983379] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.140 [2024-11-02 14:51:34.983783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.140 [2024-11-02 14:51:34.983816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.140 [2024-11-02 14:51:34.983835] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.140 [2024-11-02 14:51:34.984074] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.140 [2024-11-02 14:51:34.984327] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.140 [2024-11-02 14:51:34.984352] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.140 [2024-11-02 14:51:34.984368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.140 [2024-11-02 14:51:34.987912] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.140 [2024-11-02 14:51:34.997345] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.140 [2024-11-02 14:51:34.997750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.140 [2024-11-02 14:51:34.997783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.140 [2024-11-02 14:51:34.997802] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.140 [2024-11-02 14:51:34.998040] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.140 [2024-11-02 14:51:34.998294] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.140 [2024-11-02 14:51:34.998325] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.140 [2024-11-02 14:51:34.998342] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.140 [2024-11-02 14:51:35.001891] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.140 [2024-11-02 14:51:35.011311] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.140 [2024-11-02 14:51:35.011721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.140 [2024-11-02 14:51:35.011753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.140 [2024-11-02 14:51:35.011772] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.140 [2024-11-02 14:51:35.012009] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.140 [2024-11-02 14:51:35.012251] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.140 [2024-11-02 14:51:35.012287] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.140 [2024-11-02 14:51:35.012304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.140 [2024-11-02 14:51:35.015850] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.140 [2024-11-02 14:51:35.025281] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.140 [2024-11-02 14:51:35.025719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.140 [2024-11-02 14:51:35.025751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.140 [2024-11-02 14:51:35.025769] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.140 [2024-11-02 14:51:35.026007] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.140 [2024-11-02 14:51:35.026248] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.140 [2024-11-02 14:51:35.026284] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.140 [2024-11-02 14:51:35.026301] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.141 [2024-11-02 14:51:35.029843] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.141 [2024-11-02 14:51:35.039291] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.141 [2024-11-02 14:51:35.039727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.141 [2024-11-02 14:51:35.039762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.141 [2024-11-02 14:51:35.039781] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.141 [2024-11-02 14:51:35.040018] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.141 [2024-11-02 14:51:35.040273] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.141 [2024-11-02 14:51:35.040299] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.141 [2024-11-02 14:51:35.040314] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.141 [2024-11-02 14:51:35.043858] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.141 [2024-11-02 14:51:35.053298] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.141 [2024-11-02 14:51:35.053733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.141 [2024-11-02 14:51:35.053765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.141 [2024-11-02 14:51:35.053790] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.141 [2024-11-02 14:51:35.054028] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.141 [2024-11-02 14:51:35.054280] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.141 [2024-11-02 14:51:35.054306] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.141 [2024-11-02 14:51:35.054321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.141 [2024-11-02 14:51:35.057868] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.141 [2024-11-02 14:51:35.067287] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.141 [2024-11-02 14:51:35.067725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.141 [2024-11-02 14:51:35.067757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.141 [2024-11-02 14:51:35.067776] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.141 [2024-11-02 14:51:35.068013] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.141 [2024-11-02 14:51:35.068254] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.141 [2024-11-02 14:51:35.068290] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.141 [2024-11-02 14:51:35.068306] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.141 [2024-11-02 14:51:35.071851] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.141 [2024-11-02 14:51:35.081269] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.141 [2024-11-02 14:51:35.081700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.141 [2024-11-02 14:51:35.081733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.141 [2024-11-02 14:51:35.081752] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.141 [2024-11-02 14:51:35.081990] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.141 [2024-11-02 14:51:35.082233] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.141 [2024-11-02 14:51:35.082269] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.141 [2024-11-02 14:51:35.082288] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.141 [2024-11-02 14:51:35.085835] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.141 [2024-11-02 14:51:35.095269] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.141 [2024-11-02 14:51:35.095694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.141 [2024-11-02 14:51:35.095726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.141 [2024-11-02 14:51:35.095745] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.141 [2024-11-02 14:51:35.095982] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.141 [2024-11-02 14:51:35.096231] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.141 [2024-11-02 14:51:35.096268] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.141 [2024-11-02 14:51:35.096288] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.141 [2024-11-02 14:51:35.099833] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.141 [2024-11-02 14:51:35.109118] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.141 [2024-11-02 14:51:35.109531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.141 [2024-11-02 14:51:35.109564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.141 [2024-11-02 14:51:35.109583] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.141 [2024-11-02 14:51:35.109821] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.141 [2024-11-02 14:51:35.110063] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.141 [2024-11-02 14:51:35.110088] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.141 [2024-11-02 14:51:35.110103] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.141 [2024-11-02 14:51:35.113660] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.141 [2024-11-02 14:51:35.123081] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.141 [2024-11-02 14:51:35.123494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.141 [2024-11-02 14:51:35.123539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.141 [2024-11-02 14:51:35.123558] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.141 [2024-11-02 14:51:35.123797] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.141 [2024-11-02 14:51:35.124039] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.141 [2024-11-02 14:51:35.124065] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.141 [2024-11-02 14:51:35.124082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.141 [2024-11-02 14:51:35.127637] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.141 [2024-11-02 14:51:35.137088] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.141 [2024-11-02 14:51:35.137505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.141 [2024-11-02 14:51:35.137545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.141 [2024-11-02 14:51:35.137563] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.141 [2024-11-02 14:51:35.137801] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.141 [2024-11-02 14:51:35.138044] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.141 [2024-11-02 14:51:35.138068] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.141 [2024-11-02 14:51:35.138083] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.141 [2024-11-02 14:51:35.141639] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.141 [2024-11-02 14:51:35.151080] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.141 [2024-11-02 14:51:35.151498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.141 [2024-11-02 14:51:35.151530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.141 [2024-11-02 14:51:35.151549] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.141 [2024-11-02 14:51:35.151787] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.141 [2024-11-02 14:51:35.152029] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.141 [2024-11-02 14:51:35.152053] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.141 [2024-11-02 14:51:35.152069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.141 [2024-11-02 14:51:35.155625] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.141 [2024-11-02 14:51:35.165052] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.141 [2024-11-02 14:51:35.165492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.141 [2024-11-02 14:51:35.165525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.141 [2024-11-02 14:51:35.165543] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.141 [2024-11-02 14:51:35.165782] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.141 [2024-11-02 14:51:35.166024] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.141 [2024-11-02 14:51:35.166049] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.141 [2024-11-02 14:51:35.166065] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.141 [2024-11-02 14:51:35.169632] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.141 [2024-11-02 14:51:35.179052] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.142 [2024-11-02 14:51:35.179467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.142 [2024-11-02 14:51:35.179500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.142 [2024-11-02 14:51:35.179518] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.142 [2024-11-02 14:51:35.179756] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.142 [2024-11-02 14:51:35.180000] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.142 [2024-11-02 14:51:35.180025] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.142 [2024-11-02 14:51:35.180040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.142 [2024-11-02 14:51:35.183601] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.142 [2024-11-02 14:51:35.193153] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.142 [2024-11-02 14:51:35.193584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.142 [2024-11-02 14:51:35.193637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.142 [2024-11-02 14:51:35.193666] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.142 [2024-11-02 14:51:35.193906] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.401 [2024-11-02 14:51:35.194148] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.401 [2024-11-02 14:51:35.194173] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.401 [2024-11-02 14:51:35.194189] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.401 [2024-11-02 14:51:35.197815] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.401 [2024-11-02 14:51:35.207120] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.401 [2024-11-02 14:51:35.207565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.401 [2024-11-02 14:51:35.207598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.401 [2024-11-02 14:51:35.207617] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.401 [2024-11-02 14:51:35.207855] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.401 [2024-11-02 14:51:35.208098] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.401 [2024-11-02 14:51:35.208123] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.401 [2024-11-02 14:51:35.208139] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.401 [2024-11-02 14:51:35.211705] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.402 [2024-11-02 14:51:35.221126] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.402 [2024-11-02 14:51:35.221545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.402 [2024-11-02 14:51:35.221578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.402 [2024-11-02 14:51:35.221597] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.402 [2024-11-02 14:51:35.221836] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.402 [2024-11-02 14:51:35.222077] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.402 [2024-11-02 14:51:35.222102] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.402 [2024-11-02 14:51:35.222118] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.402 [2024-11-02 14:51:35.225694] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.402 [2024-11-02 14:51:35.235138] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.402 [2024-11-02 14:51:35.235529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.402 [2024-11-02 14:51:35.235573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.402 [2024-11-02 14:51:35.235591] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.402 [2024-11-02 14:51:35.235829] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.402 [2024-11-02 14:51:35.236071] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.402 [2024-11-02 14:51:35.236103] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.402 [2024-11-02 14:51:35.236119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.402 [2024-11-02 14:51:35.239686] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.402 [2024-11-02 14:51:35.249131] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.402 [2024-11-02 14:51:35.249553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.402 [2024-11-02 14:51:35.249587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.402 [2024-11-02 14:51:35.249606] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.402 [2024-11-02 14:51:35.249844] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.402 [2024-11-02 14:51:35.250087] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.402 [2024-11-02 14:51:35.250112] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.402 [2024-11-02 14:51:35.250127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.402 [2024-11-02 14:51:35.253684] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.402 [2024-11-02 14:51:35.263103] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.402 [2024-11-02 14:51:35.263552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.402 [2024-11-02 14:51:35.263585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.402 [2024-11-02 14:51:35.263604] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.402 [2024-11-02 14:51:35.263841] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.402 [2024-11-02 14:51:35.264083] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.402 [2024-11-02 14:51:35.264108] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.402 [2024-11-02 14:51:35.264124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.402 [2024-11-02 14:51:35.267677] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.402 [2024-11-02 14:51:35.277091] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.402 [2024-11-02 14:51:35.277523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.402 [2024-11-02 14:51:35.277555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.402 [2024-11-02 14:51:35.277574] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.402 [2024-11-02 14:51:35.277812] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.402 [2024-11-02 14:51:35.278055] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.402 [2024-11-02 14:51:35.278080] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.402 [2024-11-02 14:51:35.278096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.402 [2024-11-02 14:51:35.281653] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.402 [2024-11-02 14:51:35.291069] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.402 [2024-11-02 14:51:35.291494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.402 [2024-11-02 14:51:35.291528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.402 [2024-11-02 14:51:35.291546] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.402 [2024-11-02 14:51:35.291784] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.402 [2024-11-02 14:51:35.292027] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.402 [2024-11-02 14:51:35.292052] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.402 [2024-11-02 14:51:35.292068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.402 [2024-11-02 14:51:35.295692] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.402 [2024-11-02 14:51:35.304901] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.402 [2024-11-02 14:51:35.305332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.402 [2024-11-02 14:51:35.305366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.402 [2024-11-02 14:51:35.305385] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.402 [2024-11-02 14:51:35.305623] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.402 [2024-11-02 14:51:35.305863] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.402 [2024-11-02 14:51:35.305888] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.402 [2024-11-02 14:51:35.305903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.402 [2024-11-02 14:51:35.309461] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.402 [2024-11-02 14:51:35.318879] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.402 [2024-11-02 14:51:35.319283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.402 [2024-11-02 14:51:35.319316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.402 [2024-11-02 14:51:35.319335] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.402 [2024-11-02 14:51:35.319572] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.402 [2024-11-02 14:51:35.319813] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.402 [2024-11-02 14:51:35.319838] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.402 [2024-11-02 14:51:35.319855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.402 [2024-11-02 14:51:35.323416] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.402 [2024-11-02 14:51:35.332830] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.402 [2024-11-02 14:51:35.333253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.402 [2024-11-02 14:51:35.333293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.402 [2024-11-02 14:51:35.333311] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.402 [2024-11-02 14:51:35.333555] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.402 [2024-11-02 14:51:35.333796] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.402 [2024-11-02 14:51:35.333821] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.402 [2024-11-02 14:51:35.333837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.402 [2024-11-02 14:51:35.337394] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.402 [2024-11-02 14:51:35.346805] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.402 [2024-11-02 14:51:35.347218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.402 [2024-11-02 14:51:35.347251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.402 [2024-11-02 14:51:35.347280] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.402 [2024-11-02 14:51:35.347520] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.402 [2024-11-02 14:51:35.347774] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.402 [2024-11-02 14:51:35.347800] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.403 [2024-11-02 14:51:35.347816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.403 [2024-11-02 14:51:35.351369] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.403 [2024-11-02 14:51:35.360783] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.403 [2024-11-02 14:51:35.361227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.403 [2024-11-02 14:51:35.361266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.403 [2024-11-02 14:51:35.361286] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.403 [2024-11-02 14:51:35.361524] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.403 [2024-11-02 14:51:35.361766] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.403 [2024-11-02 14:51:35.361792] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.403 [2024-11-02 14:51:35.361808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.403 [2024-11-02 14:51:35.365362] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.403 [2024-11-02 14:51:35.374772] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.403 [2024-11-02 14:51:35.375199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.403 [2024-11-02 14:51:35.375231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.403 [2024-11-02 14:51:35.375250] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.403 [2024-11-02 14:51:35.375498] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.403 [2024-11-02 14:51:35.375739] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.403 [2024-11-02 14:51:35.375764] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.403 [2024-11-02 14:51:35.375787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.403 [2024-11-02 14:51:35.379340] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.403 [2024-11-02 14:51:35.388750] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.403 [2024-11-02 14:51:35.389180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.403 [2024-11-02 14:51:35.389212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.403 [2024-11-02 14:51:35.389230] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.403 [2024-11-02 14:51:35.389483] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.403 [2024-11-02 14:51:35.389725] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.403 [2024-11-02 14:51:35.389750] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.403 [2024-11-02 14:51:35.389767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.403 [2024-11-02 14:51:35.393316] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.403 [2024-11-02 14:51:35.402726] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.403 [2024-11-02 14:51:35.403154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.403 [2024-11-02 14:51:35.403186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.403 [2024-11-02 14:51:35.403205] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.403 [2024-11-02 14:51:35.403454] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.403 [2024-11-02 14:51:35.403695] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.403 [2024-11-02 14:51:35.403721] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.403 [2024-11-02 14:51:35.403736] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.403 [2024-11-02 14:51:35.407290] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.403 [2024-11-02 14:51:35.416706] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.403 [2024-11-02 14:51:35.417116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.403 [2024-11-02 14:51:35.417148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.403 [2024-11-02 14:51:35.417166] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.403 [2024-11-02 14:51:35.417414] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.403 [2024-11-02 14:51:35.417656] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.403 [2024-11-02 14:51:35.417681] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.403 [2024-11-02 14:51:35.417697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.403 [2024-11-02 14:51:35.421241] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.403 [2024-11-02 14:51:35.430659] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.403 [2024-11-02 14:51:35.431077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.403 [2024-11-02 14:51:35.431117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.403 [2024-11-02 14:51:35.431136] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.403 [2024-11-02 14:51:35.431387] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.403 [2024-11-02 14:51:35.431629] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.403 [2024-11-02 14:51:35.431655] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.403 [2024-11-02 14:51:35.431670] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.403 [2024-11-02 14:51:35.435215] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.403 [2024-11-02 14:51:35.444633] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.403 [2024-11-02 14:51:35.445062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.403 [2024-11-02 14:51:35.445095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.403 [2024-11-02 14:51:35.445114] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.403 [2024-11-02 14:51:35.445367] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.403 [2024-11-02 14:51:35.445611] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.403 [2024-11-02 14:51:35.445637] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.403 [2024-11-02 14:51:35.445653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.403 [2024-11-02 14:51:35.449213] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.663 [2024-11-02 14:51:35.458640] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.663 [2024-11-02 14:51:35.459053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.663 [2024-11-02 14:51:35.459086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.663 [2024-11-02 14:51:35.459106] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.663 [2024-11-02 14:51:35.459361] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.663 [2024-11-02 14:51:35.459628] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.663 [2024-11-02 14:51:35.459653] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.663 [2024-11-02 14:51:35.459670] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.663 [2024-11-02 14:51:35.463277] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.663 [2024-11-02 14:51:35.472488] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.663 [2024-11-02 14:51:35.472916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.663 [2024-11-02 14:51:35.472950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.663 [2024-11-02 14:51:35.472970] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.663 [2024-11-02 14:51:35.473209] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.663 [2024-11-02 14:51:35.473469] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.663 [2024-11-02 14:51:35.473496] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.663 [2024-11-02 14:51:35.473513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.663 [2024-11-02 14:51:35.477059] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.663 [2024-11-02 14:51:35.486480] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.663 [2024-11-02 14:51:35.486918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.663 [2024-11-02 14:51:35.486951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.663 [2024-11-02 14:51:35.486970] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.663 [2024-11-02 14:51:35.487207] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.663 [2024-11-02 14:51:35.487458] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.663 [2024-11-02 14:51:35.487484] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.663 [2024-11-02 14:51:35.487501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.663 [2024-11-02 14:51:35.491049] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.663 [2024-11-02 14:51:35.500313] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.663 [2024-11-02 14:51:35.500756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.663 [2024-11-02 14:51:35.500789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.663 [2024-11-02 14:51:35.500808] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.663 [2024-11-02 14:51:35.501054] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.663 [2024-11-02 14:51:35.501317] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.663 [2024-11-02 14:51:35.501344] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.663 [2024-11-02 14:51:35.501360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.663 [2024-11-02 14:51:35.504905] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.663 [2024-11-02 14:51:35.514120] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.663 [2024-11-02 14:51:35.514536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.663 [2024-11-02 14:51:35.514569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.663 [2024-11-02 14:51:35.514587] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.663 [2024-11-02 14:51:35.514824] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.663 [2024-11-02 14:51:35.515066] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.663 [2024-11-02 14:51:35.515091] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.663 [2024-11-02 14:51:35.515107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.663 [2024-11-02 14:51:35.518672] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.663 [2024-11-02 14:51:35.528088] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.663 [2024-11-02 14:51:35.528526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.663 [2024-11-02 14:51:35.528558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.663 [2024-11-02 14:51:35.528576] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.663 [2024-11-02 14:51:35.528814] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.663 [2024-11-02 14:51:35.529055] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.663 [2024-11-02 14:51:35.529080] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.663 [2024-11-02 14:51:35.529096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.663 [2024-11-02 14:51:35.532655] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.663 [2024-11-02 14:51:35.542066] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.663 [2024-11-02 14:51:35.542479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.663 [2024-11-02 14:51:35.542512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.663 [2024-11-02 14:51:35.542530] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.663 [2024-11-02 14:51:35.542768] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.663 [2024-11-02 14:51:35.543009] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.663 [2024-11-02 14:51:35.543034] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.663 [2024-11-02 14:51:35.543050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.663 [2024-11-02 14:51:35.546606] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.663 [2024-11-02 14:51:35.556034] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.663 [2024-11-02 14:51:35.556483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.663 [2024-11-02 14:51:35.556516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.663 [2024-11-02 14:51:35.556535] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.663 [2024-11-02 14:51:35.556773] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.663 [2024-11-02 14:51:35.557016] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.663 [2024-11-02 14:51:35.557041] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.663 [2024-11-02 14:51:35.557057] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.663 [2024-11-02 14:51:35.560611] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.663 [2024-11-02 14:51:35.570021] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.663 [2024-11-02 14:51:35.570432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.664 [2024-11-02 14:51:35.570465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.664 [2024-11-02 14:51:35.570489] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.664 [2024-11-02 14:51:35.570727] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.664 [2024-11-02 14:51:35.570968] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.664 [2024-11-02 14:51:35.570993] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.664 [2024-11-02 14:51:35.571009] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.664 [2024-11-02 14:51:35.574564] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.664 [2024-11-02 14:51:35.583974] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.664 [2024-11-02 14:51:35.584413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.664 [2024-11-02 14:51:35.584446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.664 [2024-11-02 14:51:35.584464] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.664 [2024-11-02 14:51:35.584702] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.664 [2024-11-02 14:51:35.584945] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.664 [2024-11-02 14:51:35.584971] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.664 [2024-11-02 14:51:35.584987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.664 [2024-11-02 14:51:35.588545] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.664 [2024-11-02 14:51:35.597959] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.664 [2024-11-02 14:51:35.598398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.664 [2024-11-02 14:51:35.598432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.664 [2024-11-02 14:51:35.598451] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.664 [2024-11-02 14:51:35.598690] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.664 [2024-11-02 14:51:35.598933] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.664 [2024-11-02 14:51:35.598958] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.664 [2024-11-02 14:51:35.598974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.664 [2024-11-02 14:51:35.602533] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.664 [2024-11-02 14:51:35.611943] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.664 [2024-11-02 14:51:35.612355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.664 [2024-11-02 14:51:35.612389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.664 [2024-11-02 14:51:35.612408] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.664 [2024-11-02 14:51:35.612647] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.664 [2024-11-02 14:51:35.612890] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.664 [2024-11-02 14:51:35.612922] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.664 [2024-11-02 14:51:35.612940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.664 [2024-11-02 14:51:35.616496] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.664 [2024-11-02 14:51:35.625911] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.664 [2024-11-02 14:51:35.626348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.664 [2024-11-02 14:51:35.626380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.664 [2024-11-02 14:51:35.626399] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.664 [2024-11-02 14:51:35.626636] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.664 [2024-11-02 14:51:35.626879] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.664 [2024-11-02 14:51:35.626903] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.664 [2024-11-02 14:51:35.626919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.664 [2024-11-02 14:51:35.630476] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.664 [2024-11-02 14:51:35.639886] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.664 [2024-11-02 14:51:35.640328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.664 [2024-11-02 14:51:35.640361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.664 [2024-11-02 14:51:35.640380] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.664 [2024-11-02 14:51:35.640617] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.664 [2024-11-02 14:51:35.640859] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.664 [2024-11-02 14:51:35.640883] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.664 [2024-11-02 14:51:35.640899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.664 [2024-11-02 14:51:35.644455] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.664 [2024-11-02 14:51:35.653895] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.664 [2024-11-02 14:51:35.654332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.664 [2024-11-02 14:51:35.654365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.664 [2024-11-02 14:51:35.654384] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.664 [2024-11-02 14:51:35.654621] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.664 [2024-11-02 14:51:35.654863] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.664 [2024-11-02 14:51:35.654888] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.664 [2024-11-02 14:51:35.654904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.664 [2024-11-02 14:51:35.658463] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.664 [2024-11-02 14:51:35.667883] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.664 [2024-11-02 14:51:35.668311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.664 [2024-11-02 14:51:35.668344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.664 [2024-11-02 14:51:35.668363] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.664 [2024-11-02 14:51:35.668600] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.664 [2024-11-02 14:51:35.668842] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.664 [2024-11-02 14:51:35.668867] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.664 [2024-11-02 14:51:35.668883] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.664 [2024-11-02 14:51:35.672442] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.664 [2024-11-02 14:51:35.681863] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.664 [2024-11-02 14:51:35.682289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.664 [2024-11-02 14:51:35.682322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.664 [2024-11-02 14:51:35.682341] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.664 [2024-11-02 14:51:35.682579] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.664 [2024-11-02 14:51:35.682820] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.664 [2024-11-02 14:51:35.682846] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.664 [2024-11-02 14:51:35.682861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.664 [2024-11-02 14:51:35.686421] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.664 [2024-11-02 14:51:35.695846] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.664 [2024-11-02 14:51:35.696265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.664 [2024-11-02 14:51:35.696297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.664 [2024-11-02 14:51:35.696315] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.664 [2024-11-02 14:51:35.696553] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.664 [2024-11-02 14:51:35.696795] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.664 [2024-11-02 14:51:35.696821] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.664 [2024-11-02 14:51:35.696836] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.664 [2024-11-02 14:51:35.700391] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.664 [2024-11-02 14:51:35.709809] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.664 [2024-11-02 14:51:35.710242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.664 [2024-11-02 14:51:35.710285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.665 [2024-11-02 14:51:35.710310] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.665 [2024-11-02 14:51:35.710549] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.665 [2024-11-02 14:51:35.710790] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.665 [2024-11-02 14:51:35.710816] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.665 [2024-11-02 14:51:35.710832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.665 [2024-11-02 14:51:35.714458] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.924 [2024-11-02 14:51:35.723910] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.924 [2024-11-02 14:51:35.724342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.924 [2024-11-02 14:51:35.724378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.924 [2024-11-02 14:51:35.724397] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.924 [2024-11-02 14:51:35.724636] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.924 [2024-11-02 14:51:35.724880] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.924 [2024-11-02 14:51:35.724906] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.924 [2024-11-02 14:51:35.724924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.924 [2024-11-02 14:51:35.728488] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.924 [2024-11-02 14:51:35.737919] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.924 [2024-11-02 14:51:35.738337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.924 [2024-11-02 14:51:35.738372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.924 [2024-11-02 14:51:35.738391] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.924 [2024-11-02 14:51:35.738630] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.924 [2024-11-02 14:51:35.738874] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.924 [2024-11-02 14:51:35.738899] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.924 [2024-11-02 14:51:35.738915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.924 [2024-11-02 14:51:35.742488] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.924 [2024-11-02 14:51:35.751943] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.924 [2024-11-02 14:51:35.752382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.924 [2024-11-02 14:51:35.752416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.924 [2024-11-02 14:51:35.752435] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.924 [2024-11-02 14:51:35.752675] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.924 [2024-11-02 14:51:35.752918] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.924 [2024-11-02 14:51:35.752944] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.924 [2024-11-02 14:51:35.752967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.924 [2024-11-02 14:51:35.756533] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.924 [2024-11-02 14:51:35.765955] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.924 [2024-11-02 14:51:35.766387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.924 [2024-11-02 14:51:35.766421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.924 [2024-11-02 14:51:35.766440] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.924 [2024-11-02 14:51:35.766678] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.924 [2024-11-02 14:51:35.766919] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.924 [2024-11-02 14:51:35.766944] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.924 [2024-11-02 14:51:35.766960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.924 [2024-11-02 14:51:35.770520] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.924 [2024-11-02 14:51:35.779936] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.924 [2024-11-02 14:51:35.780379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.924 [2024-11-02 14:51:35.780412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.924 [2024-11-02 14:51:35.780431] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.924 [2024-11-02 14:51:35.780669] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.924 [2024-11-02 14:51:35.780910] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.924 [2024-11-02 14:51:35.780936] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.924 [2024-11-02 14:51:35.780952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.924 [2024-11-02 14:51:35.784514] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.924 [2024-11-02 14:51:35.793938] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.924 [2024-11-02 14:51:35.794347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.924 [2024-11-02 14:51:35.794380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.924 [2024-11-02 14:51:35.794399] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.924 [2024-11-02 14:51:35.794637] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.924 [2024-11-02 14:51:35.794879] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.924 [2024-11-02 14:51:35.794904] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.924 [2024-11-02 14:51:35.794921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.924 [2024-11-02 14:51:35.798481] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.924 [2024-11-02 14:51:35.807901] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.924 [2024-11-02 14:51:35.808343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.924 [2024-11-02 14:51:35.808377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.924 [2024-11-02 14:51:35.808396] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.924 [2024-11-02 14:51:35.808635] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.924 [2024-11-02 14:51:35.808877] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.924 [2024-11-02 14:51:35.808902] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.924 [2024-11-02 14:51:35.808918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.924 [2024-11-02 14:51:35.812481] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.924 [2024-11-02 14:51:35.821900] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.924 [2024-11-02 14:51:35.822331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.924 [2024-11-02 14:51:35.822364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.924 [2024-11-02 14:51:35.822382] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.924 [2024-11-02 14:51:35.822621] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.924 [2024-11-02 14:51:35.822861] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.924 [2024-11-02 14:51:35.822886] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.924 [2024-11-02 14:51:35.822902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.924 [2024-11-02 14:51:35.826456] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.924 [2024-11-02 14:51:35.835873] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.924 [2024-11-02 14:51:35.836308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.924 [2024-11-02 14:51:35.836342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.924 [2024-11-02 14:51:35.836360] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.924 [2024-11-02 14:51:35.836599] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.924 [2024-11-02 14:51:35.836843] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.924 [2024-11-02 14:51:35.836867] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.924 [2024-11-02 14:51:35.836883] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.924 [2024-11-02 14:51:35.840446] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.924 [2024-11-02 14:51:35.849880] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.924 [2024-11-02 14:51:35.850322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.924 [2024-11-02 14:51:35.850356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.924 [2024-11-02 14:51:35.850374] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.924 [2024-11-02 14:51:35.850618] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.924 [2024-11-02 14:51:35.850859] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.924 [2024-11-02 14:51:35.850885] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.924 [2024-11-02 14:51:35.850901] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.925 [2024-11-02 14:51:35.854463] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.925 [2024-11-02 14:51:35.863882] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.925 [2024-11-02 14:51:35.864309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.925 [2024-11-02 14:51:35.864342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.925 [2024-11-02 14:51:35.864360] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.925 [2024-11-02 14:51:35.864598] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.925 [2024-11-02 14:51:35.864840] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.925 [2024-11-02 14:51:35.864865] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.925 [2024-11-02 14:51:35.864880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.925 [2024-11-02 14:51:35.868444] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.925 [2024-11-02 14:51:35.877895] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.925 [2024-11-02 14:51:35.878330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.925 [2024-11-02 14:51:35.878364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.925 [2024-11-02 14:51:35.878383] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.925 [2024-11-02 14:51:35.878621] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.925 [2024-11-02 14:51:35.878863] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.925 [2024-11-02 14:51:35.878888] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.925 [2024-11-02 14:51:35.878903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.925 [2024-11-02 14:51:35.882461] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.925 [2024-11-02 14:51:35.891876] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.925 [2024-11-02 14:51:35.892319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.925 [2024-11-02 14:51:35.892352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.925 [2024-11-02 14:51:35.892370] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.925 [2024-11-02 14:51:35.892608] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.925 [2024-11-02 14:51:35.892849] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.925 [2024-11-02 14:51:35.892873] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.925 [2024-11-02 14:51:35.892896] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.925 [2024-11-02 14:51:35.896458] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.925 [2024-11-02 14:51:35.905870] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.925 [2024-11-02 14:51:35.906306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.925 [2024-11-02 14:51:35.906340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.925 [2024-11-02 14:51:35.906358] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.925 [2024-11-02 14:51:35.906596] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.925 [2024-11-02 14:51:35.906837] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.925 [2024-11-02 14:51:35.906862] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.925 [2024-11-02 14:51:35.906878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.925 [2024-11-02 14:51:35.910438] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.925 [2024-11-02 14:51:35.919853] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.925 [2024-11-02 14:51:35.920294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.925 [2024-11-02 14:51:35.920328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.925 [2024-11-02 14:51:35.920346] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.925 [2024-11-02 14:51:35.920584] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.925 [2024-11-02 14:51:35.920827] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.925 [2024-11-02 14:51:35.920852] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.925 [2024-11-02 14:51:35.920868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.925 [2024-11-02 14:51:35.924426] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.925 [2024-11-02 14:51:35.933844] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.925 [2024-11-02 14:51:35.934287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.925 [2024-11-02 14:51:35.934320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.925 [2024-11-02 14:51:35.934339] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.925 [2024-11-02 14:51:35.934576] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.925 [2024-11-02 14:51:35.934817] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.925 [2024-11-02 14:51:35.934842] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.925 [2024-11-02 14:51:35.934858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.925 [2024-11-02 14:51:35.938413] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.925 [2024-11-02 14:51:35.947832] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.925 [2024-11-02 14:51:35.948237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.925 [2024-11-02 14:51:35.948285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.925 [2024-11-02 14:51:35.948306] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.925 [2024-11-02 14:51:35.948544] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.925 [2024-11-02 14:51:35.948789] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.925 [2024-11-02 14:51:35.948814] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.925 [2024-11-02 14:51:35.948829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.925 [2024-11-02 14:51:35.952409] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.925 4229.20 IOPS, 16.52 MiB/s [2024-11-02T13:51:35.980Z] [2024-11-02 14:51:35.961704] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.925 [2024-11-02 14:51:35.962134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.925 [2024-11-02 14:51:35.962167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.925 [2024-11-02 14:51:35.962186] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.925 [2024-11-02 14:51:35.962437] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.925 [2024-11-02 14:51:35.962680] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.925 [2024-11-02 14:51:35.962706] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.925 [2024-11-02 14:51:35.962722] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.925 [2024-11-02 14:51:35.966275] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.925 [2024-11-02 14:51:35.975810] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.925 [2024-11-02 14:51:35.976222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.925 [2024-11-02 14:51:35.976265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:43.925 [2024-11-02 14:51:35.976287] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:43.925 [2024-11-02 14:51:35.976525] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:43.925 [2024-11-02 14:51:35.976770] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.925 [2024-11-02 14:51:35.976796] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.925 [2024-11-02 14:51:35.976812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.184 [2024-11-02 14:51:35.980466] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.184 [2024-11-02 14:51:35.989783] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.184 [2024-11-02 14:51:35.990212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.184 [2024-11-02 14:51:35.990246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:44.184 [2024-11-02 14:51:35.990279] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:44.184 [2024-11-02 14:51:35.990519] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:44.184 [2024-11-02 14:51:35.990769] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.184 [2024-11-02 14:51:35.990795] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.184 [2024-11-02 14:51:35.990811] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.184 [2024-11-02 14:51:35.994376] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.184 [2024-11-02 14:51:36.003806] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.184 [2024-11-02 14:51:36.004234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.184 [2024-11-02 14:51:36.004275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:44.184 [2024-11-02 14:51:36.004296] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:44.185 [2024-11-02 14:51:36.004535] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:44.185 [2024-11-02 14:51:36.004776] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.185 [2024-11-02 14:51:36.004801] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.185 [2024-11-02 14:51:36.004817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.185 [2024-11-02 14:51:36.008372] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.185 [2024-11-02 14:51:36.017790] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.185 [2024-11-02 14:51:36.018214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.185 [2024-11-02 14:51:36.018246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:44.185 [2024-11-02 14:51:36.018279] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:44.185 [2024-11-02 14:51:36.018519] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:44.185 [2024-11-02 14:51:36.018761] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.185 [2024-11-02 14:51:36.018785] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.185 [2024-11-02 14:51:36.018802] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.185 [2024-11-02 14:51:36.022361] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.185 [2024-11-02 14:51:36.031781] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.185 [2024-11-02 14:51:36.032211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.185 [2024-11-02 14:51:36.032244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:44.185 [2024-11-02 14:51:36.032276] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:44.185 [2024-11-02 14:51:36.032515] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:44.185 [2024-11-02 14:51:36.032759] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.185 [2024-11-02 14:51:36.032783] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.185 [2024-11-02 14:51:36.032799] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.185 [2024-11-02 14:51:36.036362] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.185 [2024-11-02 14:51:36.045802] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.185 [2024-11-02 14:51:36.046231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.185 [2024-11-02 14:51:36.046274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:44.185 [2024-11-02 14:51:36.046295] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:44.185 [2024-11-02 14:51:36.046532] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:44.185 [2024-11-02 14:51:36.046774] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.185 [2024-11-02 14:51:36.046799] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.185 [2024-11-02 14:51:36.046815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.185 [2024-11-02 14:51:36.050393] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.185 [2024-11-02 14:51:36.059646] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.185 [2024-11-02 14:51:36.060093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.185 [2024-11-02 14:51:36.060126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:44.185 [2024-11-02 14:51:36.060144] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:44.185 [2024-11-02 14:51:36.060394] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:44.185 [2024-11-02 14:51:36.060636] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.185 [2024-11-02 14:51:36.060661] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.185 [2024-11-02 14:51:36.060677] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.185 [2024-11-02 14:51:36.064289] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.185 [2024-11-02 14:51:36.073522] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.185 [2024-11-02 14:51:36.074024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.185 [2024-11-02 14:51:36.074056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:44.185 [2024-11-02 14:51:36.074074] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:44.185 [2024-11-02 14:51:36.074326] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:44.185 [2024-11-02 14:51:36.074570] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.185 [2024-11-02 14:51:36.074594] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.185 [2024-11-02 14:51:36.074610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.185 [2024-11-02 14:51:36.078161] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.185 [2024-11-02 14:51:36.087387] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.185 [2024-11-02 14:51:36.087824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.185 [2024-11-02 14:51:36.087856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:44.185 [2024-11-02 14:51:36.087881] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:44.185 [2024-11-02 14:51:36.088119] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:44.185 [2024-11-02 14:51:36.088372] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.185 [2024-11-02 14:51:36.088397] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.185 [2024-11-02 14:51:36.088413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.185 [2024-11-02 14:51:36.091959] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.185 [2024-11-02 14:51:36.101372] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.185 [2024-11-02 14:51:36.101798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.185 [2024-11-02 14:51:36.101831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:44.185 [2024-11-02 14:51:36.101849] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:44.185 [2024-11-02 14:51:36.102087] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:44.185 [2024-11-02 14:51:36.102341] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.185 [2024-11-02 14:51:36.102366] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.185 [2024-11-02 14:51:36.102381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.185 [2024-11-02 14:51:36.105923] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.185 [2024-11-02 14:51:36.115347] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.185 [2024-11-02 14:51:36.115774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.185 [2024-11-02 14:51:36.115806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:44.185 [2024-11-02 14:51:36.115825] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:44.185 [2024-11-02 14:51:36.116062] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:44.185 [2024-11-02 14:51:36.116320] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.185 [2024-11-02 14:51:36.116346] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.185 [2024-11-02 14:51:36.116362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.185 [2024-11-02 14:51:36.119919] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.185 [2024-11-02 14:51:36.129358] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.185 [2024-11-02 14:51:36.129801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.185 [2024-11-02 14:51:36.129833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:44.185 [2024-11-02 14:51:36.129852] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:44.185 [2024-11-02 14:51:36.130091] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:44.185 [2024-11-02 14:51:36.130346] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.185 [2024-11-02 14:51:36.130377] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.185 [2024-11-02 14:51:36.130394] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.185 [2024-11-02 14:51:36.133946] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.185 [2024-11-02 14:51:36.143196] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.185 [2024-11-02 14:51:36.143653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.185 [2024-11-02 14:51:36.143686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:44.185 [2024-11-02 14:51:36.143704] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:44.185 [2024-11-02 14:51:36.143941] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:44.186 [2024-11-02 14:51:36.144183] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.186 [2024-11-02 14:51:36.144208] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.186 [2024-11-02 14:51:36.144224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.186 [2024-11-02 14:51:36.147788] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.186 [2024-11-02 14:51:36.157029] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.186 [2024-11-02 14:51:36.157459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.186 [2024-11-02 14:51:36.157492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:44.186 [2024-11-02 14:51:36.157510] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:44.186 [2024-11-02 14:51:36.157749] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:44.186 [2024-11-02 14:51:36.157991] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.186 [2024-11-02 14:51:36.158016] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.186 [2024-11-02 14:51:36.158032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.186 [2024-11-02 14:51:36.161590] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.186 [2024-11-02 14:51:36.171027] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.186 [2024-11-02 14:51:36.171447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.186 [2024-11-02 14:51:36.171481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:44.186 [2024-11-02 14:51:36.171499] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:44.186 [2024-11-02 14:51:36.171737] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:44.186 [2024-11-02 14:51:36.171978] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.186 [2024-11-02 14:51:36.172004] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.186 [2024-11-02 14:51:36.172020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.186 [2024-11-02 14:51:36.175583] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.186 [2024-11-02 14:51:36.185019] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.186 [2024-11-02 14:51:36.185441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.186 [2024-11-02 14:51:36.185473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:44.186 [2024-11-02 14:51:36.185492] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:44.186 [2024-11-02 14:51:36.185729] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:44.186 [2024-11-02 14:51:36.185970] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.186 [2024-11-02 14:51:36.185995] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.186 [2024-11-02 14:51:36.186011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.186 [2024-11-02 14:51:36.189581] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.186 [2024-11-02 14:51:36.198997] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.186 [2024-11-02 14:51:36.199409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.186 [2024-11-02 14:51:36.199443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:44.186 [2024-11-02 14:51:36.199462] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:44.186 [2024-11-02 14:51:36.199699] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:44.186 [2024-11-02 14:51:36.199940] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.186 [2024-11-02 14:51:36.199965] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.186 [2024-11-02 14:51:36.199981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.186 [2024-11-02 14:51:36.203540] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.186 [2024-11-02 14:51:36.212959] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.186 [2024-11-02 14:51:36.213373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.186 [2024-11-02 14:51:36.213406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:44.186 [2024-11-02 14:51:36.213425] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:44.186 [2024-11-02 14:51:36.213663] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:44.186 [2024-11-02 14:51:36.213904] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.186 [2024-11-02 14:51:36.213930] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.186 [2024-11-02 14:51:36.213946] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.186 [2024-11-02 14:51:36.217508] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.186 [2024-11-02 14:51:36.226944] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.186 [2024-11-02 14:51:36.227375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.186 [2024-11-02 14:51:36.227409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:44.186 [2024-11-02 14:51:36.227437] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:44.186 [2024-11-02 14:51:36.227676] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:44.186 [2024-11-02 14:51:36.227920] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.186 [2024-11-02 14:51:36.227946] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.186 [2024-11-02 14:51:36.227961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.186 [2024-11-02 14:51:36.231527] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.446 [2024-11-02 14:51:36.240974] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.446 [2024-11-02 14:51:36.241426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.446 [2024-11-02 14:51:36.241460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:44.446 [2024-11-02 14:51:36.241479] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:44.446 [2024-11-02 14:51:36.241718] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:44.446 [2024-11-02 14:51:36.241959] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.446 [2024-11-02 14:51:36.241985] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.446 [2024-11-02 14:51:36.242001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.446 [2024-11-02 14:51:36.245649] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.446 [2024-11-02 14:51:36.254891] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.446 [2024-11-02 14:51:36.255324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.446 [2024-11-02 14:51:36.255357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:44.446 [2024-11-02 14:51:36.255376] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:44.446 [2024-11-02 14:51:36.255614] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:44.446 [2024-11-02 14:51:36.255856] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.446 [2024-11-02 14:51:36.255882] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.446 [2024-11-02 14:51:36.255898] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.446 [2024-11-02 14:51:36.259476] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.446 [2024-11-02 14:51:36.268924] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.446 [2024-11-02 14:51:36.269359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.446 [2024-11-02 14:51:36.269393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:44.446 [2024-11-02 14:51:36.269412] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:44.446 [2024-11-02 14:51:36.269649] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:44.446 [2024-11-02 14:51:36.269892] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.446 [2024-11-02 14:51:36.269923] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.446 [2024-11-02 14:51:36.269940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.446 [2024-11-02 14:51:36.273505] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.446 [2024-11-02 14:51:36.282938] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.446 [2024-11-02 14:51:36.283372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.446 [2024-11-02 14:51:36.283405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:44.446 [2024-11-02 14:51:36.283423] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:44.446 [2024-11-02 14:51:36.283662] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:44.446 [2024-11-02 14:51:36.283904] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.446 [2024-11-02 14:51:36.283929] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.446 [2024-11-02 14:51:36.283945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.446 [2024-11-02 14:51:36.287504] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.446 [2024-11-02 14:51:36.296961] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.446 [2024-11-02 14:51:36.297450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.446 [2024-11-02 14:51:36.297484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:44.446 [2024-11-02 14:51:36.297502] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:44.446 [2024-11-02 14:51:36.297740] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:44.446 [2024-11-02 14:51:36.297983] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.446 [2024-11-02 14:51:36.298007] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.446 [2024-11-02 14:51:36.298023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.446 [2024-11-02 14:51:36.301590] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.446 [2024-11-02 14:51:36.310822] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.446 [2024-11-02 14:51:36.311222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.446 [2024-11-02 14:51:36.311253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:44.446 [2024-11-02 14:51:36.311284] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:44.446 [2024-11-02 14:51:36.311521] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:44.446 [2024-11-02 14:51:36.311764] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.446 [2024-11-02 14:51:36.311789] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.446 [2024-11-02 14:51:36.311804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.446 [2024-11-02 14:51:36.315366] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.446 [2024-11-02 14:51:36.324807] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.446 [2024-11-02 14:51:36.325251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.446 [2024-11-02 14:51:36.325290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:44.446 [2024-11-02 14:51:36.325308] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:44.446 [2024-11-02 14:51:36.325546] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:44.446 [2024-11-02 14:51:36.325788] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.446 [2024-11-02 14:51:36.325812] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.446 [2024-11-02 14:51:36.325828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.446 [2024-11-02 14:51:36.329387] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.446 [2024-11-02 14:51:36.338809] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.446 [2024-11-02 14:51:36.339233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.446 [2024-11-02 14:51:36.339274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:44.446 [2024-11-02 14:51:36.339295] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:44.446 [2024-11-02 14:51:36.339532] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:44.447 [2024-11-02 14:51:36.339774] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.447 [2024-11-02 14:51:36.339799] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.447 [2024-11-02 14:51:36.339815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.447 [2024-11-02 14:51:36.343371] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.447 [2024-11-02 14:51:36.352827] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.447 [2024-11-02 14:51:36.353234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.447 [2024-11-02 14:51:36.353276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:44.447 [2024-11-02 14:51:36.353297] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:44.447 [2024-11-02 14:51:36.353535] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:44.447 [2024-11-02 14:51:36.353778] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.447 [2024-11-02 14:51:36.353802] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.447 [2024-11-02 14:51:36.353817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.447 [2024-11-02 14:51:36.357380] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.447 [2024-11-02 14:51:36.366820] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.447 [2024-11-02 14:51:36.367245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.447 [2024-11-02 14:51:36.367288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:44.447 [2024-11-02 14:51:36.367308] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:44.447 [2024-11-02 14:51:36.367552] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:44.447 [2024-11-02 14:51:36.367795] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.447 [2024-11-02 14:51:36.367820] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.447 [2024-11-02 14:51:36.367836] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.447 [2024-11-02 14:51:36.371391] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.447 [2024-11-02 14:51:36.380818] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.447 [2024-11-02 14:51:36.381217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.447 [2024-11-02 14:51:36.381250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:44.447 [2024-11-02 14:51:36.381281] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:44.447 [2024-11-02 14:51:36.381520] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:44.447 [2024-11-02 14:51:36.381761] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.447 [2024-11-02 14:51:36.381787] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.447 [2024-11-02 14:51:36.381803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.447 [2024-11-02 14:51:36.385363] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.447 [2024-11-02 14:51:36.394796] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.447 [2024-11-02 14:51:36.395195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.447 [2024-11-02 14:51:36.395228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:44.447 [2024-11-02 14:51:36.395246] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:44.447 [2024-11-02 14:51:36.395496] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:44.447 [2024-11-02 14:51:36.395739] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.447 [2024-11-02 14:51:36.395764] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.447 [2024-11-02 14:51:36.395780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.447 [2024-11-02 14:51:36.399335] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.447 [2024-11-02 14:51:36.408764] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.447 [2024-11-02 14:51:36.409189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.447 [2024-11-02 14:51:36.409222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:44.447 [2024-11-02 14:51:36.409240] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:44.447 [2024-11-02 14:51:36.409489] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:44.447 [2024-11-02 14:51:36.409730] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.447 [2024-11-02 14:51:36.409755] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.447 [2024-11-02 14:51:36.409777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.447 [2024-11-02 14:51:36.413337] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.447 [2024-11-02 14:51:36.422760] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.447 [2024-11-02 14:51:36.423199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.447 [2024-11-02 14:51:36.423231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:44.447 [2024-11-02 14:51:36.423249] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:44.447 [2024-11-02 14:51:36.423502] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:44.447 [2024-11-02 14:51:36.423744] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.447 [2024-11-02 14:51:36.423770] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.447 [2024-11-02 14:51:36.423786] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.447 [2024-11-02 14:51:36.427341] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.447 [2024-11-02 14:51:36.436762] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.447 [2024-11-02 14:51:36.437197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.447 [2024-11-02 14:51:36.437229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:44.447 [2024-11-02 14:51:36.437248] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:44.447 [2024-11-02 14:51:36.437498] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:44.447 [2024-11-02 14:51:36.437739] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.447 [2024-11-02 14:51:36.437764] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.447 [2024-11-02 14:51:36.437780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1531684 Killed "${NVMF_APP[@]}" "$@" 00:35:44.447 [2024-11-02 14:51:36.441335] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.447 14:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:35:44.447 14:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:35:44.447 14:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:35:44.447 14:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:44.447 14:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:44.447 14:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # nvmfpid=1532720 00:35:44.447 14:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:35:44.447 14:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # waitforlisten 1532720 00:35:44.447 14:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 1532720 ']' 00:35:44.447 14:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:44.447 14:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:44.447 14:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:44.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:35:44.447 14:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:44.447 14:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:44.447 [2024-11-02 14:51:36.450770] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.447 [2024-11-02 14:51:36.451209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.447 [2024-11-02 14:51:36.451241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:44.447 [2024-11-02 14:51:36.451268] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:44.447 [2024-11-02 14:51:36.451509] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:44.447 [2024-11-02 14:51:36.451764] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.447 [2024-11-02 14:51:36.451789] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.447 [2024-11-02 14:51:36.451805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.447 [2024-11-02 14:51:36.455359] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.447 [2024-11-02 14:51:36.464782] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.447 [2024-11-02 14:51:36.465223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.447 [2024-11-02 14:51:36.465267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:44.448 [2024-11-02 14:51:36.465288] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:44.448 [2024-11-02 14:51:36.465526] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:44.448 [2024-11-02 14:51:36.465769] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.448 [2024-11-02 14:51:36.465793] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.448 [2024-11-02 14:51:36.465809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.448 [2024-11-02 14:51:36.469368] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
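Interleaved with the reconnect errors, tgt_init restarts the target: nvmfappstart -m 0xE launches a fresh nvmf_tgt (pid 1532720) inside the test's network namespace and waitforlisten blocks until the application answers on its RPC socket. Stripped of the harness, the sequence looks roughly like the sketch below; the namespace, binary path and socket are the ones printed above, while the readiness poll via rpc_get_methods is only an illustrative stand-in for waitforlisten:

```bash
# Launch the NVMe-oF target in the test namespace with tracepoints enabled
# (-e 0xFFFF) and reactors on the cores selected by -m 0xE.
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!

# Block until the target accepts RPCs on the default UNIX domain socket.
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done
echo "nvmf_tgt ($nvmfpid) is listening on /var/tmp/spdk.sock"
```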
00:35:44.448 [2024-11-02 14:51:36.478597] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.448 [2024-11-02 14:51:36.479003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.448 [2024-11-02 14:51:36.479030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:44.448 [2024-11-02 14:51:36.479047] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:44.448 [2024-11-02 14:51:36.479306] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:44.448 [2024-11-02 14:51:36.479511] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.448 [2024-11-02 14:51:36.479532] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.448 [2024-11-02 14:51:36.479546] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.448 [2024-11-02 14:51:36.482548] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.448 [2024-11-02 14:51:36.491777] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.448 [2024-11-02 14:51:36.492176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.448 [2024-11-02 14:51:36.492204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:44.448 [2024-11-02 14:51:36.492220] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:44.448 [2024-11-02 14:51:36.492483] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:44.448 [2024-11-02 14:51:36.492698] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.448 [2024-11-02 14:51:36.492718] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.448 [2024-11-02 14:51:36.492731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.448 [2024-11-02 14:51:36.495803] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.448 [2024-11-02 14:51:36.496691] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:35:44.448 [2024-11-02 14:51:36.496747] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:44.707 [2024-11-02 14:51:36.505310] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.707 [2024-11-02 14:51:36.505732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.707 [2024-11-02 14:51:36.505763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:44.707 [2024-11-02 14:51:36.505779] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:44.707 [2024-11-02 14:51:36.506028] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:44.707 [2024-11-02 14:51:36.506221] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.707 [2024-11-02 14:51:36.506265] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.707 [2024-11-02 14:51:36.506283] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.708 [2024-11-02 14:51:36.509526] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.708 [2024-11-02 14:51:36.518606] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.708 [2024-11-02 14:51:36.518998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.708 [2024-11-02 14:51:36.519028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:44.708 [2024-11-02 14:51:36.519044] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:44.708 [2024-11-02 14:51:36.519306] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:44.708 [2024-11-02 14:51:36.519511] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.708 [2024-11-02 14:51:36.519546] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.708 [2024-11-02 14:51:36.519560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.708 [2024-11-02 14:51:36.522524] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.708 [2024-11-02 14:51:36.531895] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.708 [2024-11-02 14:51:36.532349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.708 [2024-11-02 14:51:36.532385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:44.708 [2024-11-02 14:51:36.532402] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:44.708 [2024-11-02 14:51:36.532641] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:44.708 [2024-11-02 14:51:36.532849] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.708 [2024-11-02 14:51:36.532869] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.708 [2024-11-02 14:51:36.532881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.708 [2024-11-02 14:51:36.535812] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.708 [2024-11-02 14:51:36.545680] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.708 [2024-11-02 14:51:36.546089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.708 [2024-11-02 14:51:36.546121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:44.708 [2024-11-02 14:51:36.546140] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:44.708 [2024-11-02 14:51:36.546411] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:44.708 [2024-11-02 14:51:36.546653] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.708 [2024-11-02 14:51:36.546677] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.708 [2024-11-02 14:51:36.546693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.708 [2024-11-02 14:51:36.550212] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.708 [2024-11-02 14:51:36.559523] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.708 [2024-11-02 14:51:36.559984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.708 [2024-11-02 14:51:36.560013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:44.708 [2024-11-02 14:51:36.560030] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:44.708 [2024-11-02 14:51:36.560298] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:44.708 [2024-11-02 14:51:36.560517] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.708 [2024-11-02 14:51:36.560537] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.708 [2024-11-02 14:51:36.560550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.708 [2024-11-02 14:51:36.564056] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.708 [2024-11-02 14:51:36.567376] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:44.708 [2024-11-02 14:51:36.573381] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.708 [2024-11-02 14:51:36.573860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.708 [2024-11-02 14:51:36.573890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:44.708 [2024-11-02 14:51:36.573907] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:44.708 [2024-11-02 14:51:36.574144] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:44.708 [2024-11-02 14:51:36.574401] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.708 [2024-11-02 14:51:36.574423] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.708 [2024-11-02 14:51:36.574438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.708 [2024-11-02 14:51:36.577917] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.708 [2024-11-02 14:51:36.587192] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.708 [2024-11-02 14:51:36.587801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.708 [2024-11-02 14:51:36.587840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:44.708 [2024-11-02 14:51:36.587862] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:44.708 [2024-11-02 14:51:36.588108] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:44.708 [2024-11-02 14:51:36.588369] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.708 [2024-11-02 14:51:36.588390] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.708 [2024-11-02 14:51:36.588407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.708 [2024-11-02 14:51:36.591895] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.708 [2024-11-02 14:51:36.601112] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.708 [2024-11-02 14:51:36.601586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.708 [2024-11-02 14:51:36.601619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:44.708 [2024-11-02 14:51:36.601638] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:44.708 [2024-11-02 14:51:36.601876] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:44.708 [2024-11-02 14:51:36.602119] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.708 [2024-11-02 14:51:36.602143] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.708 [2024-11-02 14:51:36.602160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.708 [2024-11-02 14:51:36.605656] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.708 [2024-11-02 14:51:36.614912] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.708 [2024-11-02 14:51:36.615336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.708 [2024-11-02 14:51:36.615367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:44.708 [2024-11-02 14:51:36.615385] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:44.708 [2024-11-02 14:51:36.615624] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:44.708 [2024-11-02 14:51:36.615868] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.708 [2024-11-02 14:51:36.615893] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.708 [2024-11-02 14:51:36.615918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.708 [2024-11-02 14:51:36.619416] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.708 [2024-11-02 14:51:36.628678] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.708 [2024-11-02 14:51:36.629282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.708 [2024-11-02 14:51:36.629335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:44.708 [2024-11-02 14:51:36.629356] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:44.708 [2024-11-02 14:51:36.629633] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:44.708 [2024-11-02 14:51:36.629882] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.708 [2024-11-02 14:51:36.629907] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.708 [2024-11-02 14:51:36.629927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.708 [2024-11-02 14:51:36.633447] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.708 [2024-11-02 14:51:36.642477] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.708 [2024-11-02 14:51:36.642934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.708 [2024-11-02 14:51:36.642966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:44.708 [2024-11-02 14:51:36.642985] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:44.708 [2024-11-02 14:51:36.643223] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:44.708 [2024-11-02 14:51:36.643468] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.708 [2024-11-02 14:51:36.643490] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.708 [2024-11-02 14:51:36.643505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.708 [2024-11-02 14:51:36.646995] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.709 [2024-11-02 14:51:36.656319] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.709 [2024-11-02 14:51:36.656726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.709 [2024-11-02 14:51:36.656768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:44.709 [2024-11-02 14:51:36.656786] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:44.709 [2024-11-02 14:51:36.657032] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:44.709 [2024-11-02 14:51:36.657300] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.709 [2024-11-02 14:51:36.657321] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.709 [2024-11-02 14:51:36.657335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.709 [2024-11-02 14:51:36.659596] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:44.709 [2024-11-02 14:51:36.659633] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:44.709 [2024-11-02 14:51:36.659656] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:44.709 [2024-11-02 14:51:36.659679] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:44.709 [2024-11-02 14:51:36.659692] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:44.709 [2024-11-02 14:51:36.659750] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:35:44.709 [2024-11-02 14:51:36.659869] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:35:44.709 [2024-11-02 14:51:36.659872] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:35:44.709 [2024-11-02 14:51:36.660574] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.709 [2024-11-02 14:51:36.669768] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.709 [2024-11-02 14:51:36.670408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.709 [2024-11-02 14:51:36.670460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:44.709 [2024-11-02 14:51:36.670481] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:44.709 [2024-11-02 14:51:36.670731] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:44.709 [2024-11-02 14:51:36.670942] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.709 [2024-11-02 14:51:36.670964] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.709 [2024-11-02 14:51:36.670980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.709 [2024-11-02 14:51:36.674150] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.709 [2024-11-02 14:51:36.683315] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.709 [2024-11-02 14:51:36.683933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.709 [2024-11-02 14:51:36.683982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:44.709 [2024-11-02 14:51:36.684004] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:44.709 [2024-11-02 14:51:36.684230] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:44.709 [2024-11-02 14:51:36.684479] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.709 [2024-11-02 14:51:36.684503] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.709 [2024-11-02 14:51:36.684520] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.709 [2024-11-02 14:51:36.687695] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
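The -m 0xE core mask passed to nvmf_tgt above is 0b1110, i.e. CPU cores 1, 2 and 3, which matches the "Total cores available: 3" notice and the three reactor threads reported here. A small helper (illustrative only, not part of the test scripts) decodes such a mask:

```bash
# List the CPU cores selected by an SPDK core mask such as the -m 0xE above.
decode_core_mask() {
    local mask=$(( $1 )) core=0
    while (( mask )); do
        (( mask & 1 )) && echo "core $core"
        (( mask >>= 1, core++ ))
    done
}

decode_core_mask 0xE   # -> core 1, core 2, core 3 (the cores the reactors start on)
```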
00:35:44.709 [2024-11-02 14:51:36.696817] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.709 [2024-11-02 14:51:36.697385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.709 [2024-11-02 14:51:36.697437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:44.709 [2024-11-02 14:51:36.697458] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:44.709 [2024-11-02 14:51:36.697710] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:44.709 [2024-11-02 14:51:36.697922] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.709 [2024-11-02 14:51:36.697943] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.709 [2024-11-02 14:51:36.697969] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.709 [2024-11-02 14:51:36.701094] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.709 [2024-11-02 14:51:36.710535] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.709 [2024-11-02 14:51:36.711097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.709 [2024-11-02 14:51:36.711145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:44.709 [2024-11-02 14:51:36.711166] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:44.709 [2024-11-02 14:51:36.711414] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:44.709 [2024-11-02 14:51:36.711643] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.709 [2024-11-02 14:51:36.711665] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.709 [2024-11-02 14:51:36.711682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.709 [2024-11-02 14:51:36.714826] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.709 [2024-11-02 14:51:36.724068] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.709 [2024-11-02 14:51:36.724688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.709 [2024-11-02 14:51:36.724741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:44.709 [2024-11-02 14:51:36.724763] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:44.709 [2024-11-02 14:51:36.725014] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:44.709 [2024-11-02 14:51:36.725265] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.709 [2024-11-02 14:51:36.725288] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.709 [2024-11-02 14:51:36.725307] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.709 [2024-11-02 14:51:36.728601] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.709 [2024-11-02 14:51:36.737743] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.709 [2024-11-02 14:51:36.738228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.709 [2024-11-02 14:51:36.738282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:44.709 [2024-11-02 14:51:36.738315] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:44.709 [2024-11-02 14:51:36.738563] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:44.709 [2024-11-02 14:51:36.738790] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.709 [2024-11-02 14:51:36.738811] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.709 [2024-11-02 14:51:36.738829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.709 [2024-11-02 14:51:36.741946] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.709 [2024-11-02 14:51:36.751260] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.709 [2024-11-02 14:51:36.751645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.709 [2024-11-02 14:51:36.751693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:44.709 [2024-11-02 14:51:36.751710] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:44.709 [2024-11-02 14:51:36.751938] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:44.709 [2024-11-02 14:51:36.752159] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.709 [2024-11-02 14:51:36.752180] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.709 [2024-11-02 14:51:36.752194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.709 [2024-11-02 14:51:36.755419] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.969 [2024-11-02 14:51:36.764825] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.969 [2024-11-02 14:51:36.765246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.969 [2024-11-02 14:51:36.765284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:44.969 [2024-11-02 14:51:36.765302] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:44.969 [2024-11-02 14:51:36.765526] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:44.969 [2024-11-02 14:51:36.765756] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.969 [2024-11-02 14:51:36.765779] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.969 [2024-11-02 14:51:36.765794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.969 [2024-11-02 14:51:36.769106] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.969 14:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:44.969 14:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:35:44.969 14:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:35:44.969 14:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:44.969 14:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:44.969 [2024-11-02 14:51:36.778418] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.969 [2024-11-02 14:51:36.778845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.969 [2024-11-02 14:51:36.778885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:44.969 [2024-11-02 14:51:36.778901] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:44.969 [2024-11-02 14:51:36.779144] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:44.969 [2024-11-02 14:51:36.779390] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.969 [2024-11-02 14:51:36.779413] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.969 [2024-11-02 14:51:36.779428] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.969 [2024-11-02 14:51:36.782625] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.969 14:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:44.969 14:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:44.969 14:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.969 14:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:44.969 [2024-11-02 14:51:36.791926] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.969 [2024-11-02 14:51:36.792361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.969 [2024-11-02 14:51:36.792391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:44.969 [2024-11-02 14:51:36.792409] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:44.969 [2024-11-02 14:51:36.792637] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:44.969 [2024-11-02 14:51:36.792848] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.969 [2024-11-02 14:51:36.792869] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.969 [2024-11-02 14:51:36.792883] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.969 [2024-11-02 14:51:36.794952] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:44.969 [2024-11-02 14:51:36.796127] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.969 [2024-11-02 14:51:36.805758] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.969 [2024-11-02 14:51:36.806195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.969 [2024-11-02 14:51:36.806227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:44.969 [2024-11-02 14:51:36.806248] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:44.969 [2024-11-02 14:51:36.806502] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:44.969 [2024-11-02 14:51:36.806755] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.969 [2024-11-02 14:51:36.806779] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.969 [2024-11-02 14:51:36.806795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.969 [2024-11-02 14:51:36.810154] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.969 14:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.969 14:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:44.969 14:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.969 14:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:44.969 [2024-11-02 14:51:36.819231] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.969 [2024-11-02 14:51:36.819656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.969 [2024-11-02 14:51:36.819695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:44.969 [2024-11-02 14:51:36.819712] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:44.969 [2024-11-02 14:51:36.819955] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:44.969 [2024-11-02 14:51:36.820161] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.969 [2024-11-02 14:51:36.820182] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.969 [2024-11-02 14:51:36.820202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.969 [2024-11-02 14:51:36.823493] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.969 [2024-11-02 14:51:36.832778] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.969 [2024-11-02 14:51:36.833351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.969 [2024-11-02 14:51:36.833404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:44.969 [2024-11-02 14:51:36.833425] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:44.969 [2024-11-02 14:51:36.833676] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:44.969 [2024-11-02 14:51:36.833885] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.969 [2024-11-02 14:51:36.833906] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.969 [2024-11-02 14:51:36.833924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:35:44.969 Malloc0 00:35:44.969 14:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.969 14:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:44.969 14:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.969 14:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:44.970 [2024-11-02 14:51:36.837179] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.970 14:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.970 14:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:44.970 14:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.970 14:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:44.970 [2024-11-02 14:51:36.846388] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.970 [2024-11-02 14:51:36.846820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.970 [2024-11-02 14:51:36.846849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410090 with addr=10.0.0.2, port=4420 00:35:44.970 [2024-11-02 14:51:36.846870] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410090 is same with the state(6) to be set 00:35:44.970 [2024-11-02 14:51:36.847112] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410090 (9): Bad file descriptor 00:35:44.970 [2024-11-02 14:51:36.847363] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.970 [2024-11-02 14:51:36.847386] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.970 [2024-11-02 14:51:36.847401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.970 [2024-11-02 14:51:36.850618] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.970 14:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.970 14:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:44.970 14:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.970 14:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:44.970 [2024-11-02 14:51:36.856318] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:44.970 [2024-11-02 14:51:36.860025] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.970 14:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.970 14:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1531943 00:35:44.970 [2024-11-02 14:51:36.934970] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
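With the target process back up, the bdevperf.sh steps traced above rebuild its configuration over RPC: create the TCP transport, create a 64 MiB malloc bdev with 512-byte blocks, create subsystem nqn.2016-06.io.spdk:cnode1 (allow-any-host, serial SPDK00000000000001), attach Malloc0 as a namespace, and add a TCP listener on 10.0.0.2:4420, after which wait 1531943 resumes the bdevperf run and the controller reset finally succeeds. Issued directly with scripts/rpc.py against a running nvmf_tgt, the same bring-up looks roughly like the sketch below (socket path is the default and namespace handling is omitted; transport flags are copied from the trace above):

```bash
RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"

$RPC nvmf_create_transport -t tcp -o -u 8192      # TCP transport; -u 8192 sets the I/O unit size
$RPC bdev_malloc_create 64 512 -b Malloc0         # 64 MiB RAM-backed bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```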
00:35:45.904 3552.17 IOPS, 13.88 MiB/s [2024-11-02T13:51:39.334Z] 4301.86 IOPS, 16.80 MiB/s [2024-11-02T13:51:40.269Z] 4864.75 IOPS, 19.00 MiB/s [2024-11-02T13:51:41.203Z] 5303.56 IOPS, 20.72 MiB/s [2024-11-02T13:51:42.138Z] 5612.60 IOPS, 21.92 MiB/s [2024-11-02T13:51:43.084Z] 5898.09 IOPS, 23.04 MiB/s [2024-11-02T13:51:44.019Z] 6130.17 IOPS, 23.95 MiB/s [2024-11-02T13:51:45.393Z] 6341.92 IOPS, 24.77 MiB/s [2024-11-02T13:51:46.328Z] 6523.64 IOPS, 25.48 MiB/s [2024-11-02T13:51:46.328Z] 6678.33 IOPS, 26.09 MiB/s 00:35:54.273 Latency(us) 00:35:54.273 [2024-11-02T13:51:46.328Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:54.273 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:54.273 Verification LBA range: start 0x0 length 0x4000 00:35:54.273 Nvme1n1 : 15.01 6679.44 26.09 8627.56 0.00 8336.07 831.34 18932.62 00:35:54.273 [2024-11-02T13:51:46.328Z] =================================================================================================================== 00:35:54.273 [2024-11-02T13:51:46.328Z] Total : 6679.44 26.09 8627.56 0.00 8336.07 831.34 18932.62 00:35:54.273 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:35:54.273 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:54.273 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.273 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:54.273 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.273 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:35:54.273 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:35:54.273 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # nvmfcleanup 00:35:54.273 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:35:54.273 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:54.273 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:35:54.273 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:54.273 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:54.273 rmmod nvme_tcp 00:35:54.273 rmmod nvme_fabrics 00:35:54.273 rmmod nvme_keyring 00:35:54.273 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:54.273 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:35:54.273 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:35:54.273 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@513 -- # '[' -n 1532720 ']' 00:35:54.273 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@514 -- # killprocess 1532720 00:35:54.273 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 1532720 ']' 00:35:54.273 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 1532720 00:35:54.273 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:35:54.273 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:54.273 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1532720 00:35:54.273 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:54.273 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:54.273 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1532720' 00:35:54.273 killing process with pid 1532720 00:35:54.273 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 1532720 00:35:54.273 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 1532720 00:35:54.532 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:35:54.532 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:35:54.532 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:35:54.532 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:35:54.532 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@787 -- # iptables-save 00:35:54.532 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:35:54.532 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@787 -- # iptables-restore 00:35:54.532 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:54.532 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:54.532 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:54.532 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:54.532 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:57.065 14:51:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:57.065 00:35:57.065 real 0m22.869s 00:35:57.065 user 0m56.871s 00:35:57.065 sys 0m5.838s 00:35:57.065 14:51:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:57.065 14:51:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:57.065 ************************************ 00:35:57.065 END TEST nvmf_bdevperf 00:35:57.065 ************************************ 00:35:57.065 14:51:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:35:57.065 14:51:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:35:57.065 14:51:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:57.065 14:51:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.065 ************************************ 00:35:57.065 START TEST nvmf_target_disconnect 00:35:57.065 ************************************ 00:35:57.065 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:35:57.065 * Looking for test storage... 
00:35:57.065 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:57.065 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:35:57.065 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # lcov --version 00:35:57.065 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:35:57.065 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:35:57.065 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:57.065 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:57.065 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:57.065 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:35:57.065 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:35:57.065 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:35:57.065 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:35:57.065 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:35:57.065 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:35:57.065 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:35:57.065 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:57.065 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:35:57.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:57.066 --rc genhtml_branch_coverage=1 00:35:57.066 --rc genhtml_function_coverage=1 00:35:57.066 --rc genhtml_legend=1 00:35:57.066 --rc geninfo_all_blocks=1 00:35:57.066 --rc geninfo_unexecuted_blocks=1 00:35:57.066 00:35:57.066 ' 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:35:57.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:57.066 --rc genhtml_branch_coverage=1 00:35:57.066 --rc genhtml_function_coverage=1 00:35:57.066 --rc genhtml_legend=1 00:35:57.066 --rc geninfo_all_blocks=1 00:35:57.066 --rc geninfo_unexecuted_blocks=1 00:35:57.066 00:35:57.066 ' 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:35:57.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:57.066 --rc genhtml_branch_coverage=1 00:35:57.066 --rc genhtml_function_coverage=1 00:35:57.066 --rc genhtml_legend=1 00:35:57.066 --rc geninfo_all_blocks=1 00:35:57.066 --rc geninfo_unexecuted_blocks=1 00:35:57.066 00:35:57.066 ' 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:35:57.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:57.066 --rc genhtml_branch_coverage=1 00:35:57.066 --rc genhtml_function_coverage=1 00:35:57.066 --rc genhtml_legend=1 00:35:57.066 --rc geninfo_all_blocks=1 00:35:57.066 --rc geninfo_unexecuted_blocks=1 00:35:57.066 00:35:57.066 ' 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:57.066 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@472 -- # prepare_net_devs 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@434 -- # local -g is_hw=no 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@436 -- # remove_spdk_ns 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:35:57.066 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:58.966 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:58.966 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:35:58.966 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:58.966 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:58.966 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:58.966 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:58.966 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:58.966 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:35:58.966 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:58.966 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:35:58.966 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:35:58.966 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:35:58.966 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:35:58.966 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:35:58.966 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:35:58.966 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:58.966 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:58.966 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:58.966 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:58.966 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:58.966 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:58.966 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:58.966 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:58.966 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:58.966 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:58.966 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:58.966 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:58.967 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:58.967 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@375 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ up == up ]] 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:58.967 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ up == up ]] 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:58.967 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # is_hw=yes 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:58.967 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:58.967 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:35:58.967 00:35:58.967 --- 10.0.0.2 ping statistics --- 00:35:58.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:58.967 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:58.967 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:58.967 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:35:58.967 00:35:58.967 --- 10.0.0.1 ping statistics --- 00:35:58.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:58.967 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # return 0 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:58.967 ************************************ 00:35:58.967 START TEST nvmf_target_disconnect_tc1 00:35:58.967 ************************************ 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:58.967 14:51:50 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:58.967 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:58.968 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:58.968 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:35:58.968 14:51:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:59.226 [2024-11-02 14:51:51.021980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.226 [2024-11-02 14:51:51.022053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a3f220 with addr=10.0.0.2, port=4420 00:35:59.226 [2024-11-02 14:51:51.022091] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:35:59.226 [2024-11-02 14:51:51.022130] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:59.226 [2024-11-02 14:51:51.022146] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:35:59.226 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:35:59.226 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:35:59.226 Initializing NVMe Controllers 00:35:59.226 14:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:35:59.226 14:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:59.226 14:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:59.226 14:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:59.226 00:35:59.226 real 0m0.096s 00:35:59.226 user 0m0.042s 00:35:59.226 sys 0m0.054s 00:35:59.226 14:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:59.226 14:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:35:59.226 ************************************ 00:35:59.226 END TEST nvmf_target_disconnect_tc1 00:35:59.226 ************************************ 00:35:59.226 14:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:35:59.226 14:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:59.226 14:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:35:59.226 14:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:59.226 ************************************ 00:35:59.226 START TEST nvmf_target_disconnect_tc2 00:35:59.226 ************************************ 00:35:59.226 14:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:35:59.226 14:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:35:59.226 14:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:35:59.226 14:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:35:59.226 14:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:59.226 14:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:59.226 14:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # nvmfpid=1535789 00:35:59.226 14:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:35:59.226 14:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # waitforlisten 1535789 00:35:59.226 14:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1535789 ']' 00:35:59.226 14:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:59.227 14:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:59.227 14:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:59.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:59.227 14:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:59.227 14:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:59.227 [2024-11-02 14:51:51.143923] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:35:59.227 [2024-11-02 14:51:51.144001] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:59.227 [2024-11-02 14:51:51.213440] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:59.485 [2024-11-02 14:51:51.300159] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:59.485 [2024-11-02 14:51:51.300212] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:35:59.485 [2024-11-02 14:51:51.300234] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:59.485 [2024-11-02 14:51:51.300245] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:59.485 [2024-11-02 14:51:51.300261] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:59.485 [2024-11-02 14:51:51.300380] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:35:59.485 [2024-11-02 14:51:51.300449] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:35:59.485 [2024-11-02 14:51:51.300515] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:35:59.485 [2024-11-02 14:51:51.300518] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:35:59.485 14:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:59.485 14:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:35:59.485 14:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:35:59.485 14:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:59.485 14:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:59.485 14:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:59.485 14:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:59.485 14:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.486 14:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:59.486 Malloc0 00:35:59.486 14:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.486 14:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:35:59.486 14:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.486 14:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:59.486 [2024-11-02 14:51:51.481735] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:59.486 14:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.486 14:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:59.486 14:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.486 14:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:59.486 14:51:51 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.486 14:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:59.486 14:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.486 14:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:59.486 14:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.486 14:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:59.486 14:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.486 14:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:59.486 [2024-11-02 14:51:51.510014] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:59.486 14:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.486 14:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:59.486 14:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.486 14:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:59.486 14:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.486 14:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1535906 00:35:59.486 14:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:35:59.486 14:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:02.039 14:51:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1535789 00:36:02.040 14:51:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:36:02.040 Read completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Read completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Read completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Read completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Read completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Write completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Read completed with error 
(sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Write completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Read completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Write completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Write completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Write completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Write completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Write completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Read completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Read completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Read completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Write completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Read completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Read completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Read completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Read completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Write completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Write completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Read completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Write completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Read completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Read completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Read completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Write completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Read completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Read completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Read completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 [2024-11-02 14:51:53.536791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:02.040 Write completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Read completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Write completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Read completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Read completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Read completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Read completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Write completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Write completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Read completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Read completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Read completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Read completed with error (sct=0, sc=8) 
00:36:02.040 starting I/O failed 00:36:02.040 Write completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Write completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Write completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Write completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Read completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Read completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Read completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Write completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Read completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Write completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Read completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Write completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Write completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Read completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Write completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Write completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Write completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Write completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 [2024-11-02 14:51:53.537131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:02.040 Read completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Read completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Read completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Read completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Read completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Read completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Read completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Read completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Read completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Write completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Read completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Write completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Read completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Write completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Write completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Write completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Write completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Read completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Read completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Write completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Read completed with error (sct=0, sc=8) 
00:36:02.040 starting I/O failed 00:36:02.040 Write completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Write completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Write completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Read completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Write completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Read completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Read completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Write completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Write completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Read completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Write completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 [2024-11-02 14:51:53.537454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:02.040 Read completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Read completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Read completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Read completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Read completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Read completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Read completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Read completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Read completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Read completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Read completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Read completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Read completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Write completed with error (sct=0, sc=8) 00:36:02.040 starting I/O failed 00:36:02.040 Read completed with error (sct=0, sc=8) 00:36:02.041 starting I/O failed 00:36:02.041 Write completed with error (sct=0, sc=8) 00:36:02.041 starting I/O failed 00:36:02.041 Write completed with error (sct=0, sc=8) 00:36:02.041 starting I/O failed 00:36:02.041 Read completed with error (sct=0, sc=8) 00:36:02.041 starting I/O failed 00:36:02.041 Write completed with error (sct=0, sc=8) 00:36:02.041 starting I/O failed 00:36:02.041 Read completed with error (sct=0, sc=8) 00:36:02.041 starting I/O failed 00:36:02.041 Read completed with error (sct=0, sc=8) 00:36:02.041 starting I/O failed 00:36:02.041 Read completed with error (sct=0, sc=8) 00:36:02.041 starting I/O failed 00:36:02.041 Read completed with error (sct=0, sc=8) 00:36:02.041 starting I/O failed 00:36:02.041 Write completed with error (sct=0, sc=8) 00:36:02.041 starting I/O failed 00:36:02.041 Read completed with error (sct=0, sc=8) 00:36:02.041 starting I/O failed 00:36:02.041 Write completed with error (sct=0, sc=8) 00:36:02.041 starting I/O failed 00:36:02.041 Write completed with error (sct=0, sc=8) 00:36:02.041 starting I/O failed 00:36:02.041 Write completed with error (sct=0, sc=8) 00:36:02.041 
starting I/O failed 00:36:02.041 Write completed with error (sct=0, sc=8) 00:36:02.041 starting I/O failed 00:36:02.041 Write completed with error (sct=0, sc=8) 00:36:02.041 starting I/O failed 00:36:02.041 Read completed with error (sct=0, sc=8) 00:36:02.041 starting I/O failed 00:36:02.041 Read completed with error (sct=0, sc=8) 00:36:02.041 starting I/O failed 00:36:02.041 [2024-11-02 14:51:53.537809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:02.041 [2024-11-02 14:51:53.538048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.041 [2024-11-02 14:51:53.538086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.041 qpair failed and we were unable to recover it. 00:36:02.041 [2024-11-02 14:51:53.538213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.041 [2024-11-02 14:51:53.538242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.041 qpair failed and we were unable to recover it. 00:36:02.041 [2024-11-02 14:51:53.538391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.041 [2024-11-02 14:51:53.538418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.041 qpair failed and we were unable to recover it. 00:36:02.041 [2024-11-02 14:51:53.538554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.041 [2024-11-02 14:51:53.538580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.041 qpair failed and we were unable to recover it. 00:36:02.041 [2024-11-02 14:51:53.538753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.041 [2024-11-02 14:51:53.538780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.041 qpair failed and we were unable to recover it. 00:36:02.041 [2024-11-02 14:51:53.538931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.041 [2024-11-02 14:51:53.538958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.041 qpair failed and we were unable to recover it. 00:36:02.041 [2024-11-02 14:51:53.539132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.041 [2024-11-02 14:51:53.539158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.041 qpair failed and we were unable to recover it. 00:36:02.041 [2024-11-02 14:51:53.539285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.041 [2024-11-02 14:51:53.539318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.041 qpair failed and we were unable to recover it. 
00:36:02.041 [2024-11-02 14:51:53.539452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.041 [2024-11-02 14:51:53.539480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.041 qpair failed and we were unable to recover it. 00:36:02.041 [2024-11-02 14:51:53.539647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.041 [2024-11-02 14:51:53.539696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.041 qpair failed and we were unable to recover it. 00:36:02.041 [2024-11-02 14:51:53.539906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.041 [2024-11-02 14:51:53.539935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.041 qpair failed and we were unable to recover it. 00:36:02.041 [2024-11-02 14:51:53.540054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.041 [2024-11-02 14:51:53.540097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.041 qpair failed and we were unable to recover it. 00:36:02.041 [2024-11-02 14:51:53.540274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.041 [2024-11-02 14:51:53.540330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.041 qpair failed and we were unable to recover it. 00:36:02.041 [2024-11-02 14:51:53.540461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.041 [2024-11-02 14:51:53.540488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.041 qpair failed and we were unable to recover it. 00:36:02.041 [2024-11-02 14:51:53.540656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.041 [2024-11-02 14:51:53.540711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.041 qpair failed and we were unable to recover it. 00:36:02.041 [2024-11-02 14:51:53.540877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.041 [2024-11-02 14:51:53.540906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.041 qpair failed and we were unable to recover it. 00:36:02.041 [2024-11-02 14:51:53.541031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.041 [2024-11-02 14:51:53.541058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.041 qpair failed and we were unable to recover it. 00:36:02.041 [2024-11-02 14:51:53.541207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.041 [2024-11-02 14:51:53.541233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.041 qpair failed and we were unable to recover it. 
00:36:02.041 [2024-11-02 14:51:53.541367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.041 [2024-11-02 14:51:53.541394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.041 qpair failed and we were unable to recover it. 00:36:02.041 [2024-11-02 14:51:53.541538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.041 [2024-11-02 14:51:53.541565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.041 qpair failed and we were unable to recover it. 00:36:02.041 [2024-11-02 14:51:53.541719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.041 [2024-11-02 14:51:53.541745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.041 qpair failed and we were unable to recover it. 00:36:02.041 [2024-11-02 14:51:53.541871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.041 [2024-11-02 14:51:53.541900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.041 qpair failed and we were unable to recover it. 00:36:02.041 [2024-11-02 14:51:53.542077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.041 [2024-11-02 14:51:53.542104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.041 qpair failed and we were unable to recover it. 00:36:02.041 [2024-11-02 14:51:53.542234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.041 [2024-11-02 14:51:53.542275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.041 qpair failed and we were unable to recover it. 00:36:02.041 [2024-11-02 14:51:53.542399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.041 [2024-11-02 14:51:53.542426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.041 qpair failed and we were unable to recover it. 00:36:02.041 [2024-11-02 14:51:53.542556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.041 [2024-11-02 14:51:53.542583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.041 qpair failed and we were unable to recover it. 00:36:02.041 [2024-11-02 14:51:53.542710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.041 [2024-11-02 14:51:53.542737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.041 qpair failed and we were unable to recover it. 00:36:02.041 [2024-11-02 14:51:53.542857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.041 [2024-11-02 14:51:53.542885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.041 qpair failed and we were unable to recover it. 
00:36:02.041 [2024-11-02 14:51:53.543007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.041 [2024-11-02 14:51:53.543033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.041 qpair failed and we were unable to recover it. 00:36:02.041 [2024-11-02 14:51:53.543188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.041 [2024-11-02 14:51:53.543214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.041 qpair failed and we were unable to recover it. 00:36:02.041 [2024-11-02 14:51:53.543358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.041 [2024-11-02 14:51:53.543385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.041 qpair failed and we were unable to recover it. 00:36:02.041 [2024-11-02 14:51:53.543506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.041 [2024-11-02 14:51:53.543533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.041 qpair failed and we were unable to recover it. 00:36:02.042 [2024-11-02 14:51:53.543680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.042 [2024-11-02 14:51:53.543706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.042 qpair failed and we were unable to recover it. 00:36:02.042 [2024-11-02 14:51:53.543886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.042 [2024-11-02 14:51:53.543913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.042 qpair failed and we were unable to recover it. 00:36:02.042 [2024-11-02 14:51:53.544055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.042 [2024-11-02 14:51:53.544082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.042 qpair failed and we were unable to recover it. 00:36:02.042 [2024-11-02 14:51:53.544249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.042 [2024-11-02 14:51:53.544296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.042 qpair failed and we were unable to recover it. 00:36:02.042 [2024-11-02 14:51:53.544432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.042 [2024-11-02 14:51:53.544473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.042 qpair failed and we were unable to recover it. 00:36:02.042 [2024-11-02 14:51:53.544640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.042 [2024-11-02 14:51:53.544686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.042 qpair failed and we were unable to recover it. 
00:36:02.042 [2024-11-02 14:51:53.544877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.042 [2024-11-02 14:51:53.544921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.042 qpair failed and we were unable to recover it. 00:36:02.042 [2024-11-02 14:51:53.545093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.042 [2024-11-02 14:51:53.545121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.042 qpair failed and we were unable to recover it. 00:36:02.042 [2024-11-02 14:51:53.545300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.042 [2024-11-02 14:51:53.545342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.042 qpair failed and we were unable to recover it. 00:36:02.042 [2024-11-02 14:51:53.545530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.042 [2024-11-02 14:51:53.545559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.042 qpair failed and we were unable to recover it. 00:36:02.042 [2024-11-02 14:51:53.545766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.042 [2024-11-02 14:51:53.545793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.042 qpair failed and we were unable to recover it. 00:36:02.042 [2024-11-02 14:51:53.545949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.042 [2024-11-02 14:51:53.545990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.042 qpair failed and we were unable to recover it. 00:36:02.042 [2024-11-02 14:51:53.546141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.042 [2024-11-02 14:51:53.546167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.042 qpair failed and we were unable to recover it. 00:36:02.042 [2024-11-02 14:51:53.546347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.042 [2024-11-02 14:51:53.546392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.042 qpair failed and we were unable to recover it. 00:36:02.042 [2024-11-02 14:51:53.546559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.042 [2024-11-02 14:51:53.546586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.042 qpair failed and we were unable to recover it. 00:36:02.042 [2024-11-02 14:51:53.546774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.042 [2024-11-02 14:51:53.546801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.042 qpair failed and we were unable to recover it. 
00:36:02.042 [2024-11-02 14:51:53.546945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.042 [2024-11-02 14:51:53.546971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.042 qpair failed and we were unable to recover it. 00:36:02.042 [2024-11-02 14:51:53.547143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.042 [2024-11-02 14:51:53.547169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.042 qpair failed and we were unable to recover it. 00:36:02.042 [2024-11-02 14:51:53.547336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.042 [2024-11-02 14:51:53.547376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.042 qpair failed and we were unable to recover it. 00:36:02.042 [2024-11-02 14:51:53.547531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.042 [2024-11-02 14:51:53.547558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.042 qpair failed and we were unable to recover it. 00:36:02.042 [2024-11-02 14:51:53.547752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.042 [2024-11-02 14:51:53.547804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.042 qpair failed and we were unable to recover it. 00:36:02.042 [2024-11-02 14:51:53.548058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.042 [2024-11-02 14:51:53.548084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.042 qpair failed and we were unable to recover it. 00:36:02.042 [2024-11-02 14:51:53.548280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.042 [2024-11-02 14:51:53.548318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.042 qpair failed and we were unable to recover it. 00:36:02.042 [2024-11-02 14:51:53.548472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.042 [2024-11-02 14:51:53.548497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.042 qpair failed and we were unable to recover it. 00:36:02.042 [2024-11-02 14:51:53.548832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.042 [2024-11-02 14:51:53.548897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.042 qpair failed and we were unable to recover it. 00:36:02.042 [2024-11-02 14:51:53.549142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.042 [2024-11-02 14:51:53.549194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.042 qpair failed and we were unable to recover it. 
00:36:02.042 [2024-11-02 14:51:53.549382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.042 [2024-11-02 14:51:53.549410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.042 qpair failed and we were unable to recover it. 00:36:02.042 [2024-11-02 14:51:53.549572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.042 [2024-11-02 14:51:53.549599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.042 qpair failed and we were unable to recover it. 00:36:02.042 [2024-11-02 14:51:53.549745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.042 [2024-11-02 14:51:53.549772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.042 qpair failed and we were unable to recover it. 00:36:02.042 [2024-11-02 14:51:53.549939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.042 [2024-11-02 14:51:53.549982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.042 qpair failed and we were unable to recover it. 00:36:02.042 [2024-11-02 14:51:53.550171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.042 [2024-11-02 14:51:53.550197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.042 qpair failed and we were unable to recover it. 00:36:02.042 [2024-11-02 14:51:53.550323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.042 [2024-11-02 14:51:53.550350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.042 qpair failed and we were unable to recover it. 00:36:02.042 [2024-11-02 14:51:53.550510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.042 [2024-11-02 14:51:53.550551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.042 qpair failed and we were unable to recover it. 00:36:02.042 [2024-11-02 14:51:53.550770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.042 [2024-11-02 14:51:53.550796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.042 qpair failed and we were unable to recover it. 00:36:02.042 [2024-11-02 14:51:53.550990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.042 [2024-11-02 14:51:53.551044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.042 qpair failed and we were unable to recover it. 00:36:02.042 [2024-11-02 14:51:53.551215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.042 [2024-11-02 14:51:53.551241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.042 qpair failed and we were unable to recover it. 
00:36:02.042 [2024-11-02 14:51:53.551381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.042 [2024-11-02 14:51:53.551407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.042 qpair failed and we were unable to recover it. 00:36:02.042 [2024-11-02 14:51:53.551591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.042 [2024-11-02 14:51:53.551618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.042 qpair failed and we were unable to recover it. 00:36:02.042 [2024-11-02 14:51:53.551810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.042 [2024-11-02 14:51:53.551863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.043 qpair failed and we were unable to recover it. 00:36:02.043 [2024-11-02 14:51:53.552173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.043 [2024-11-02 14:51:53.552206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.043 qpair failed and we were unable to recover it. 00:36:02.043 [2024-11-02 14:51:53.552364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.043 [2024-11-02 14:51:53.552390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.043 qpair failed and we were unable to recover it. 00:36:02.043 [2024-11-02 14:51:53.552547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.043 [2024-11-02 14:51:53.552574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.043 qpair failed and we were unable to recover it. 00:36:02.043 [2024-11-02 14:51:53.552790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.043 [2024-11-02 14:51:53.552852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.043 qpair failed and we were unable to recover it. 00:36:02.043 [2024-11-02 14:51:53.553138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.043 [2024-11-02 14:51:53.553192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.043 qpair failed and we were unable to recover it. 00:36:02.043 [2024-11-02 14:51:53.553342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.043 [2024-11-02 14:51:53.553370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.043 qpair failed and we were unable to recover it. 00:36:02.043 [2024-11-02 14:51:53.553522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.043 [2024-11-02 14:51:53.553558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.043 qpair failed and we were unable to recover it. 
00:36:02.043 [2024-11-02 14:51:53.553791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.043 [2024-11-02 14:51:53.553846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.043 qpair failed and we were unable to recover it. 00:36:02.043 [2024-11-02 14:51:53.554026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.043 [2024-11-02 14:51:53.554053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.043 qpair failed and we were unable to recover it. 00:36:02.043 [2024-11-02 14:51:53.554206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.043 [2024-11-02 14:51:53.554233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.043 qpair failed and we were unable to recover it. 00:36:02.043 [2024-11-02 14:51:53.554395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.043 [2024-11-02 14:51:53.554422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.043 qpair failed and we were unable to recover it. 00:36:02.043 [2024-11-02 14:51:53.554567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.043 [2024-11-02 14:51:53.554597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.043 qpair failed and we were unable to recover it. 00:36:02.043 [2024-11-02 14:51:53.554783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.043 [2024-11-02 14:51:53.554812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.043 qpair failed and we were unable to recover it. 00:36:02.043 [2024-11-02 14:51:53.555103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.043 [2024-11-02 14:51:53.555155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.043 qpair failed and we were unable to recover it. 00:36:02.043 [2024-11-02 14:51:53.555310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.043 [2024-11-02 14:51:53.555337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.043 qpair failed and we were unable to recover it. 00:36:02.043 [2024-11-02 14:51:53.555486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.043 [2024-11-02 14:51:53.555524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.043 qpair failed and we were unable to recover it. 00:36:02.043 [2024-11-02 14:51:53.555679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.043 [2024-11-02 14:51:53.555708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.043 qpair failed and we were unable to recover it. 
00:36:02.043 [2024-11-02 14:51:53.556033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.043 [2024-11-02 14:51:53.556086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.043 qpair failed and we were unable to recover it. 00:36:02.043 [2024-11-02 14:51:53.556287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.043 [2024-11-02 14:51:53.556316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.043 qpair failed and we were unable to recover it. 00:36:02.043 [2024-11-02 14:51:53.556428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.043 [2024-11-02 14:51:53.556454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.043 qpair failed and we were unable to recover it. 00:36:02.043 [2024-11-02 14:51:53.556641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.043 [2024-11-02 14:51:53.556669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.043 qpair failed and we were unable to recover it. 00:36:02.043 [2024-11-02 14:51:53.556790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.043 [2024-11-02 14:51:53.556815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.043 qpair failed and we were unable to recover it. 00:36:02.043 [2024-11-02 14:51:53.556968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.043 [2024-11-02 14:51:53.556995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.043 qpair failed and we were unable to recover it. 00:36:02.043 [2024-11-02 14:51:53.557167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.043 [2024-11-02 14:51:53.557197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.043 qpair failed and we were unable to recover it. 00:36:02.043 [2024-11-02 14:51:53.557357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.043 [2024-11-02 14:51:53.557384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.043 qpair failed and we were unable to recover it. 00:36:02.043 [2024-11-02 14:51:53.557512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.043 [2024-11-02 14:51:53.557565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.043 qpair failed and we were unable to recover it. 00:36:02.043 [2024-11-02 14:51:53.557690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.043 [2024-11-02 14:51:53.557717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.043 qpair failed and we were unable to recover it. 
00:36:02.043 [2024-11-02 14:51:53.557946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.043 [2024-11-02 14:51:53.557976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.043 qpair failed and we were unable to recover it. 00:36:02.043 [2024-11-02 14:51:53.558139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.043 [2024-11-02 14:51:53.558169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.043 qpair failed and we were unable to recover it. 00:36:02.043 [2024-11-02 14:51:53.558328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.043 [2024-11-02 14:51:53.558356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.043 qpair failed and we were unable to recover it. 00:36:02.043 [2024-11-02 14:51:53.558501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.043 [2024-11-02 14:51:53.558540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.043 qpair failed and we were unable to recover it. 00:36:02.043 [2024-11-02 14:51:53.558693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.043 [2024-11-02 14:51:53.558733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.043 qpair failed and we were unable to recover it. 00:36:02.043 [2024-11-02 14:51:53.558912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.043 [2024-11-02 14:51:53.558939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.043 qpair failed and we were unable to recover it. 00:36:02.043 [2024-11-02 14:51:53.559111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.043 [2024-11-02 14:51:53.559143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.043 qpair failed and we were unable to recover it. 00:36:02.043 [2024-11-02 14:51:53.559299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.043 [2024-11-02 14:51:53.559325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.043 qpair failed and we were unable to recover it. 00:36:02.043 [2024-11-02 14:51:53.559475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.043 [2024-11-02 14:51:53.559502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.043 qpair failed and we were unable to recover it. 00:36:02.043 [2024-11-02 14:51:53.559778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.043 [2024-11-02 14:51:53.559831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.043 qpair failed and we were unable to recover it. 
00:36:02.043 [2024-11-02 14:51:53.560021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.043 [2024-11-02 14:51:53.560050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.043 qpair failed and we were unable to recover it. 00:36:02.043 [2024-11-02 14:51:53.560252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.044 [2024-11-02 14:51:53.560285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.044 qpair failed and we were unable to recover it. 00:36:02.044 [2024-11-02 14:51:53.560439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.044 [2024-11-02 14:51:53.560465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.044 qpair failed and we were unable to recover it. 00:36:02.044 [2024-11-02 14:51:53.560647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.044 [2024-11-02 14:51:53.560690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.044 qpair failed and we were unable to recover it. 00:36:02.044 [2024-11-02 14:51:53.560859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.044 [2024-11-02 14:51:53.560889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.044 qpair failed and we were unable to recover it. 00:36:02.044 [2024-11-02 14:51:53.561049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.044 [2024-11-02 14:51:53.561079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.044 qpair failed and we were unable to recover it. 00:36:02.044 [2024-11-02 14:51:53.561252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.044 [2024-11-02 14:51:53.561290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.044 qpair failed and we were unable to recover it. 00:36:02.044 [2024-11-02 14:51:53.561447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.044 [2024-11-02 14:51:53.561471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.044 qpair failed and we were unable to recover it. 00:36:02.044 [2024-11-02 14:51:53.561623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.044 [2024-11-02 14:51:53.561650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.044 qpair failed and we were unable to recover it. 00:36:02.044 [2024-11-02 14:51:53.561798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.044 [2024-11-02 14:51:53.561844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.044 qpair failed and we were unable to recover it. 
00:36:02.044 [2024-11-02 14:51:53.561992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.044 [2024-11-02 14:51:53.562023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.044 qpair failed and we were unable to recover it. 00:36:02.044 [2024-11-02 14:51:53.562175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.044 [2024-11-02 14:51:53.562203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.044 qpair failed and we were unable to recover it. 00:36:02.044 [2024-11-02 14:51:53.562351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.044 [2024-11-02 14:51:53.562378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.044 qpair failed and we were unable to recover it. 00:36:02.044 [2024-11-02 14:51:53.562523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.044 [2024-11-02 14:51:53.562550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.044 qpair failed and we were unable to recover it. 00:36:02.044 [2024-11-02 14:51:53.562698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.044 [2024-11-02 14:51:53.562725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.044 qpair failed and we were unable to recover it. 00:36:02.044 [2024-11-02 14:51:53.562879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.044 [2024-11-02 14:51:53.562907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.044 qpair failed and we were unable to recover it. 00:36:02.044 [2024-11-02 14:51:53.563057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.044 [2024-11-02 14:51:53.563100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.044 qpair failed and we were unable to recover it. 00:36:02.044 [2024-11-02 14:51:53.563273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.044 [2024-11-02 14:51:53.563302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.044 qpair failed and we were unable to recover it. 00:36:02.044 [2024-11-02 14:51:53.563449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.044 [2024-11-02 14:51:53.563474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.044 qpair failed and we were unable to recover it. 00:36:02.044 [2024-11-02 14:51:53.563642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.044 [2024-11-02 14:51:53.563672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.044 qpair failed and we were unable to recover it. 
00:36:02.044 [2024-11-02 14:51:53.563868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.044 [2024-11-02 14:51:53.563895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.044 qpair failed and we were unable to recover it. 00:36:02.044 [2024-11-02 14:51:53.564067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.044 [2024-11-02 14:51:53.564097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.044 qpair failed and we were unable to recover it. 00:36:02.044 [2024-11-02 14:51:53.564299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.044 [2024-11-02 14:51:53.564327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.044 qpair failed and we were unable to recover it. 00:36:02.044 [2024-11-02 14:51:53.564505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.044 [2024-11-02 14:51:53.564533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.044 qpair failed and we were unable to recover it. 00:36:02.044 [2024-11-02 14:51:53.564688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.044 [2024-11-02 14:51:53.564715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.044 qpair failed and we were unable to recover it. 00:36:02.044 [2024-11-02 14:51:53.564870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.044 [2024-11-02 14:51:53.564898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.044 qpair failed and we were unable to recover it. 00:36:02.044 [2024-11-02 14:51:53.565049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.044 [2024-11-02 14:51:53.565077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.044 qpair failed and we were unable to recover it. 00:36:02.044 [2024-11-02 14:51:53.565272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.044 [2024-11-02 14:51:53.565300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.044 qpair failed and we were unable to recover it. 00:36:02.044 [2024-11-02 14:51:53.565469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.044 [2024-11-02 14:51:53.565499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.044 qpair failed and we were unable to recover it. 00:36:02.044 [2024-11-02 14:51:53.565641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.044 [2024-11-02 14:51:53.565668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.044 qpair failed and we were unable to recover it. 
00:36:02.044 [2024-11-02 14:51:53.565822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.044 [2024-11-02 14:51:53.565850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.044 qpair failed and we were unable to recover it. 00:36:02.044 [2024-11-02 14:51:53.566000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.044 [2024-11-02 14:51:53.566028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.044 qpair failed and we were unable to recover it. 00:36:02.044 [2024-11-02 14:51:53.566202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.044 [2024-11-02 14:51:53.566229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.044 qpair failed and we were unable to recover it. 00:36:02.044 [2024-11-02 14:51:53.566412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.044 [2024-11-02 14:51:53.566443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.044 qpair failed and we were unable to recover it. 00:36:02.044 [2024-11-02 14:51:53.566633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.045 [2024-11-02 14:51:53.566664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.045 qpair failed and we were unable to recover it. 00:36:02.045 [2024-11-02 14:51:53.566829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.045 [2024-11-02 14:51:53.566856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.045 qpair failed and we were unable to recover it. 00:36:02.045 [2024-11-02 14:51:53.567032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.045 [2024-11-02 14:51:53.567060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.045 qpair failed and we were unable to recover it. 00:36:02.045 [2024-11-02 14:51:53.567216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.045 [2024-11-02 14:51:53.567247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.045 qpair failed and we were unable to recover it. 00:36:02.045 [2024-11-02 14:51:53.567408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.045 [2024-11-02 14:51:53.567436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.045 qpair failed and we were unable to recover it. 00:36:02.045 [2024-11-02 14:51:53.567583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.045 [2024-11-02 14:51:53.567609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.045 qpair failed and we were unable to recover it. 
00:36:02.045 [2024-11-02 14:51:53.567791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.045 [2024-11-02 14:51:53.567821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.045 qpair failed and we were unable to recover it. 00:36:02.045 [2024-11-02 14:51:53.567986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.045 [2024-11-02 14:51:53.568013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.045 qpair failed and we were unable to recover it. 00:36:02.045 [2024-11-02 14:51:53.568134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.045 [2024-11-02 14:51:53.568159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.045 qpair failed and we were unable to recover it. 00:36:02.045 [2024-11-02 14:51:53.568302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.045 [2024-11-02 14:51:53.568331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.045 qpair failed and we were unable to recover it. 00:36:02.045 [2024-11-02 14:51:53.568471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.045 [2024-11-02 14:51:53.568500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.045 qpair failed and we were unable to recover it. 00:36:02.045 [2024-11-02 14:51:53.568676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.045 [2024-11-02 14:51:53.568703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.045 qpair failed and we were unable to recover it. 00:36:02.045 [2024-11-02 14:51:53.568855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.045 [2024-11-02 14:51:53.568882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.045 qpair failed and we were unable to recover it. 00:36:02.045 [2024-11-02 14:51:53.569037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.045 [2024-11-02 14:51:53.569064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.045 qpair failed and we were unable to recover it. 00:36:02.045 [2024-11-02 14:51:53.569220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.045 [2024-11-02 14:51:53.569246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.045 qpair failed and we were unable to recover it. 00:36:02.045 [2024-11-02 14:51:53.569417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.045 [2024-11-02 14:51:53.569444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.045 qpair failed and we were unable to recover it. 
00:36:02.045 [2024-11-02 14:51:53.569596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.045 [2024-11-02 14:51:53.569624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.045 qpair failed and we were unable to recover it. 00:36:02.045 [2024-11-02 14:51:53.569759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.045 [2024-11-02 14:51:53.569786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.045 qpair failed and we were unable to recover it. 00:36:02.045 [2024-11-02 14:51:53.569959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.045 [2024-11-02 14:51:53.569987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.045 qpair failed and we were unable to recover it. 00:36:02.045 [2024-11-02 14:51:53.570137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.045 [2024-11-02 14:51:53.570164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.045 qpair failed and we were unable to recover it. 00:36:02.045 [2024-11-02 14:51:53.570317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.045 [2024-11-02 14:51:53.570343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.045 qpair failed and we were unable to recover it. 00:36:02.045 [2024-11-02 14:51:53.570485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.045 [2024-11-02 14:51:53.570512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.045 qpair failed and we were unable to recover it. 00:36:02.045 [2024-11-02 14:51:53.570663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.045 [2024-11-02 14:51:53.570690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.045 qpair failed and we were unable to recover it. 00:36:02.045 [2024-11-02 14:51:53.570844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.045 [2024-11-02 14:51:53.570871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.045 qpair failed and we were unable to recover it. 00:36:02.045 [2024-11-02 14:51:53.570998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.045 [2024-11-02 14:51:53.571022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.045 qpair failed and we were unable to recover it. 00:36:02.045 [2024-11-02 14:51:53.571200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.045 [2024-11-02 14:51:53.571227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.045 qpair failed and we were unable to recover it. 
00:36:02.045 [2024-11-02 14:51:53.571410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.045 [2024-11-02 14:51:53.571438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.045 qpair failed and we were unable to recover it. 00:36:02.045 [2024-11-02 14:51:53.571588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.045 [2024-11-02 14:51:53.571616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.045 qpair failed and we were unable to recover it. 00:36:02.045 [2024-11-02 14:51:53.571762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.045 [2024-11-02 14:51:53.571788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.045 qpair failed and we were unable to recover it. 00:36:02.045 [2024-11-02 14:51:53.571955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.045 [2024-11-02 14:51:53.571984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.045 qpair failed and we were unable to recover it. 00:36:02.045 [2024-11-02 14:51:53.572167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.045 [2024-11-02 14:51:53.572216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.045 qpair failed and we were unable to recover it. 00:36:02.045 [2024-11-02 14:51:53.572424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.045 [2024-11-02 14:51:53.572452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.045 qpair failed and we were unable to recover it. 00:36:02.045 [2024-11-02 14:51:53.572624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.045 [2024-11-02 14:51:53.572651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.045 qpair failed and we were unable to recover it. 00:36:02.045 [2024-11-02 14:51:53.572799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.045 [2024-11-02 14:51:53.572842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.045 qpair failed and we were unable to recover it. 00:36:02.045 [2024-11-02 14:51:53.573014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.045 [2024-11-02 14:51:53.573041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.045 qpair failed and we were unable to recover it. 00:36:02.045 [2024-11-02 14:51:53.573155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.045 [2024-11-02 14:51:53.573180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.045 qpair failed and we were unable to recover it. 
00:36:02.045 [2024-11-02 14:51:53.573364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.045 [2024-11-02 14:51:53.573391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.045 qpair failed and we were unable to recover it. 00:36:02.045 [2024-11-02 14:51:53.573567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.045 [2024-11-02 14:51:53.573594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.045 qpair failed and we were unable to recover it. 00:36:02.045 [2024-11-02 14:51:53.573767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.045 [2024-11-02 14:51:53.573794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.046 qpair failed and we were unable to recover it. 00:36:02.046 [2024-11-02 14:51:53.573945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.046 [2024-11-02 14:51:53.573972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.046 qpair failed and we were unable to recover it. 00:36:02.046 [2024-11-02 14:51:53.574162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.046 [2024-11-02 14:51:53.574205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.046 qpair failed and we were unable to recover it. 00:36:02.046 [2024-11-02 14:51:53.574413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.046 [2024-11-02 14:51:53.574441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.046 qpair failed and we were unable to recover it. 00:36:02.046 [2024-11-02 14:51:53.574602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.046 [2024-11-02 14:51:53.574629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.046 qpair failed and we were unable to recover it. 00:36:02.046 [2024-11-02 14:51:53.574779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.046 [2024-11-02 14:51:53.574805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.046 qpair failed and we were unable to recover it. 00:36:02.046 [2024-11-02 14:51:53.574963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.046 [2024-11-02 14:51:53.574991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.046 qpair failed and we were unable to recover it. 00:36:02.046 [2024-11-02 14:51:53.575132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.046 [2024-11-02 14:51:53.575158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.046 qpair failed and we were unable to recover it. 
00:36:02.046 [2024-11-02 14:51:53.575335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.046 [2024-11-02 14:51:53.575362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.046 qpair failed and we were unable to recover it. 00:36:02.046 [2024-11-02 14:51:53.575519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.046 [2024-11-02 14:51:53.575546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.046 qpair failed and we were unable to recover it. 00:36:02.046 [2024-11-02 14:51:53.575698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.046 [2024-11-02 14:51:53.575724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.046 qpair failed and we were unable to recover it. 00:36:02.046 [2024-11-02 14:51:53.575855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.046 [2024-11-02 14:51:53.575879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.046 qpair failed and we were unable to recover it. 00:36:02.046 [2024-11-02 14:51:53.576007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.046 [2024-11-02 14:51:53.576032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.046 qpair failed and we were unable to recover it. 00:36:02.046 [2024-11-02 14:51:53.576208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.046 [2024-11-02 14:51:53.576237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.046 qpair failed and we were unable to recover it. 00:36:02.046 [2024-11-02 14:51:53.576386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.046 [2024-11-02 14:51:53.576411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.046 qpair failed and we were unable to recover it. 00:36:02.046 [2024-11-02 14:51:53.576561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.046 [2024-11-02 14:51:53.576588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.046 qpair failed and we were unable to recover it. 00:36:02.046 [2024-11-02 14:51:53.576742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.046 [2024-11-02 14:51:53.576769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.046 qpair failed and we were unable to recover it. 00:36:02.046 [2024-11-02 14:51:53.576887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.046 [2024-11-02 14:51:53.576911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.046 qpair failed and we were unable to recover it. 
00:36:02.046 [2024-11-02 14:51:53.577062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.046 [2024-11-02 14:51:53.577089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.046 qpair failed and we were unable to recover it. 00:36:02.046 [2024-11-02 14:51:53.577269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.046 [2024-11-02 14:51:53.577300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.046 qpair failed and we were unable to recover it. 00:36:02.046 [2024-11-02 14:51:53.577479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.046 [2024-11-02 14:51:53.577505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.046 qpair failed and we were unable to recover it. 00:36:02.046 [2024-11-02 14:51:53.577654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.046 [2024-11-02 14:51:53.577681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.046 qpair failed and we were unable to recover it. 00:36:02.046 [2024-11-02 14:51:53.577837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.046 [2024-11-02 14:51:53.577864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.046 qpair failed and we were unable to recover it. 00:36:02.046 [2024-11-02 14:51:53.578006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.046 [2024-11-02 14:51:53.578033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.046 qpair failed and we were unable to recover it. 00:36:02.046 [2024-11-02 14:51:53.578157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.046 [2024-11-02 14:51:53.578182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.046 qpair failed and we were unable to recover it. 00:36:02.046 [2024-11-02 14:51:53.578334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.046 [2024-11-02 14:51:53.578368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.046 qpair failed and we were unable to recover it. 00:36:02.046 [2024-11-02 14:51:53.578517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.046 [2024-11-02 14:51:53.578545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.046 qpair failed and we were unable to recover it. 00:36:02.046 [2024-11-02 14:51:53.578697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.046 [2024-11-02 14:51:53.578725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.046 qpair failed and we were unable to recover it. 
00:36:02.046 [2024-11-02 14:51:53.578874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.046 [2024-11-02 14:51:53.578901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.046 qpair failed and we were unable to recover it. 00:36:02.046 [2024-11-02 14:51:53.579052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.046 [2024-11-02 14:51:53.579078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.046 qpair failed and we were unable to recover it. 00:36:02.046 [2024-11-02 14:51:53.579189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.046 [2024-11-02 14:51:53.579214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.046 qpair failed and we were unable to recover it. 00:36:02.046 [2024-11-02 14:51:53.579372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.046 [2024-11-02 14:51:53.579397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.046 qpair failed and we were unable to recover it. 00:36:02.046 [2024-11-02 14:51:53.579545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.046 [2024-11-02 14:51:53.579571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.046 qpair failed and we were unable to recover it. 00:36:02.046 [2024-11-02 14:51:53.579720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.046 [2024-11-02 14:51:53.579751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.046 qpair failed and we were unable to recover it. 00:36:02.046 [2024-11-02 14:51:53.579924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.046 [2024-11-02 14:51:53.579951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.046 qpair failed and we were unable to recover it. 00:36:02.046 [2024-11-02 14:51:53.580101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.046 [2024-11-02 14:51:53.580128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.046 qpair failed and we were unable to recover it. 00:36:02.046 [2024-11-02 14:51:53.580265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.046 [2024-11-02 14:51:53.580290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.046 qpair failed and we were unable to recover it. 00:36:02.046 [2024-11-02 14:51:53.580505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.046 [2024-11-02 14:51:53.580532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.046 qpair failed and we were unable to recover it. 
00:36:02.046 [2024-11-02 14:51:53.580686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.046 [2024-11-02 14:51:53.580712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.046 qpair failed and we were unable to recover it. 00:36:02.046 [2024-11-02 14:51:53.580865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.046 [2024-11-02 14:51:53.580891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.047 qpair failed and we were unable to recover it. 00:36:02.047 [2024-11-02 14:51:53.581058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.047 [2024-11-02 14:51:53.581098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.047 qpair failed and we were unable to recover it. 00:36:02.047 [2024-11-02 14:51:53.581284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.047 [2024-11-02 14:51:53.581315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.047 qpair failed and we were unable to recover it. 00:36:02.047 [2024-11-02 14:51:53.581469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.047 [2024-11-02 14:51:53.581497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.047 qpair failed and we were unable to recover it. 00:36:02.047 [2024-11-02 14:51:53.581618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.047 [2024-11-02 14:51:53.581643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.047 qpair failed and we were unable to recover it. 00:36:02.047 [2024-11-02 14:51:53.581793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.047 [2024-11-02 14:51:53.581820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.047 qpair failed and we were unable to recover it. 00:36:02.047 [2024-11-02 14:51:53.582017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.047 [2024-11-02 14:51:53.582047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.047 qpair failed and we were unable to recover it. 00:36:02.047 [2024-11-02 14:51:53.582170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.047 [2024-11-02 14:51:53.582198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.047 qpair failed and we were unable to recover it. 00:36:02.047 [2024-11-02 14:51:53.582358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.047 [2024-11-02 14:51:53.582387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.047 qpair failed and we were unable to recover it. 
00:36:02.047 [2024-11-02 14:51:53.582537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.047 [2024-11-02 14:51:53.582565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.047 qpair failed and we were unable to recover it. 00:36:02.047 [2024-11-02 14:51:53.582759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.047 [2024-11-02 14:51:53.582790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.047 qpair failed and we were unable to recover it. 00:36:02.047 [2024-11-02 14:51:53.582959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.047 [2024-11-02 14:51:53.582987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.047 qpair failed and we were unable to recover it. 00:36:02.047 [2024-11-02 14:51:53.583156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.047 [2024-11-02 14:51:53.583187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.047 qpair failed and we were unable to recover it. 00:36:02.047 [2024-11-02 14:51:53.583338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.047 [2024-11-02 14:51:53.583364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.047 qpair failed and we were unable to recover it. 00:36:02.047 [2024-11-02 14:51:53.583521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.047 [2024-11-02 14:51:53.583549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.047 qpair failed and we were unable to recover it. 00:36:02.047 [2024-11-02 14:51:53.583699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.047 [2024-11-02 14:51:53.583726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.047 qpair failed and we were unable to recover it. 00:36:02.047 [2024-11-02 14:51:53.583880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.047 [2024-11-02 14:51:53.583909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.047 qpair failed and we were unable to recover it. 00:36:02.047 [2024-11-02 14:51:53.584038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.047 [2024-11-02 14:51:53.584063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.047 qpair failed and we were unable to recover it. 00:36:02.047 [2024-11-02 14:51:53.584194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.047 [2024-11-02 14:51:53.584222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.047 qpair failed and we were unable to recover it. 
00:36:02.047 [2024-11-02 14:51:53.584411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.047 [2024-11-02 14:51:53.584441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.047 qpair failed and we were unable to recover it. 00:36:02.047 [2024-11-02 14:51:53.584593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.047 [2024-11-02 14:51:53.584619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.047 qpair failed and we were unable to recover it. 00:36:02.047 [2024-11-02 14:51:53.584740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.047 [2024-11-02 14:51:53.584766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.047 qpair failed and we were unable to recover it. 00:36:02.047 [2024-11-02 14:51:53.584943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.047 [2024-11-02 14:51:53.584970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.047 qpair failed and we were unable to recover it. 00:36:02.047 [2024-11-02 14:51:53.585095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.047 [2024-11-02 14:51:53.585121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.047 qpair failed and we were unable to recover it. 00:36:02.047 [2024-11-02 14:51:53.585243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.047 [2024-11-02 14:51:53.585277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.047 qpair failed and we were unable to recover it. 00:36:02.047 [2024-11-02 14:51:53.585419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.047 [2024-11-02 14:51:53.585446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.047 qpair failed and we were unable to recover it. 00:36:02.047 [2024-11-02 14:51:53.585623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.047 [2024-11-02 14:51:53.585650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.047 qpair failed and we were unable to recover it. 00:36:02.047 [2024-11-02 14:51:53.585826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.047 [2024-11-02 14:51:53.585854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.047 qpair failed and we were unable to recover it. 00:36:02.047 [2024-11-02 14:51:53.586006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.047 [2024-11-02 14:51:53.586052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.047 qpair failed and we were unable to recover it. 
00:36:02.047 [2024-11-02 14:51:53.586221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.047 [2024-11-02 14:51:53.586248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.047 qpair failed and we were unable to recover it. 00:36:02.047 [2024-11-02 14:51:53.586418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.047 [2024-11-02 14:51:53.586445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.047 qpair failed and we were unable to recover it. 00:36:02.047 [2024-11-02 14:51:53.586594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.047 [2024-11-02 14:51:53.586621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.047 qpair failed and we were unable to recover it. 00:36:02.047 [2024-11-02 14:51:53.586764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.047 [2024-11-02 14:51:53.586792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.047 qpair failed and we were unable to recover it. 00:36:02.047 [2024-11-02 14:51:53.586936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.047 [2024-11-02 14:51:53.586961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.047 qpair failed and we were unable to recover it. 00:36:02.047 [2024-11-02 14:51:53.587106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.047 [2024-11-02 14:51:53.587134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.047 qpair failed and we were unable to recover it. 00:36:02.047 [2024-11-02 14:51:53.587309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.047 [2024-11-02 14:51:53.587338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.047 qpair failed and we were unable to recover it. 00:36:02.047 [2024-11-02 14:51:53.587492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.047 [2024-11-02 14:51:53.587519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.047 qpair failed and we were unable to recover it. 00:36:02.047 [2024-11-02 14:51:53.587689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.047 [2024-11-02 14:51:53.587717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.047 qpair failed and we were unable to recover it. 00:36:02.047 [2024-11-02 14:51:53.587871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.047 [2024-11-02 14:51:53.587899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.047 qpair failed and we were unable to recover it. 
00:36:02.048 [2024-11-02 14:51:53.588112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.048 [2024-11-02 14:51:53.588139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.048 qpair failed and we were unable to recover it. 00:36:02.048 [2024-11-02 14:51:53.588269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.048 [2024-11-02 14:51:53.588297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.048 qpair failed and we were unable to recover it. 00:36:02.048 [2024-11-02 14:51:53.588454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.048 [2024-11-02 14:51:53.588482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.048 qpair failed and we were unable to recover it. 00:36:02.048 [2024-11-02 14:51:53.588611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.048 [2024-11-02 14:51:53.588636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.048 qpair failed and we were unable to recover it. 00:36:02.048 [2024-11-02 14:51:53.588812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.048 [2024-11-02 14:51:53.588843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.048 qpair failed and we were unable to recover it. 00:36:02.048 [2024-11-02 14:51:53.589044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.048 [2024-11-02 14:51:53.589072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.048 qpair failed and we were unable to recover it. 00:36:02.048 [2024-11-02 14:51:53.589220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.048 [2024-11-02 14:51:53.589247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.048 qpair failed and we were unable to recover it. 00:36:02.048 [2024-11-02 14:51:53.589382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.048 [2024-11-02 14:51:53.589409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.048 qpair failed and we were unable to recover it. 00:36:02.048 [2024-11-02 14:51:53.589560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.048 [2024-11-02 14:51:53.589587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.048 qpair failed and we were unable to recover it. 00:36:02.048 [2024-11-02 14:51:53.589750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.048 [2024-11-02 14:51:53.589777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.048 qpair failed and we were unable to recover it. 
00:36:02.048 [2024-11-02 14:51:53.589926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.048 [2024-11-02 14:51:53.589953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.048 qpair failed and we were unable to recover it. 00:36:02.048 [2024-11-02 14:51:53.590075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.048 [2024-11-02 14:51:53.590102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.048 qpair failed and we were unable to recover it. 00:36:02.048 [2024-11-02 14:51:53.590280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.048 [2024-11-02 14:51:53.590308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.048 qpair failed and we were unable to recover it. 00:36:02.048 [2024-11-02 14:51:53.590481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.048 [2024-11-02 14:51:53.590508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.048 qpair failed and we were unable to recover it. 00:36:02.048 [2024-11-02 14:51:53.590619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.048 [2024-11-02 14:51:53.590645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.048 qpair failed and we were unable to recover it. 00:36:02.048 [2024-11-02 14:51:53.590771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.048 [2024-11-02 14:51:53.590796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.048 qpair failed and we were unable to recover it. 00:36:02.048 [2024-11-02 14:51:53.590949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.048 [2024-11-02 14:51:53.590979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.048 qpair failed and we were unable to recover it. 00:36:02.048 [2024-11-02 14:51:53.591127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.048 [2024-11-02 14:51:53.591154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.048 qpair failed and we were unable to recover it. 00:36:02.048 [2024-11-02 14:51:53.591326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.048 [2024-11-02 14:51:53.591354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.048 qpair failed and we were unable to recover it. 00:36:02.048 [2024-11-02 14:51:53.591529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.048 [2024-11-02 14:51:53.591556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.048 qpair failed and we were unable to recover it. 
00:36:02.048 [2024-11-02 14:51:53.591698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.048 [2024-11-02 14:51:53.591724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.048 qpair failed and we were unable to recover it. 00:36:02.048 [2024-11-02 14:51:53.591895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.048 [2024-11-02 14:51:53.591922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.048 qpair failed and we were unable to recover it. 00:36:02.048 [2024-11-02 14:51:53.592042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.048 [2024-11-02 14:51:53.592069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.048 qpair failed and we were unable to recover it. 00:36:02.048 [2024-11-02 14:51:53.592206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.048 [2024-11-02 14:51:53.592234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.048 qpair failed and we were unable to recover it. 00:36:02.048 [2024-11-02 14:51:53.592393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.048 [2024-11-02 14:51:53.592420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.048 qpair failed and we were unable to recover it. 00:36:02.048 [2024-11-02 14:51:53.592560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.048 [2024-11-02 14:51:53.592590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.048 qpair failed and we were unable to recover it. 00:36:02.048 [2024-11-02 14:51:53.592735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.048 [2024-11-02 14:51:53.592761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.048 qpair failed and we were unable to recover it. 00:36:02.048 [2024-11-02 14:51:53.592888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.048 [2024-11-02 14:51:53.592914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.048 qpair failed and we were unable to recover it. 00:36:02.048 [2024-11-02 14:51:53.593092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.048 [2024-11-02 14:51:53.593119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.048 qpair failed and we were unable to recover it. 00:36:02.048 [2024-11-02 14:51:53.593237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.048 [2024-11-02 14:51:53.593270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.048 qpair failed and we were unable to recover it. 
00:36:02.048 [2024-11-02 14:51:53.593444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.048 [2024-11-02 14:51:53.593471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.048 qpair failed and we were unable to recover it. 00:36:02.048 [2024-11-02 14:51:53.593625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.048 [2024-11-02 14:51:53.593652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.048 qpair failed and we were unable to recover it. 00:36:02.048 [2024-11-02 14:51:53.593794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.048 [2024-11-02 14:51:53.593822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.048 qpair failed and we were unable to recover it. 00:36:02.048 [2024-11-02 14:51:53.593950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.048 [2024-11-02 14:51:53.593976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.048 qpair failed and we were unable to recover it. 00:36:02.048 [2024-11-02 14:51:53.594133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.048 [2024-11-02 14:51:53.594160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.048 qpair failed and we were unable to recover it. 00:36:02.048 [2024-11-02 14:51:53.594310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.048 [2024-11-02 14:51:53.594336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.048 qpair failed and we were unable to recover it. 00:36:02.048 [2024-11-02 14:51:53.594513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.048 [2024-11-02 14:51:53.594554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.048 qpair failed and we were unable to recover it. 00:36:02.048 [2024-11-02 14:51:53.594708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.048 [2024-11-02 14:51:53.594737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.048 qpair failed and we were unable to recover it. 00:36:02.048 [2024-11-02 14:51:53.594904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.049 [2024-11-02 14:51:53.594932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.049 qpair failed and we were unable to recover it. 00:36:02.049 [2024-11-02 14:51:53.595081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.049 [2024-11-02 14:51:53.595109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.049 qpair failed and we were unable to recover it. 
00:36:02.049 [2024-11-02 14:51:53.595288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.049 [2024-11-02 14:51:53.595317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.049 qpair failed and we were unable to recover it. 00:36:02.049 [2024-11-02 14:51:53.595480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.049 [2024-11-02 14:51:53.595508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.049 qpair failed and we were unable to recover it. 00:36:02.049 [2024-11-02 14:51:53.595656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.049 [2024-11-02 14:51:53.595685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.049 qpair failed and we were unable to recover it. 00:36:02.049 [2024-11-02 14:51:53.595811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.049 [2024-11-02 14:51:53.595836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.049 qpair failed and we were unable to recover it. 00:36:02.049 [2024-11-02 14:51:53.595960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.049 [2024-11-02 14:51:53.595988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.049 qpair failed and we were unable to recover it. 00:36:02.049 [2024-11-02 14:51:53.596168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.049 [2024-11-02 14:51:53.596196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.049 qpair failed and we were unable to recover it. 00:36:02.049 [2024-11-02 14:51:53.596322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.049 [2024-11-02 14:51:53.596348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.049 qpair failed and we were unable to recover it. 00:36:02.049 [2024-11-02 14:51:53.596524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.049 [2024-11-02 14:51:53.596551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.049 qpair failed and we were unable to recover it. 00:36:02.049 [2024-11-02 14:51:53.596726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.049 [2024-11-02 14:51:53.596754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.049 qpair failed and we were unable to recover it. 00:36:02.049 [2024-11-02 14:51:53.596939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.049 [2024-11-02 14:51:53.596974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.049 qpair failed and we were unable to recover it. 
00:36:02.049 [2024-11-02 14:51:53.597125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.049 [2024-11-02 14:51:53.597153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.049 qpair failed and we were unable to recover it. 00:36:02.049 [2024-11-02 14:51:53.597302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.049 [2024-11-02 14:51:53.597328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.049 qpair failed and we were unable to recover it. 00:36:02.049 [2024-11-02 14:51:53.597487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.049 [2024-11-02 14:51:53.597515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.049 qpair failed and we were unable to recover it. 00:36:02.049 [2024-11-02 14:51:53.597662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.049 [2024-11-02 14:51:53.597689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.049 qpair failed and we were unable to recover it. 00:36:02.049 [2024-11-02 14:51:53.597863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.049 [2024-11-02 14:51:53.597891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.049 qpair failed and we were unable to recover it. 00:36:02.049 [2024-11-02 14:51:53.598011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.049 [2024-11-02 14:51:53.598036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.049 qpair failed and we were unable to recover it. 00:36:02.049 [2024-11-02 14:51:53.598194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.049 [2024-11-02 14:51:53.598221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.049 qpair failed and we were unable to recover it. 00:36:02.049 [2024-11-02 14:51:53.598359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.049 [2024-11-02 14:51:53.598386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.049 qpair failed and we were unable to recover it. 00:36:02.049 [2024-11-02 14:51:53.598540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.049 [2024-11-02 14:51:53.598567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.049 qpair failed and we were unable to recover it. 00:36:02.049 [2024-11-02 14:51:53.598722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.049 [2024-11-02 14:51:53.598750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.049 qpair failed and we were unable to recover it. 
00:36:02.049 [2024-11-02 14:51:53.598903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.049 [2024-11-02 14:51:53.598931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.049 qpair failed and we were unable to recover it. 00:36:02.049 [2024-11-02 14:51:53.599107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.049 [2024-11-02 14:51:53.599134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.049 qpair failed and we were unable to recover it. 00:36:02.049 [2024-11-02 14:51:53.599294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.049 [2024-11-02 14:51:53.599321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.049 qpair failed and we were unable to recover it. 00:36:02.049 [2024-11-02 14:51:53.599479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.049 [2024-11-02 14:51:53.599510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.049 qpair failed and we were unable to recover it. 00:36:02.049 [2024-11-02 14:51:53.599666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.049 [2024-11-02 14:51:53.599694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.049 qpair failed and we were unable to recover it. 00:36:02.049 [2024-11-02 14:51:53.599857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.049 [2024-11-02 14:51:53.599885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.049 qpair failed and we were unable to recover it. 00:36:02.049 [2024-11-02 14:51:53.600034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.049 [2024-11-02 14:51:53.600061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.049 qpair failed and we were unable to recover it. 00:36:02.049 [2024-11-02 14:51:53.600212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.049 [2024-11-02 14:51:53.600239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.049 qpair failed and we were unable to recover it. 00:36:02.049 [2024-11-02 14:51:53.600364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.049 [2024-11-02 14:51:53.600390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.049 qpair failed and we were unable to recover it. 00:36:02.049 [2024-11-02 14:51:53.600513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.049 [2024-11-02 14:51:53.600540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.049 qpair failed and we were unable to recover it. 
00:36:02.049 [2024-11-02 14:51:53.600711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.049 [2024-11-02 14:51:53.600739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.049 qpair failed and we were unable to recover it. 00:36:02.049 [2024-11-02 14:51:53.600889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.049 [2024-11-02 14:51:53.600917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.049 qpair failed and we were unable to recover it. 00:36:02.050 [2024-11-02 14:51:53.601070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.050 [2024-11-02 14:51:53.601098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.050 qpair failed and we were unable to recover it. 00:36:02.050 [2024-11-02 14:51:53.601265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.050 [2024-11-02 14:51:53.601306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.050 qpair failed and we were unable to recover it. 00:36:02.050 [2024-11-02 14:51:53.601501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.050 [2024-11-02 14:51:53.601542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.050 qpair failed and we were unable to recover it. 00:36:02.050 [2024-11-02 14:51:53.601746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.050 [2024-11-02 14:51:53.601793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.050 qpair failed and we were unable to recover it. 00:36:02.050 [2024-11-02 14:51:53.601981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.050 [2024-11-02 14:51:53.602010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.050 qpair failed and we were unable to recover it. 00:36:02.050 [2024-11-02 14:51:53.602150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.050 [2024-11-02 14:51:53.602177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.050 qpair failed and we were unable to recover it. 00:36:02.050 [2024-11-02 14:51:53.602345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.050 [2024-11-02 14:51:53.602387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.050 qpair failed and we were unable to recover it. 00:36:02.050 [2024-11-02 14:51:53.602544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.050 [2024-11-02 14:51:53.602572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.050 qpair failed and we were unable to recover it. 
00:36:02.050 [2024-11-02 14:51:53.602698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.050 [2024-11-02 14:51:53.602724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420
00:36:02.050 qpair failed and we were unable to recover it.
00:36:02.050 [... the same three-line sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=... with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats continuously from 14:51:53.602 through 14:51:53.643, mostly for tqpair=0x648340, with a few attempts on tqpair=0x7f54bc000b90 and tqpair=0x7f54c0000b90 ...]
00:36:02.055 [2024-11-02 14:51:53.643386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.055 [2024-11-02 14:51:53.643413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420
00:36:02.055 qpair failed and we were unable to recover it.
00:36:02.055 [2024-11-02 14:51:53.643539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.055 [2024-11-02 14:51:53.643566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.055 qpair failed and we were unable to recover it. 00:36:02.055 [2024-11-02 14:51:53.643739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.055 [2024-11-02 14:51:53.643767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.055 qpair failed and we were unable to recover it. 00:36:02.055 [2024-11-02 14:51:53.643915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.055 [2024-11-02 14:51:53.643941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.055 qpair failed and we were unable to recover it. 00:36:02.055 [2024-11-02 14:51:53.644113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.055 [2024-11-02 14:51:53.644140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.055 qpair failed and we were unable to recover it. 00:36:02.055 [2024-11-02 14:51:53.644294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.055 [2024-11-02 14:51:53.644322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.055 qpair failed and we were unable to recover it. 00:36:02.055 [2024-11-02 14:51:53.644444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.055 [2024-11-02 14:51:53.644469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.055 qpair failed and we were unable to recover it. 00:36:02.055 [2024-11-02 14:51:53.644588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.055 [2024-11-02 14:51:53.644613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.055 qpair failed and we were unable to recover it. 00:36:02.055 [2024-11-02 14:51:53.644765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.055 [2024-11-02 14:51:53.644793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.055 qpair failed and we were unable to recover it. 00:36:02.055 [2024-11-02 14:51:53.644970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.055 [2024-11-02 14:51:53.644997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.055 qpair failed and we were unable to recover it. 00:36:02.055 [2024-11-02 14:51:53.645118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.055 [2024-11-02 14:51:53.645144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.056 qpair failed and we were unable to recover it. 
00:36:02.056 [2024-11-02 14:51:53.645276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.056 [2024-11-02 14:51:53.645304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.056 qpair failed and we were unable to recover it. 00:36:02.056 [2024-11-02 14:51:53.645430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.056 [2024-11-02 14:51:53.645456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.056 qpair failed and we were unable to recover it. 00:36:02.056 [2024-11-02 14:51:53.645608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.056 [2024-11-02 14:51:53.645635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.056 qpair failed and we were unable to recover it. 00:36:02.056 [2024-11-02 14:51:53.645810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.056 [2024-11-02 14:51:53.645837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.056 qpair failed and we were unable to recover it. 00:36:02.056 [2024-11-02 14:51:53.646007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.056 [2024-11-02 14:51:53.646035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.056 qpair failed and we were unable to recover it. 00:36:02.056 [2024-11-02 14:51:53.646188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.056 [2024-11-02 14:51:53.646215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.056 qpair failed and we were unable to recover it. 00:36:02.056 [2024-11-02 14:51:53.646395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.056 [2024-11-02 14:51:53.646423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.056 qpair failed and we were unable to recover it. 00:36:02.056 [2024-11-02 14:51:53.646569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.056 [2024-11-02 14:51:53.646606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.056 qpair failed and we were unable to recover it. 00:36:02.056 [2024-11-02 14:51:53.646815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.056 [2024-11-02 14:51:53.646842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.056 qpair failed and we were unable to recover it. 00:36:02.056 [2024-11-02 14:51:53.646958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.056 [2024-11-02 14:51:53.646984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.056 qpair failed and we were unable to recover it. 
00:36:02.056 [2024-11-02 14:51:53.647133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.056 [2024-11-02 14:51:53.647160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.056 qpair failed and we were unable to recover it. 00:36:02.056 [2024-11-02 14:51:53.647353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.056 [2024-11-02 14:51:53.647383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.056 qpair failed and we were unable to recover it. 00:36:02.056 [2024-11-02 14:51:53.647563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.056 [2024-11-02 14:51:53.647590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.056 qpair failed and we were unable to recover it. 00:36:02.056 [2024-11-02 14:51:53.647708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.056 [2024-11-02 14:51:53.647734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.056 qpair failed and we were unable to recover it. 00:36:02.056 [2024-11-02 14:51:53.647854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.056 [2024-11-02 14:51:53.647881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.056 qpair failed and we were unable to recover it. 00:36:02.056 [2024-11-02 14:51:53.648058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.056 [2024-11-02 14:51:53.648085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.056 qpair failed and we were unable to recover it. 00:36:02.056 [2024-11-02 14:51:53.648252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.056 [2024-11-02 14:51:53.648305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.056 qpair failed and we were unable to recover it. 00:36:02.056 [2024-11-02 14:51:53.648459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.056 [2024-11-02 14:51:53.648486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.056 qpair failed and we were unable to recover it. 00:36:02.056 [2024-11-02 14:51:53.648637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.056 [2024-11-02 14:51:53.648664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.056 qpair failed and we were unable to recover it. 00:36:02.056 [2024-11-02 14:51:53.648805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.056 [2024-11-02 14:51:53.648832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.056 qpair failed and we were unable to recover it. 
00:36:02.056 [2024-11-02 14:51:53.649005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.056 [2024-11-02 14:51:53.649032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.056 qpair failed and we were unable to recover it. 00:36:02.056 [2024-11-02 14:51:53.649188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.056 [2024-11-02 14:51:53.649215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.056 qpair failed and we were unable to recover it. 00:36:02.056 [2024-11-02 14:51:53.649377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.056 [2024-11-02 14:51:53.649405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.056 qpair failed and we were unable to recover it. 00:36:02.056 [2024-11-02 14:51:53.649577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.056 [2024-11-02 14:51:53.649604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.056 qpair failed and we were unable to recover it. 00:36:02.056 [2024-11-02 14:51:53.649812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.056 [2024-11-02 14:51:53.649839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.056 qpair failed and we were unable to recover it. 00:36:02.056 [2024-11-02 14:51:53.650001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.056 [2024-11-02 14:51:53.650030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.056 qpair failed and we were unable to recover it. 00:36:02.056 [2024-11-02 14:51:53.650198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.056 [2024-11-02 14:51:53.650228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.056 qpair failed and we were unable to recover it. 00:36:02.056 [2024-11-02 14:51:53.650433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.056 [2024-11-02 14:51:53.650460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.056 qpair failed and we were unable to recover it. 00:36:02.056 [2024-11-02 14:51:53.650614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.056 [2024-11-02 14:51:53.650641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.056 qpair failed and we were unable to recover it. 00:36:02.056 [2024-11-02 14:51:53.650784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.056 [2024-11-02 14:51:53.650811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.056 qpair failed and we were unable to recover it. 
00:36:02.056 [2024-11-02 14:51:53.650961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.056 [2024-11-02 14:51:53.650989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.056 qpair failed and we were unable to recover it. 00:36:02.056 [2024-11-02 14:51:53.651150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.056 [2024-11-02 14:51:53.651180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.056 qpair failed and we were unable to recover it. 00:36:02.056 [2024-11-02 14:51:53.651350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.056 [2024-11-02 14:51:53.651378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.056 qpair failed and we were unable to recover it. 00:36:02.056 [2024-11-02 14:51:53.651523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.056 [2024-11-02 14:51:53.651549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.056 qpair failed and we were unable to recover it. 00:36:02.056 [2024-11-02 14:51:53.651675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.056 [2024-11-02 14:51:53.651701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.056 qpair failed and we were unable to recover it. 00:36:02.056 [2024-11-02 14:51:53.651859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.056 [2024-11-02 14:51:53.651887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.056 qpair failed and we were unable to recover it. 00:36:02.056 [2024-11-02 14:51:53.652104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.056 [2024-11-02 14:51:53.652132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.056 qpair failed and we were unable to recover it. 00:36:02.056 [2024-11-02 14:51:53.652284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.056 [2024-11-02 14:51:53.652313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.056 qpair failed and we were unable to recover it. 00:36:02.056 [2024-11-02 14:51:53.652465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.056 [2024-11-02 14:51:53.652492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.056 qpair failed and we were unable to recover it. 00:36:02.057 [2024-11-02 14:51:53.652668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.057 [2024-11-02 14:51:53.652695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.057 qpair failed and we were unable to recover it. 
00:36:02.057 [2024-11-02 14:51:53.652846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.057 [2024-11-02 14:51:53.652890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.057 qpair failed and we were unable to recover it. 00:36:02.057 [2024-11-02 14:51:53.653068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.057 [2024-11-02 14:51:53.653096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.057 qpair failed and we were unable to recover it. 00:36:02.057 [2024-11-02 14:51:53.653282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.057 [2024-11-02 14:51:53.653311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.057 qpair failed and we were unable to recover it. 00:36:02.057 [2024-11-02 14:51:53.653476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.057 [2024-11-02 14:51:53.653506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.057 qpair failed and we were unable to recover it. 00:36:02.057 [2024-11-02 14:51:53.653694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.057 [2024-11-02 14:51:53.653720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.057 qpair failed and we were unable to recover it. 00:36:02.057 [2024-11-02 14:51:53.653834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.057 [2024-11-02 14:51:53.653860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.057 qpair failed and we were unable to recover it. 00:36:02.057 [2024-11-02 14:51:53.654032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.057 [2024-11-02 14:51:53.654059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.057 qpair failed and we were unable to recover it. 00:36:02.057 [2024-11-02 14:51:53.654177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.057 [2024-11-02 14:51:53.654203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.057 qpair failed and we were unable to recover it. 00:36:02.057 [2024-11-02 14:51:53.654384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.057 [2024-11-02 14:51:53.654413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.057 qpair failed and we were unable to recover it. 00:36:02.057 [2024-11-02 14:51:53.654561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.057 [2024-11-02 14:51:53.654589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.057 qpair failed and we were unable to recover it. 
00:36:02.057 [2024-11-02 14:51:53.654740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.057 [2024-11-02 14:51:53.654768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.057 qpair failed and we were unable to recover it. 00:36:02.057 [2024-11-02 14:51:53.654936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.057 [2024-11-02 14:51:53.654963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.057 qpair failed and we were unable to recover it. 00:36:02.057 [2024-11-02 14:51:53.655110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.057 [2024-11-02 14:51:53.655138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.057 qpair failed and we were unable to recover it. 00:36:02.057 [2024-11-02 14:51:53.655263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.057 [2024-11-02 14:51:53.655289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.057 qpair failed and we were unable to recover it. 00:36:02.057 [2024-11-02 14:51:53.655432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.057 [2024-11-02 14:51:53.655459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.057 qpair failed and we were unable to recover it. 00:36:02.057 [2024-11-02 14:51:53.655615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.057 [2024-11-02 14:51:53.655642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.057 qpair failed and we were unable to recover it. 00:36:02.057 [2024-11-02 14:51:53.655794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.057 [2024-11-02 14:51:53.655821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.057 qpair failed and we were unable to recover it. 00:36:02.057 [2024-11-02 14:51:53.655938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.057 [2024-11-02 14:51:53.655965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.057 qpair failed and we were unable to recover it. 00:36:02.057 [2024-11-02 14:51:53.656107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.057 [2024-11-02 14:51:53.656152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.057 qpair failed and we were unable to recover it. 00:36:02.057 [2024-11-02 14:51:53.656308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.057 [2024-11-02 14:51:53.656336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.057 qpair failed and we were unable to recover it. 
00:36:02.057 [2024-11-02 14:51:53.656453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.057 [2024-11-02 14:51:53.656480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.057 qpair failed and we were unable to recover it. 00:36:02.057 [2024-11-02 14:51:53.656608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.057 [2024-11-02 14:51:53.656636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.057 qpair failed and we were unable to recover it. 00:36:02.057 [2024-11-02 14:51:53.656793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.057 [2024-11-02 14:51:53.656819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.057 qpair failed and we were unable to recover it. 00:36:02.057 [2024-11-02 14:51:53.656970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.057 [2024-11-02 14:51:53.656997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.057 qpair failed and we were unable to recover it. 00:36:02.057 [2024-11-02 14:51:53.657150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.057 [2024-11-02 14:51:53.657177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.057 qpair failed and we were unable to recover it. 00:36:02.057 [2024-11-02 14:51:53.657330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.057 [2024-11-02 14:51:53.657356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.057 qpair failed and we were unable to recover it. 00:36:02.057 [2024-11-02 14:51:53.657508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.057 [2024-11-02 14:51:53.657535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.057 qpair failed and we were unable to recover it. 00:36:02.057 [2024-11-02 14:51:53.657695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.057 [2024-11-02 14:51:53.657723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.057 qpair failed and we were unable to recover it. 00:36:02.057 [2024-11-02 14:51:53.657882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.057 [2024-11-02 14:51:53.657912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.057 qpair failed and we were unable to recover it. 00:36:02.057 [2024-11-02 14:51:53.658120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.057 [2024-11-02 14:51:53.658147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.057 qpair failed and we were unable to recover it. 
00:36:02.057 [2024-11-02 14:51:53.658311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.057 [2024-11-02 14:51:53.658338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.057 qpair failed and we were unable to recover it. 00:36:02.057 [2024-11-02 14:51:53.658468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.057 [2024-11-02 14:51:53.658495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.057 qpair failed and we were unable to recover it. 00:36:02.057 [2024-11-02 14:51:53.658617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.057 [2024-11-02 14:51:53.658642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.057 qpair failed and we were unable to recover it. 00:36:02.057 [2024-11-02 14:51:53.658808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.057 [2024-11-02 14:51:53.658837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.057 qpair failed and we were unable to recover it. 00:36:02.057 [2024-11-02 14:51:53.658994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.057 [2024-11-02 14:51:53.659023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.057 qpair failed and we were unable to recover it. 00:36:02.057 [2024-11-02 14:51:53.659223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.057 [2024-11-02 14:51:53.659254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.057 qpair failed and we were unable to recover it. 00:36:02.057 [2024-11-02 14:51:53.659412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.057 [2024-11-02 14:51:53.659439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.057 qpair failed and we were unable to recover it. 00:36:02.057 [2024-11-02 14:51:53.659609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.057 [2024-11-02 14:51:53.659638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.057 qpair failed and we were unable to recover it. 00:36:02.058 [2024-11-02 14:51:53.659807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.058 [2024-11-02 14:51:53.659834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.058 qpair failed and we were unable to recover it. 00:36:02.058 [2024-11-02 14:51:53.660006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.058 [2024-11-02 14:51:53.660036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.058 qpair failed and we were unable to recover it. 
00:36:02.058 [2024-11-02 14:51:53.660191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.058 [2024-11-02 14:51:53.660220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.058 qpair failed and we were unable to recover it. 00:36:02.058 [2024-11-02 14:51:53.660399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.058 [2024-11-02 14:51:53.660426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.058 qpair failed and we were unable to recover it. 00:36:02.058 [2024-11-02 14:51:53.660575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.058 [2024-11-02 14:51:53.660618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.058 qpair failed and we were unable to recover it. 00:36:02.058 [2024-11-02 14:51:53.660818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.058 [2024-11-02 14:51:53.660845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.058 qpair failed and we were unable to recover it. 00:36:02.058 [2024-11-02 14:51:53.660965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.058 [2024-11-02 14:51:53.660992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.058 qpair failed and we were unable to recover it. 00:36:02.058 [2024-11-02 14:51:53.661110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.058 [2024-11-02 14:51:53.661135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.058 qpair failed and we were unable to recover it. 00:36:02.058 [2024-11-02 14:51:53.661282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.058 [2024-11-02 14:51:53.661317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.058 qpair failed and we were unable to recover it. 00:36:02.058 [2024-11-02 14:51:53.661441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.058 [2024-11-02 14:51:53.661468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.058 qpair failed and we were unable to recover it. 00:36:02.058 [2024-11-02 14:51:53.661638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.058 [2024-11-02 14:51:53.661666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.058 qpair failed and we were unable to recover it. 00:36:02.058 [2024-11-02 14:51:53.661858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.058 [2024-11-02 14:51:53.661887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.058 qpair failed and we were unable to recover it. 
00:36:02.058 [2024-11-02 14:51:53.662058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.058 [2024-11-02 14:51:53.662085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.058 qpair failed and we were unable to recover it. 00:36:02.058 [2024-11-02 14:51:53.662210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.058 [2024-11-02 14:51:53.662234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.058 qpair failed and we were unable to recover it. 00:36:02.058 [2024-11-02 14:51:53.662364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.058 [2024-11-02 14:51:53.662389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.058 qpair failed and we were unable to recover it. 00:36:02.058 [2024-11-02 14:51:53.662563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.058 [2024-11-02 14:51:53.662590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.058 qpair failed and we were unable to recover it. 00:36:02.058 [2024-11-02 14:51:53.662780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.058 [2024-11-02 14:51:53.662806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.058 qpair failed and we were unable to recover it. 00:36:02.058 [2024-11-02 14:51:53.662984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.058 [2024-11-02 14:51:53.663012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.058 qpair failed and we were unable to recover it. 00:36:02.058 [2024-11-02 14:51:53.663189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.058 [2024-11-02 14:51:53.663216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.058 qpair failed and we were unable to recover it. 00:36:02.058 [2024-11-02 14:51:53.663362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.058 [2024-11-02 14:51:53.663393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.058 qpair failed and we were unable to recover it. 00:36:02.058 [2024-11-02 14:51:53.663569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.058 [2024-11-02 14:51:53.663595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.058 qpair failed and we were unable to recover it. 00:36:02.058 [2024-11-02 14:51:53.663771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.058 [2024-11-02 14:51:53.663797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.058 qpair failed and we were unable to recover it. 
00:36:02.058 [2024-11-02 14:51:53.663955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.058 [2024-11-02 14:51:53.663982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.058 qpair failed and we were unable to recover it. 00:36:02.058 [2024-11-02 14:51:53.664134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.058 [2024-11-02 14:51:53.664161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.058 qpair failed and we were unable to recover it. 00:36:02.058 [2024-11-02 14:51:53.664272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.058 [2024-11-02 14:51:53.664297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.058 qpair failed and we were unable to recover it. 00:36:02.058 [2024-11-02 14:51:53.664452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.058 [2024-11-02 14:51:53.664480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.058 qpair failed and we were unable to recover it. 00:36:02.058 [2024-11-02 14:51:53.664634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.058 [2024-11-02 14:51:53.664677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.058 qpair failed and we were unable to recover it. 00:36:02.058 [2024-11-02 14:51:53.664853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.058 [2024-11-02 14:51:53.664879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.058 qpair failed and we were unable to recover it. 00:36:02.058 [2024-11-02 14:51:53.665038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.058 [2024-11-02 14:51:53.665064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.058 qpair failed and we were unable to recover it. 00:36:02.058 [2024-11-02 14:51:53.665231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.058 [2024-11-02 14:51:53.665277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.058 qpair failed and we were unable to recover it. 00:36:02.058 [2024-11-02 14:51:53.665478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.058 [2024-11-02 14:51:53.665505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.058 qpair failed and we were unable to recover it. 00:36:02.058 [2024-11-02 14:51:53.665675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.058 [2024-11-02 14:51:53.665704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.058 qpair failed and we were unable to recover it. 
00:36:02.058 [2024-11-02 14:51:53.665882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.058 [2024-11-02 14:51:53.665908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.058 qpair failed and we were unable to recover it. 00:36:02.058 [2024-11-02 14:51:53.666054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.058 [2024-11-02 14:51:53.666081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.059 qpair failed and we were unable to recover it. 00:36:02.059 [2024-11-02 14:51:53.666207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.059 [2024-11-02 14:51:53.666232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.059 qpair failed and we were unable to recover it. 00:36:02.059 [2024-11-02 14:51:53.666418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.059 [2024-11-02 14:51:53.666445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.059 qpair failed and we were unable to recover it. 00:36:02.059 [2024-11-02 14:51:53.666629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.059 [2024-11-02 14:51:53.666655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.059 qpair failed and we were unable to recover it. 00:36:02.059 [2024-11-02 14:51:53.666801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.059 [2024-11-02 14:51:53.666828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.059 qpair failed and we were unable to recover it. 00:36:02.059 [2024-11-02 14:51:53.667039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.059 [2024-11-02 14:51:53.667070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.059 qpair failed and we were unable to recover it. 00:36:02.059 [2024-11-02 14:51:53.667218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.059 [2024-11-02 14:51:53.667246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.059 qpair failed and we were unable to recover it. 00:36:02.059 [2024-11-02 14:51:53.667405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.059 [2024-11-02 14:51:53.667432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.059 qpair failed and we were unable to recover it. 00:36:02.059 [2024-11-02 14:51:53.667612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.059 [2024-11-02 14:51:53.667639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.059 qpair failed and we were unable to recover it. 
00:36:02.059 [2024-11-02 14:51:53.667834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.059 [2024-11-02 14:51:53.667861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420
00:36:02.059 qpair failed and we were unable to recover it.
00:36:02.059 [... the same three-line sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for every reconnect attempt from 14:51:53.667834 through 14:51:53.707857 ...]
00:36:02.064 [2024-11-02 14:51:53.707828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.064 [2024-11-02 14:51:53.707857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420
00:36:02.064 qpair failed and we were unable to recover it.
00:36:02.064 [2024-11-02 14:51:53.708005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.064 [2024-11-02 14:51:53.708031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.064 qpair failed and we were unable to recover it. 00:36:02.064 [2024-11-02 14:51:53.708178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.064 [2024-11-02 14:51:53.708221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.064 qpair failed and we were unable to recover it. 00:36:02.064 [2024-11-02 14:51:53.708377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.064 [2024-11-02 14:51:53.708404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.064 qpair failed and we were unable to recover it. 00:36:02.064 [2024-11-02 14:51:53.708577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.064 [2024-11-02 14:51:53.708604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.064 qpair failed and we were unable to recover it. 00:36:02.064 [2024-11-02 14:51:53.708779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.064 [2024-11-02 14:51:53.708809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.064 qpair failed and we were unable to recover it. 00:36:02.064 [2024-11-02 14:51:53.708982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.064 [2024-11-02 14:51:53.709008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.064 qpair failed and we were unable to recover it. 00:36:02.064 [2024-11-02 14:51:53.709183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.064 [2024-11-02 14:51:53.709209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.064 qpair failed and we were unable to recover it. 00:36:02.064 [2024-11-02 14:51:53.709360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.064 [2024-11-02 14:51:53.709391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.064 qpair failed and we were unable to recover it. 00:36:02.064 [2024-11-02 14:51:53.709600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.064 [2024-11-02 14:51:53.709626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.064 qpair failed and we were unable to recover it. 00:36:02.064 [2024-11-02 14:51:53.709751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.064 [2024-11-02 14:51:53.709777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.064 qpair failed and we were unable to recover it. 
00:36:02.064 [2024-11-02 14:51:53.709926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.065 [2024-11-02 14:51:53.709953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.065 qpair failed and we were unable to recover it. 00:36:02.065 [2024-11-02 14:51:53.710066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.065 [2024-11-02 14:51:53.710090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.065 qpair failed and we were unable to recover it. 00:36:02.065 [2024-11-02 14:51:53.710242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.065 [2024-11-02 14:51:53.710278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.065 qpair failed and we were unable to recover it. 00:36:02.065 [2024-11-02 14:51:53.710456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.065 [2024-11-02 14:51:53.710485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.065 qpair failed and we were unable to recover it. 00:36:02.065 [2024-11-02 14:51:53.710644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.065 [2024-11-02 14:51:53.710673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.065 qpair failed and we were unable to recover it. 00:36:02.065 [2024-11-02 14:51:53.710816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.065 [2024-11-02 14:51:53.710844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.065 qpair failed and we were unable to recover it. 00:36:02.065 [2024-11-02 14:51:53.711000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.065 [2024-11-02 14:51:53.711027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.065 qpair failed and we were unable to recover it. 00:36:02.065 [2024-11-02 14:51:53.711201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.065 [2024-11-02 14:51:53.711228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.065 qpair failed and we were unable to recover it. 00:36:02.065 [2024-11-02 14:51:53.711383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.065 [2024-11-02 14:51:53.711425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.065 qpair failed and we were unable to recover it. 00:36:02.065 [2024-11-02 14:51:53.711587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.065 [2024-11-02 14:51:53.711616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.065 qpair failed and we were unable to recover it. 
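errno = 111 in the connect() failures above is ECONNREFUSED on Linux: the target at 10.0.0.2 is reachable, but nothing is accepting connections on port 4420 (the standard NVMe/TCP port) at that moment, so every qpair connect attempt is rejected immediately. The sketch below is a minimal, SPDK-independent reproduction of that condition using plain POSIX sockets; the address and port simply mirror the log and are illustrative, not part of the test harness.

/* Minimal, SPDK-independent sketch: a blocking connect() to a TCP port
 * with no listener fails with errno 111 (ECONNREFUSED) on Linux, which
 * is the errno reported by posix_sock_create above. Address and port
 * mirror the log (10.0.0.2:4420); adjust them for your environment. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP listener port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With no listener on the target, this prints something like:
         * connect() failed: errno=111 (Connection refused) */
        printf("connect() failed: errno=%d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}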
00:36:02.065 [... the same failure pattern continues for tqpair=0x7f54c0000b90 (addr=10.0.0.2, port=4420, errno = 111) from 14:51:53.711819 through 14:51:53.723046 ...]
00:36:02.066 [2024-11-02 14:51:53.723181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.066 [2024-11-02 14:51:53.723222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420
00:36:02.066 qpair failed and we were unable to recover it.
00:36:02.066 [... the same failure repeats for tqpair=0x7f54bc000b90 through 14:51:53.725016 ...]
00:36:02.066 [2024-11-02 14:51:53.725143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.066 [2024-11-02 14:51:53.725172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420
00:36:02.066 qpair failed and we were unable to recover it.
00:36:02.067 [... the same failure repeats for tqpair=0x7f54c0000b90 through 14:51:53.726066 ...]
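The tqpair handles in the messages change (0x648340, then 0x7f54c0000b90, then 0x7f54bc000b90), which suggests separate qpair objects, or re-allocations of them, are attempting the same refused connection. In an event-driven socket layer the refusal often surfaces asynchronously rather than from connect() itself; the sketch below shows the generic non-blocking connect pattern (illustrative only, not SPDK's actual implementation) where ECONNREFUSED is read back via SO_ERROR once the socket becomes writable.

/* Generic non-blocking connect pattern (not SPDK's code): a refused
 * connection is reported later as SO_ERROR == ECONNREFUSED (111) when
 * the socket polls writable, which is how per-qpair failures like the
 * ones above can accumulate quickly in a poll loop. Address and port
 * are illustrative. */
#include <arpa/inet.h>
#include <errno.h>
#include <fcntl.h>
#include <netinet/in.h>
#include <poll.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0 &&
        errno == EINPROGRESS) {
        struct pollfd pfd = { .fd = fd, .events = POLLOUT };
        if (poll(&pfd, 1, 1000) > 0) {
            int err = 0;
            socklen_t len = sizeof(err);
            getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &len);
            /* err == 111 (ECONNREFUSED) when nothing listens on the port */
            printf("async connect result: %d (%s)\n", err, strerror(err));
        }
    }
    close(fd);
    return 0;
}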
00:36:02.067 [... the same failure pattern continues for tqpair=0x7f54c0000b90 (addr=10.0.0.2, port=4420, errno = 111) from 14:51:53.726217 through 14:51:53.741523 ...]
00:36:02.069 [2024-11-02 14:51:53.741695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.069 [2024-11-02 14:51:53.741737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420
00:36:02.069 qpair failed and we were unable to recover it.
00:36:02.069 [2024-11-02 14:51:53.741891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.069 [2024-11-02 14:51:53.741917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.069 qpair failed and we were unable to recover it. 00:36:02.069 [2024-11-02 14:51:53.742069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.069 [2024-11-02 14:51:53.742094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.069 qpair failed and we were unable to recover it. 00:36:02.069 [2024-11-02 14:51:53.742239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.069 [2024-11-02 14:51:53.742272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.069 qpair failed and we were unable to recover it. 00:36:02.069 [2024-11-02 14:51:53.742416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.069 [2024-11-02 14:51:53.742460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.069 qpair failed and we were unable to recover it. 00:36:02.069 [2024-11-02 14:51:53.742632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.069 [2024-11-02 14:51:53.742674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.069 qpair failed and we were unable to recover it. 00:36:02.069 [2024-11-02 14:51:53.742807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.069 [2024-11-02 14:51:53.742850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.069 qpair failed and we were unable to recover it. 00:36:02.069 [2024-11-02 14:51:53.743000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.069 [2024-11-02 14:51:53.743026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.069 qpair failed and we were unable to recover it. 00:36:02.069 [2024-11-02 14:51:53.743206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.069 [2024-11-02 14:51:53.743232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.069 qpair failed and we were unable to recover it. 00:36:02.069 [2024-11-02 14:51:53.743403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.069 [2024-11-02 14:51:53.743431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.069 qpair failed and we were unable to recover it. 00:36:02.069 [2024-11-02 14:51:53.743646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.069 [2024-11-02 14:51:53.743689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.069 qpair failed and we were unable to recover it. 
00:36:02.069 [2024-11-02 14:51:53.743864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.069 [2024-11-02 14:51:53.743911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.069 qpair failed and we were unable to recover it. 00:36:02.069 [2024-11-02 14:51:53.744064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.069 [2024-11-02 14:51:53.744091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.069 qpair failed and we were unable to recover it. 00:36:02.069 [2024-11-02 14:51:53.744247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.069 [2024-11-02 14:51:53.744281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.069 qpair failed and we were unable to recover it. 00:36:02.069 [2024-11-02 14:51:53.744463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.069 [2024-11-02 14:51:53.744508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.069 qpair failed and we were unable to recover it. 00:36:02.069 [2024-11-02 14:51:53.744660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.069 [2024-11-02 14:51:53.744702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.069 qpair failed and we were unable to recover it. 00:36:02.069 [2024-11-02 14:51:53.744874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.069 [2024-11-02 14:51:53.744917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.069 qpair failed and we were unable to recover it. 00:36:02.069 [2024-11-02 14:51:53.745063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.069 [2024-11-02 14:51:53.745089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.069 qpair failed and we were unable to recover it. 00:36:02.069 [2024-11-02 14:51:53.745232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.069 [2024-11-02 14:51:53.745271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.069 qpair failed and we were unable to recover it. 00:36:02.069 [2024-11-02 14:51:53.745445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.069 [2024-11-02 14:51:53.745488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.069 qpair failed and we were unable to recover it. 00:36:02.069 [2024-11-02 14:51:53.745658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.069 [2024-11-02 14:51:53.745687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.069 qpair failed and we were unable to recover it. 
00:36:02.069 [2024-11-02 14:51:53.745833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.069 [2024-11-02 14:51:53.745876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.069 qpair failed and we were unable to recover it. 00:36:02.069 [2024-11-02 14:51:53.746052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.069 [2024-11-02 14:51:53.746096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.069 qpair failed and we were unable to recover it. 00:36:02.069 [2024-11-02 14:51:53.746272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.069 [2024-11-02 14:51:53.746315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.069 qpair failed and we were unable to recover it. 00:36:02.069 [2024-11-02 14:51:53.746488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.069 [2024-11-02 14:51:53.746532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.069 qpair failed and we were unable to recover it. 00:36:02.069 [2024-11-02 14:51:53.746730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.069 [2024-11-02 14:51:53.746772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.069 qpair failed and we were unable to recover it. 00:36:02.069 [2024-11-02 14:51:53.746971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.069 [2024-11-02 14:51:53.747014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.069 qpair failed and we were unable to recover it. 00:36:02.069 [2024-11-02 14:51:53.747167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.069 [2024-11-02 14:51:53.747193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.069 qpair failed and we were unable to recover it. 00:36:02.069 [2024-11-02 14:51:53.747389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.069 [2024-11-02 14:51:53.747438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.069 qpair failed and we were unable to recover it. 00:36:02.070 [2024-11-02 14:51:53.747636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.070 [2024-11-02 14:51:53.747681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.070 qpair failed and we were unable to recover it. 00:36:02.070 [2024-11-02 14:51:53.747852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.070 [2024-11-02 14:51:53.747896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.070 qpair failed and we were unable to recover it. 
00:36:02.070 [2024-11-02 14:51:53.748072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.070 [2024-11-02 14:51:53.748098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.070 qpair failed and we were unable to recover it. 00:36:02.070 [2024-11-02 14:51:53.748252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.070 [2024-11-02 14:51:53.748285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.070 qpair failed and we were unable to recover it. 00:36:02.070 [2024-11-02 14:51:53.748484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.070 [2024-11-02 14:51:53.748527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.070 qpair failed and we were unable to recover it. 00:36:02.070 [2024-11-02 14:51:53.748713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.070 [2024-11-02 14:51:53.748739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.070 qpair failed and we were unable to recover it. 00:36:02.070 [2024-11-02 14:51:53.748921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.070 [2024-11-02 14:51:53.748952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.070 qpair failed and we were unable to recover it. 00:36:02.070 [2024-11-02 14:51:53.749081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.070 [2024-11-02 14:51:53.749108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.070 qpair failed and we were unable to recover it. 00:36:02.070 [2024-11-02 14:51:53.749251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.070 [2024-11-02 14:51:53.749284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.070 qpair failed and we were unable to recover it. 00:36:02.070 [2024-11-02 14:51:53.749437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.070 [2024-11-02 14:51:53.749464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.070 qpair failed and we were unable to recover it. 00:36:02.070 [2024-11-02 14:51:53.749613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.070 [2024-11-02 14:51:53.749639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.070 qpair failed and we were unable to recover it. 00:36:02.070 [2024-11-02 14:51:53.749767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.070 [2024-11-02 14:51:53.749793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.070 qpair failed and we were unable to recover it. 
00:36:02.070 [2024-11-02 14:51:53.749965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.070 [2024-11-02 14:51:53.749992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.070 qpair failed and we were unable to recover it. 00:36:02.070 [2024-11-02 14:51:53.750117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.070 [2024-11-02 14:51:53.750143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.070 qpair failed and we were unable to recover it. 00:36:02.070 [2024-11-02 14:51:53.750294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.070 [2024-11-02 14:51:53.750321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.070 qpair failed and we were unable to recover it. 00:36:02.070 [2024-11-02 14:51:53.750471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.070 [2024-11-02 14:51:53.750497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.070 qpair failed and we were unable to recover it. 00:36:02.070 [2024-11-02 14:51:53.750671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.070 [2024-11-02 14:51:53.750713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.070 qpair failed and we were unable to recover it. 00:36:02.070 [2024-11-02 14:51:53.750861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.070 [2024-11-02 14:51:53.750886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.070 qpair failed and we were unable to recover it. 00:36:02.070 [2024-11-02 14:51:53.751037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.070 [2024-11-02 14:51:53.751063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.070 qpair failed and we were unable to recover it. 00:36:02.070 [2024-11-02 14:51:53.751240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.070 [2024-11-02 14:51:53.751274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.070 qpair failed and we were unable to recover it. 00:36:02.070 [2024-11-02 14:51:53.751418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.070 [2024-11-02 14:51:53.751461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.070 qpair failed and we were unable to recover it. 00:36:02.070 [2024-11-02 14:51:53.751639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.070 [2024-11-02 14:51:53.751682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.070 qpair failed and we were unable to recover it. 
00:36:02.070 [2024-11-02 14:51:53.751879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.070 [2024-11-02 14:51:53.751908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.070 qpair failed and we were unable to recover it. 00:36:02.070 [2024-11-02 14:51:53.752099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.070 [2024-11-02 14:51:53.752125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.070 qpair failed and we were unable to recover it. 00:36:02.070 [2024-11-02 14:51:53.752271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.070 [2024-11-02 14:51:53.752297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.070 qpair failed and we were unable to recover it. 00:36:02.070 [2024-11-02 14:51:53.752463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.070 [2024-11-02 14:51:53.752505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.070 qpair failed and we were unable to recover it. 00:36:02.070 [2024-11-02 14:51:53.752651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.070 [2024-11-02 14:51:53.752694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.070 qpair failed and we were unable to recover it. 00:36:02.070 [2024-11-02 14:51:53.752896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.070 [2024-11-02 14:51:53.752939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.070 qpair failed and we were unable to recover it. 00:36:02.070 [2024-11-02 14:51:53.753088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.070 [2024-11-02 14:51:53.753114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.070 qpair failed and we were unable to recover it. 00:36:02.070 [2024-11-02 14:51:53.753278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.070 [2024-11-02 14:51:53.753305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.070 qpair failed and we were unable to recover it. 00:36:02.070 [2024-11-02 14:51:53.753488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.070 [2024-11-02 14:51:53.753513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.070 qpair failed and we were unable to recover it. 00:36:02.070 [2024-11-02 14:51:53.753746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.070 [2024-11-02 14:51:53.753772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.070 qpair failed and we were unable to recover it. 
00:36:02.070 [2024-11-02 14:51:53.753921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.070 [2024-11-02 14:51:53.753947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.070 qpair failed and we were unable to recover it. 00:36:02.070 [2024-11-02 14:51:53.754129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.070 [2024-11-02 14:51:53.754155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.070 qpair failed and we were unable to recover it. 00:36:02.070 [2024-11-02 14:51:53.754271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.070 [2024-11-02 14:51:53.754297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.070 qpair failed and we were unable to recover it. 00:36:02.070 [2024-11-02 14:51:53.754498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.070 [2024-11-02 14:51:53.754541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.070 qpair failed and we were unable to recover it. 00:36:02.070 [2024-11-02 14:51:53.754716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.070 [2024-11-02 14:51:53.754764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.070 qpair failed and we were unable to recover it. 00:36:02.070 [2024-11-02 14:51:53.754938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.070 [2024-11-02 14:51:53.754963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.070 qpair failed and we were unable to recover it. 00:36:02.070 [2024-11-02 14:51:53.755110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.071 [2024-11-02 14:51:53.755135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.071 qpair failed and we were unable to recover it. 00:36:02.071 [2024-11-02 14:51:53.755306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.071 [2024-11-02 14:51:53.755336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.071 qpair failed and we were unable to recover it. 00:36:02.071 [2024-11-02 14:51:53.755499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.071 [2024-11-02 14:51:53.755541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.071 qpair failed and we were unable to recover it. 00:36:02.071 [2024-11-02 14:51:53.755741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.071 [2024-11-02 14:51:53.755783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.071 qpair failed and we were unable to recover it. 
00:36:02.071 [2024-11-02 14:51:53.755937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.071 [2024-11-02 14:51:53.755963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.071 qpair failed and we were unable to recover it. 00:36:02.071 [2024-11-02 14:51:53.756081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.071 [2024-11-02 14:51:53.756107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.071 qpair failed and we were unable to recover it. 00:36:02.071 [2024-11-02 14:51:53.756284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.071 [2024-11-02 14:51:53.756310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.071 qpair failed and we were unable to recover it. 00:36:02.071 [2024-11-02 14:51:53.756481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.071 [2024-11-02 14:51:53.756524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.071 qpair failed and we were unable to recover it. 00:36:02.071 [2024-11-02 14:51:53.756704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.071 [2024-11-02 14:51:53.756752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.071 qpair failed and we were unable to recover it. 00:36:02.071 [2024-11-02 14:51:53.756872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.071 [2024-11-02 14:51:53.756897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.071 qpair failed and we were unable to recover it. 00:36:02.071 [2024-11-02 14:51:53.757076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.071 [2024-11-02 14:51:53.757101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.071 qpair failed and we were unable to recover it. 00:36:02.071 [2024-11-02 14:51:53.757248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.071 [2024-11-02 14:51:53.757291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.071 qpair failed and we were unable to recover it. 00:36:02.071 [2024-11-02 14:51:53.757456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.071 [2024-11-02 14:51:53.757499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.071 qpair failed and we were unable to recover it. 00:36:02.071 [2024-11-02 14:51:53.757703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.071 [2024-11-02 14:51:53.757747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.071 qpair failed and we were unable to recover it. 
00:36:02.071 [2024-11-02 14:51:53.757891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.071 [2024-11-02 14:51:53.757934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.071 qpair failed and we were unable to recover it. 00:36:02.071 [2024-11-02 14:51:53.758049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.071 [2024-11-02 14:51:53.758074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.071 qpair failed and we were unable to recover it. 00:36:02.071 [2024-11-02 14:51:53.758238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.071 [2024-11-02 14:51:53.758272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.071 qpair failed and we were unable to recover it. 00:36:02.071 [2024-11-02 14:51:53.758473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.071 [2024-11-02 14:51:53.758503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.071 qpair failed and we were unable to recover it. 00:36:02.071 [2024-11-02 14:51:53.758681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.071 [2024-11-02 14:51:53.758724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.071 qpair failed and we were unable to recover it. 00:36:02.071 [2024-11-02 14:51:53.758875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.071 [2024-11-02 14:51:53.758901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.071 qpair failed and we were unable to recover it. 00:36:02.071 [2024-11-02 14:51:53.759016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.071 [2024-11-02 14:51:53.759042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.071 qpair failed and we were unable to recover it. 00:36:02.071 [2024-11-02 14:51:53.759197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.071 [2024-11-02 14:51:53.759222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.071 qpair failed and we were unable to recover it. 00:36:02.071 [2024-11-02 14:51:53.759384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.071 [2024-11-02 14:51:53.759429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.071 qpair failed and we were unable to recover it. 00:36:02.071 [2024-11-02 14:51:53.759574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.071 [2024-11-02 14:51:53.759621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.071 qpair failed and we were unable to recover it. 
00:36:02.071 [2024-11-02 14:51:53.759789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.071 [2024-11-02 14:51:53.759830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.071 qpair failed and we were unable to recover it. 00:36:02.071 [2024-11-02 14:51:53.759956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.071 [2024-11-02 14:51:53.759983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.071 qpair failed and we were unable to recover it. 00:36:02.071 [2024-11-02 14:51:53.760137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.071 [2024-11-02 14:51:53.760163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.071 qpair failed and we were unable to recover it. 00:36:02.071 [2024-11-02 14:51:53.760315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.071 [2024-11-02 14:51:53.760342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.071 qpair failed and we were unable to recover it. 00:36:02.071 [2024-11-02 14:51:53.760491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.071 [2024-11-02 14:51:53.760518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.071 qpair failed and we were unable to recover it. 00:36:02.071 [2024-11-02 14:51:53.760719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.071 [2024-11-02 14:51:53.760761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.071 qpair failed and we were unable to recover it. 00:36:02.071 [2024-11-02 14:51:53.760878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.071 [2024-11-02 14:51:53.760904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.071 qpair failed and we were unable to recover it. 00:36:02.071 [2024-11-02 14:51:53.761052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.071 [2024-11-02 14:51:53.761078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.071 qpair failed and we were unable to recover it. 00:36:02.071 [2024-11-02 14:51:53.761242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.071 [2024-11-02 14:51:53.761274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.071 qpair failed and we were unable to recover it. 00:36:02.071 [2024-11-02 14:51:53.761465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.071 [2024-11-02 14:51:53.761492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.071 qpair failed and we were unable to recover it. 
00:36:02.071 [2024-11-02 14:51:53.761690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.071 [2024-11-02 14:51:53.761719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.071 qpair failed and we were unable to recover it. 00:36:02.072 [2024-11-02 14:51:53.761938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.072 [2024-11-02 14:51:53.761981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.072 qpair failed and we were unable to recover it. 00:36:02.072 [2024-11-02 14:51:53.762140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.072 [2024-11-02 14:51:53.762166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.072 qpair failed and we were unable to recover it. 00:36:02.072 [2024-11-02 14:51:53.762330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.072 [2024-11-02 14:51:53.762375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.072 qpair failed and we were unable to recover it. 00:36:02.072 [2024-11-02 14:51:53.762571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.072 [2024-11-02 14:51:53.762615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.072 qpair failed and we were unable to recover it. 00:36:02.072 [2024-11-02 14:51:53.762814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.072 [2024-11-02 14:51:53.762857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.072 qpair failed and we were unable to recover it. 00:36:02.072 [2024-11-02 14:51:53.763006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.072 [2024-11-02 14:51:53.763032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.072 qpair failed and we were unable to recover it. 00:36:02.072 [2024-11-02 14:51:53.763150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.072 [2024-11-02 14:51:53.763177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.072 qpair failed and we were unable to recover it. 00:36:02.072 [2024-11-02 14:51:53.763375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.072 [2024-11-02 14:51:53.763418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.072 qpair failed and we were unable to recover it. 00:36:02.072 [2024-11-02 14:51:53.763559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.072 [2024-11-02 14:51:53.763602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.072 qpair failed and we were unable to recover it. 
00:36:02.072 [2024-11-02 14:51:53.763771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.072 [2024-11-02 14:51:53.763814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.072 qpair failed and we were unable to recover it. 00:36:02.072 [2024-11-02 14:51:53.763939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.072 [2024-11-02 14:51:53.763965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.072 qpair failed and we were unable to recover it. 00:36:02.072 [2024-11-02 14:51:53.764084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.072 [2024-11-02 14:51:53.764111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.072 qpair failed and we were unable to recover it. 00:36:02.072 [2024-11-02 14:51:53.764317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.072 [2024-11-02 14:51:53.764362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.072 qpair failed and we were unable to recover it. 00:36:02.072 [2024-11-02 14:51:53.764552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.072 [2024-11-02 14:51:53.764602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.072 qpair failed and we were unable to recover it. 00:36:02.072 [2024-11-02 14:51:53.764786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.072 [2024-11-02 14:51:53.764812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.072 qpair failed and we were unable to recover it. 00:36:02.072 [2024-11-02 14:51:53.764963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.072 [2024-11-02 14:51:53.764988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.072 qpair failed and we were unable to recover it. 00:36:02.072 [2024-11-02 14:51:53.765129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.072 [2024-11-02 14:51:53.765155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.072 qpair failed and we were unable to recover it. 00:36:02.072 [2024-11-02 14:51:53.765321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.072 [2024-11-02 14:51:53.765351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.072 qpair failed and we were unable to recover it. 00:36:02.072 [2024-11-02 14:51:53.765542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.072 [2024-11-02 14:51:53.765584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.072 qpair failed and we were unable to recover it. 
00:36:02.072 [2024-11-02 14:51:53.765762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.072 [2024-11-02 14:51:53.765788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.072 qpair failed and we were unable to recover it. 00:36:02.072 [2024-11-02 14:51:53.765938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.072 [2024-11-02 14:51:53.765964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.072 qpair failed and we were unable to recover it. 00:36:02.072 [2024-11-02 14:51:53.766090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.072 [2024-11-02 14:51:53.766116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.072 qpair failed and we were unable to recover it. 00:36:02.072 [2024-11-02 14:51:53.766268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.072 [2024-11-02 14:51:53.766295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.072 qpair failed and we were unable to recover it. 00:36:02.072 [2024-11-02 14:51:53.766421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.072 [2024-11-02 14:51:53.766448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.072 qpair failed and we were unable to recover it. 00:36:02.072 [2024-11-02 14:51:53.766564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.072 [2024-11-02 14:51:53.766590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.072 qpair failed and we were unable to recover it. 00:36:02.072 [2024-11-02 14:51:53.766741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.072 [2024-11-02 14:51:53.766766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.072 qpair failed and we were unable to recover it. 00:36:02.072 [2024-11-02 14:51:53.766954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.072 [2024-11-02 14:51:53.766980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.072 qpair failed and we were unable to recover it. 00:36:02.072 [2024-11-02 14:51:53.767135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.072 [2024-11-02 14:51:53.767161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.072 qpair failed and we were unable to recover it. 00:36:02.072 [2024-11-02 14:51:53.767311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.072 [2024-11-02 14:51:53.767337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.072 qpair failed and we were unable to recover it. 
00:36:02.072 [2024-11-02 14:51:53.767486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.072 [2024-11-02 14:51:53.767512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.072 qpair failed and we were unable to recover it. 00:36:02.072 [2024-11-02 14:51:53.767629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.072 [2024-11-02 14:51:53.767656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.072 qpair failed and we were unable to recover it. 00:36:02.072 [2024-11-02 14:51:53.767783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.072 [2024-11-02 14:51:53.767808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.072 qpair failed and we were unable to recover it. 00:36:02.072 [2024-11-02 14:51:53.767988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.072 [2024-11-02 14:51:53.768014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.072 qpair failed and we were unable to recover it. 00:36:02.072 [2024-11-02 14:51:53.768145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.072 [2024-11-02 14:51:53.768171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.072 qpair failed and we were unable to recover it. 00:36:02.072 [2024-11-02 14:51:53.768317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.072 [2024-11-02 14:51:53.768343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.072 qpair failed and we were unable to recover it. 00:36:02.072 [2024-11-02 14:51:53.768460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.072 [2024-11-02 14:51:53.768486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.072 qpair failed and we were unable to recover it. 00:36:02.072 [2024-11-02 14:51:53.768663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.072 [2024-11-02 14:51:53.768688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.072 qpair failed and we were unable to recover it. 00:36:02.072 [2024-11-02 14:51:53.768834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.072 [2024-11-02 14:51:53.768860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.072 qpair failed and we were unable to recover it. 00:36:02.072 [2024-11-02 14:51:53.769004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.073 [2024-11-02 14:51:53.769030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.073 qpair failed and we were unable to recover it. 
00:36:02.073 [2024-11-02 14:51:53.769152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.073 [2024-11-02 14:51:53.769178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.073 qpair failed and we were unable to recover it. 00:36:02.073 [2024-11-02 14:51:53.769315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.073 [2024-11-02 14:51:53.769354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.073 qpair failed and we were unable to recover it. 00:36:02.073 [2024-11-02 14:51:53.769511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.073 [2024-11-02 14:51:53.769540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.073 qpair failed and we were unable to recover it. 00:36:02.073 [2024-11-02 14:51:53.769707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.073 [2024-11-02 14:51:53.769736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.073 qpair failed and we were unable to recover it. 00:36:02.073 [2024-11-02 14:51:53.769901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.073 [2024-11-02 14:51:53.769930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.073 qpair failed and we were unable to recover it. 00:36:02.073 [2024-11-02 14:51:53.770103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.073 [2024-11-02 14:51:53.770129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.073 qpair failed and we were unable to recover it. 00:36:02.073 [2024-11-02 14:51:53.770272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.073 [2024-11-02 14:51:53.770316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.073 qpair failed and we were unable to recover it. 00:36:02.073 [2024-11-02 14:51:53.770453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.073 [2024-11-02 14:51:53.770482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.073 qpair failed and we were unable to recover it. 00:36:02.073 [2024-11-02 14:51:53.770624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.073 [2024-11-02 14:51:53.770653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.073 qpair failed and we were unable to recover it. 00:36:02.073 [2024-11-02 14:51:53.770814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.073 [2024-11-02 14:51:53.770842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.073 qpair failed and we were unable to recover it. 
00:36:02.073 [2024-11-02 14:51:53.770982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.073 [2024-11-02 14:51:53.771010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.073 qpair failed and we were unable to recover it. 00:36:02.073 [2024-11-02 14:51:53.771162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.073 [2024-11-02 14:51:53.771188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.073 qpair failed and we were unable to recover it. 00:36:02.073 [2024-11-02 14:51:53.771367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.073 [2024-11-02 14:51:53.771411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.073 qpair failed and we were unable to recover it. 00:36:02.073 [2024-11-02 14:51:53.771551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.073 [2024-11-02 14:51:53.771595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.073 qpair failed and we were unable to recover it. 00:36:02.073 [2024-11-02 14:51:53.771810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.073 [2024-11-02 14:51:53.771841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.073 qpair failed and we were unable to recover it. 00:36:02.073 [2024-11-02 14:51:53.771995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.073 [2024-11-02 14:51:53.772020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.073 qpair failed and we were unable to recover it. 00:36:02.073 [2024-11-02 14:51:53.772168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.073 [2024-11-02 14:51:53.772193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.073 qpair failed and we were unable to recover it. 00:36:02.073 [2024-11-02 14:51:53.772350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.073 [2024-11-02 14:51:53.772377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.073 qpair failed and we were unable to recover it. 00:36:02.073 [2024-11-02 14:51:53.772562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.073 [2024-11-02 14:51:53.772588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.073 qpair failed and we were unable to recover it. 00:36:02.073 [2024-11-02 14:51:53.772743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.073 [2024-11-02 14:51:53.772768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.073 qpair failed and we were unable to recover it. 
00:36:02.073 [2024-11-02 14:51:53.772948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.073 [2024-11-02 14:51:53.772974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.073 qpair failed and we were unable to recover it. 00:36:02.073 [2024-11-02 14:51:53.773128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.073 [2024-11-02 14:51:53.773155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.073 qpair failed and we were unable to recover it. 00:36:02.073 [2024-11-02 14:51:53.773321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.073 [2024-11-02 14:51:53.773365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.073 qpair failed and we were unable to recover it. 00:36:02.073 [2024-11-02 14:51:53.773556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.073 [2024-11-02 14:51:53.773582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.073 qpair failed and we were unable to recover it. 00:36:02.073 [2024-11-02 14:51:53.773755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.073 [2024-11-02 14:51:53.773780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.073 qpair failed and we were unable to recover it. 00:36:02.073 [2024-11-02 14:51:53.773927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.073 [2024-11-02 14:51:53.773952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.073 qpair failed and we were unable to recover it. 00:36:02.073 [2024-11-02 14:51:53.774128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.073 [2024-11-02 14:51:53.774153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.073 qpair failed and we were unable to recover it. 00:36:02.073 [2024-11-02 14:51:53.774328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.073 [2024-11-02 14:51:53.774354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.073 qpair failed and we were unable to recover it. 00:36:02.073 [2024-11-02 14:51:53.774561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.073 [2024-11-02 14:51:53.774604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.073 qpair failed and we were unable to recover it. 00:36:02.073 [2024-11-02 14:51:53.774816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.073 [2024-11-02 14:51:53.774861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.073 qpair failed and we were unable to recover it. 
00:36:02.073 [2024-11-02 14:51:53.774998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.073 [2024-11-02 14:51:53.775040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.073 qpair failed and we were unable to recover it. 00:36:02.073 [2024-11-02 14:51:53.775192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.073 [2024-11-02 14:51:53.775218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.073 qpair failed and we were unable to recover it. 00:36:02.073 [2024-11-02 14:51:53.775428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.073 [2024-11-02 14:51:53.775472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.073 qpair failed and we were unable to recover it. 00:36:02.073 [2024-11-02 14:51:53.775607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.073 [2024-11-02 14:51:53.775649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.073 qpair failed and we were unable to recover it. 00:36:02.073 [2024-11-02 14:51:53.775847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.073 [2024-11-02 14:51:53.775890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.073 qpair failed and we were unable to recover it. 00:36:02.073 [2024-11-02 14:51:53.776021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.073 [2024-11-02 14:51:53.776063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.073 qpair failed and we were unable to recover it. 00:36:02.073 [2024-11-02 14:51:53.776216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.073 [2024-11-02 14:51:53.776241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.073 qpair failed and we were unable to recover it. 00:36:02.073 [2024-11-02 14:51:53.776427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.074 [2024-11-02 14:51:53.776470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.074 qpair failed and we were unable to recover it. 00:36:02.074 [2024-11-02 14:51:53.776616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.074 [2024-11-02 14:51:53.776660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.074 qpair failed and we were unable to recover it. 00:36:02.074 [2024-11-02 14:51:53.776857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.074 [2024-11-02 14:51:53.776899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.074 qpair failed and we were unable to recover it. 
00:36:02.074 [2024-11-02 14:51:53.777053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.074 [2024-11-02 14:51:53.777078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.074 qpair failed and we were unable to recover it. 00:36:02.074 [2024-11-02 14:51:53.777205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.074 [2024-11-02 14:51:53.777232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.074 qpair failed and we were unable to recover it. 00:36:02.074 [2024-11-02 14:51:53.777413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.074 [2024-11-02 14:51:53.777457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.074 qpair failed and we were unable to recover it. 00:36:02.074 [2024-11-02 14:51:53.777604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.074 [2024-11-02 14:51:53.777629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.074 qpair failed and we were unable to recover it. 00:36:02.074 [2024-11-02 14:51:53.777842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.074 [2024-11-02 14:51:53.777886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.074 qpair failed and we were unable to recover it. 00:36:02.074 [2024-11-02 14:51:53.778030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.074 [2024-11-02 14:51:53.778055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.074 qpair failed and we were unable to recover it. 00:36:02.074 [2024-11-02 14:51:53.778232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.074 [2024-11-02 14:51:53.778266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.074 qpair failed and we were unable to recover it. 00:36:02.074 [2024-11-02 14:51:53.778440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.074 [2024-11-02 14:51:53.778484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.074 qpair failed and we were unable to recover it. 00:36:02.074 [2024-11-02 14:51:53.778696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.074 [2024-11-02 14:51:53.778738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.074 qpair failed and we were unable to recover it. 00:36:02.074 [2024-11-02 14:51:53.778877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.074 [2024-11-02 14:51:53.778906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.074 qpair failed and we were unable to recover it. 
00:36:02.074 [2024-11-02 14:51:53.779072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.074 [2024-11-02 14:51:53.779098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.074 qpair failed and we were unable to recover it. 00:36:02.074 [2024-11-02 14:51:53.779249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.074 [2024-11-02 14:51:53.779282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.074 qpair failed and we were unable to recover it. 00:36:02.074 [2024-11-02 14:51:53.779457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.074 [2024-11-02 14:51:53.779483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.074 qpair failed and we were unable to recover it. 00:36:02.074 [2024-11-02 14:51:53.779685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.074 [2024-11-02 14:51:53.779729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.074 qpair failed and we were unable to recover it. 00:36:02.074 [2024-11-02 14:51:53.779872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.074 [2024-11-02 14:51:53.779920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.074 qpair failed and we were unable to recover it. 00:36:02.074 [2024-11-02 14:51:53.780042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.074 [2024-11-02 14:51:53.780068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.074 qpair failed and we were unable to recover it. 00:36:02.074 [2024-11-02 14:51:53.780207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.074 [2024-11-02 14:51:53.780232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.074 qpair failed and we were unable to recover it. 00:36:02.074 [2024-11-02 14:51:53.780396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.074 [2024-11-02 14:51:53.780440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.074 qpair failed and we were unable to recover it. 00:36:02.074 [2024-11-02 14:51:53.780705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.074 [2024-11-02 14:51:53.780748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.074 qpair failed and we were unable to recover it. 00:36:02.074 [2024-11-02 14:51:53.780965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.074 [2024-11-02 14:51:53.781007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.074 qpair failed and we were unable to recover it. 
00:36:02.074 [2024-11-02 14:51:53.781155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.074 [2024-11-02 14:51:53.781181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.074 qpair failed and we were unable to recover it. 00:36:02.074 [2024-11-02 14:51:53.781347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.074 [2024-11-02 14:51:53.781393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.074 qpair failed and we were unable to recover it. 00:36:02.074 [2024-11-02 14:51:53.781574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.074 [2024-11-02 14:51:53.781617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.074 qpair failed and we were unable to recover it. 00:36:02.074 [2024-11-02 14:51:53.781802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.074 [2024-11-02 14:51:53.781845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.074 qpair failed and we were unable to recover it. 00:36:02.074 [2024-11-02 14:51:53.781999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.074 [2024-11-02 14:51:53.782025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.074 qpair failed and we were unable to recover it. 00:36:02.074 [2024-11-02 14:51:53.782174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.074 [2024-11-02 14:51:53.782200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.074 qpair failed and we were unable to recover it. 00:36:02.074 [2024-11-02 14:51:53.782350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.074 [2024-11-02 14:51:53.782397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.074 qpair failed and we were unable to recover it. 00:36:02.074 [2024-11-02 14:51:53.782561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.074 [2024-11-02 14:51:53.782603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.074 qpair failed and we were unable to recover it. 00:36:02.074 [2024-11-02 14:51:53.782789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.074 [2024-11-02 14:51:53.782832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.074 qpair failed and we were unable to recover it. 00:36:02.074 [2024-11-02 14:51:53.782982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.074 [2024-11-02 14:51:53.783007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.074 qpair failed and we were unable to recover it. 
00:36:02.074 [2024-11-02 14:51:53.783153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.074 [2024-11-02 14:51:53.783180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.074 qpair failed and we were unable to recover it. 00:36:02.074 [2024-11-02 14:51:53.783322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.074 [2024-11-02 14:51:53.783366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.074 qpair failed and we were unable to recover it. 00:36:02.074 [2024-11-02 14:51:53.783507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.074 [2024-11-02 14:51:53.783557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.074 qpair failed and we were unable to recover it. 00:36:02.074 [2024-11-02 14:51:53.783756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.074 [2024-11-02 14:51:53.783800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.074 qpair failed and we were unable to recover it. 00:36:02.074 [2024-11-02 14:51:53.783976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.074 [2024-11-02 14:51:53.784001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.074 qpair failed and we were unable to recover it. 00:36:02.074 [2024-11-02 14:51:53.784115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.074 [2024-11-02 14:51:53.784140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.075 qpair failed and we were unable to recover it. 00:36:02.075 [2024-11-02 14:51:53.784308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.075 [2024-11-02 14:51:53.784338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.075 qpair failed and we were unable to recover it. 00:36:02.075 [2024-11-02 14:51:53.784556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.075 [2024-11-02 14:51:53.784600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.075 qpair failed and we were unable to recover it. 00:36:02.075 [2024-11-02 14:51:53.784787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.075 [2024-11-02 14:51:53.784812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.075 qpair failed and we were unable to recover it. 00:36:02.075 [2024-11-02 14:51:53.784961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.075 [2024-11-02 14:51:53.784987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.075 qpair failed and we were unable to recover it. 
00:36:02.075 [2024-11-02 14:51:53.785112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.075 [2024-11-02 14:51:53.785139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.075 qpair failed and we were unable to recover it. 00:36:02.075 [2024-11-02 14:51:53.785309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.075 [2024-11-02 14:51:53.785348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.075 qpair failed and we were unable to recover it. 00:36:02.075 [2024-11-02 14:51:53.785553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.075 [2024-11-02 14:51:53.785584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.075 qpair failed and we were unable to recover it. 00:36:02.075 [2024-11-02 14:51:53.785724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.075 [2024-11-02 14:51:53.785754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.075 qpair failed and we were unable to recover it. 00:36:02.075 [2024-11-02 14:51:53.785915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.075 [2024-11-02 14:51:53.785944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.075 qpair failed and we were unable to recover it. 00:36:02.075 [2024-11-02 14:51:53.786116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.075 [2024-11-02 14:51:53.786142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.075 qpair failed and we were unable to recover it. 00:36:02.075 [2024-11-02 14:51:53.786298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.075 [2024-11-02 14:51:53.786325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.075 qpair failed and we were unable to recover it. 00:36:02.075 [2024-11-02 14:51:53.786498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.075 [2024-11-02 14:51:53.786527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.075 qpair failed and we were unable to recover it. 00:36:02.075 [2024-11-02 14:51:53.786717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.075 [2024-11-02 14:51:53.786746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.075 qpair failed and we were unable to recover it. 00:36:02.075 [2024-11-02 14:51:53.786903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.075 [2024-11-02 14:51:53.786932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.075 qpair failed and we were unable to recover it. 
00:36:02.075 [2024-11-02 14:51:53.787094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.075 [2024-11-02 14:51:53.787124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.075 qpair failed and we were unable to recover it. 00:36:02.075 [2024-11-02 14:51:53.787305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.075 [2024-11-02 14:51:53.787333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.075 qpair failed and we were unable to recover it. 00:36:02.075 [2024-11-02 14:51:53.787512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.075 [2024-11-02 14:51:53.787538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.075 qpair failed and we were unable to recover it. 00:36:02.075 [2024-11-02 14:51:53.787682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.075 [2024-11-02 14:51:53.787712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.075 qpair failed and we were unable to recover it. 00:36:02.075 [2024-11-02 14:51:53.787877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.075 [2024-11-02 14:51:53.787913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.075 qpair failed and we were unable to recover it. 00:36:02.075 [2024-11-02 14:51:53.788080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.075 [2024-11-02 14:51:53.788110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.075 qpair failed and we were unable to recover it. 00:36:02.075 [2024-11-02 14:51:53.788287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.075 [2024-11-02 14:51:53.788317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.075 qpair failed and we were unable to recover it. 00:36:02.075 [2024-11-02 14:51:53.788487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.075 [2024-11-02 14:51:53.788530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.075 qpair failed and we were unable to recover it. 00:36:02.075 [2024-11-02 14:51:53.788718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.075 [2024-11-02 14:51:53.788761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.075 qpair failed and we were unable to recover it. 00:36:02.075 [2024-11-02 14:51:53.788962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.075 [2024-11-02 14:51:53.789007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.075 qpair failed and we were unable to recover it. 
00:36:02.075 [2024-11-02 14:51:53.789134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.075 [2024-11-02 14:51:53.789159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.075 qpair failed and we were unable to recover it. 00:36:02.075 [2024-11-02 14:51:53.789328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.075 [2024-11-02 14:51:53.789372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.075 qpair failed and we were unable to recover it. 00:36:02.075 [2024-11-02 14:51:53.789554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.075 [2024-11-02 14:51:53.789581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.075 qpair failed and we were unable to recover it. 00:36:02.075 [2024-11-02 14:51:53.789747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.075 [2024-11-02 14:51:53.789791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.075 qpair failed and we were unable to recover it. 00:36:02.075 [2024-11-02 14:51:53.789922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.075 [2024-11-02 14:51:53.789967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.075 qpair failed and we were unable to recover it. 00:36:02.075 [2024-11-02 14:51:53.790138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.075 [2024-11-02 14:51:53.790169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.075 qpair failed and we were unable to recover it. 00:36:02.075 [2024-11-02 14:51:53.790298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.075 [2024-11-02 14:51:53.790326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.075 qpair failed and we were unable to recover it. 00:36:02.075 [2024-11-02 14:51:53.790455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.075 [2024-11-02 14:51:53.790483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.075 qpair failed and we were unable to recover it. 00:36:02.075 [2024-11-02 14:51:53.790638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.075 [2024-11-02 14:51:53.790664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.075 qpair failed and we were unable to recover it. 00:36:02.075 [2024-11-02 14:51:53.790786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.075 [2024-11-02 14:51:53.790813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.075 qpair failed and we were unable to recover it. 
00:36:02.075 [2024-11-02 14:51:53.791009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.075 [2024-11-02 14:51:53.791039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.075 qpair failed and we were unable to recover it. 00:36:02.075 [2024-11-02 14:51:53.791234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.075 [2024-11-02 14:51:53.791269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.075 qpair failed and we were unable to recover it. 00:36:02.075 [2024-11-02 14:51:53.791449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.075 [2024-11-02 14:51:53.791475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.075 qpair failed and we were unable to recover it. 00:36:02.075 [2024-11-02 14:51:53.791623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.075 [2024-11-02 14:51:53.791653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.075 qpair failed and we were unable to recover it. 00:36:02.076 [2024-11-02 14:51:53.791844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.076 [2024-11-02 14:51:53.791873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.076 qpair failed and we were unable to recover it. 00:36:02.076 [2024-11-02 14:51:53.792033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.076 [2024-11-02 14:51:53.792063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.076 qpair failed and we were unable to recover it. 00:36:02.076 [2024-11-02 14:51:53.792262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.076 [2024-11-02 14:51:53.792307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.076 qpair failed and we were unable to recover it. 00:36:02.076 [2024-11-02 14:51:53.792463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.076 [2024-11-02 14:51:53.792491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.076 qpair failed and we were unable to recover it. 00:36:02.076 [2024-11-02 14:51:53.792634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.076 [2024-11-02 14:51:53.792676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.076 qpair failed and we were unable to recover it. 00:36:02.076 [2024-11-02 14:51:53.792877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.076 [2024-11-02 14:51:53.792906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.076 qpair failed and we were unable to recover it. 
00:36:02.076 [2024-11-02 14:51:53.793072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.076 [2024-11-02 14:51:53.793102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.076 qpair failed and we were unable to recover it. 00:36:02.076 [2024-11-02 14:51:53.793277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.076 [2024-11-02 14:51:53.793304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.076 qpair failed and we were unable to recover it. 00:36:02.076 [2024-11-02 14:51:53.793484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.076 [2024-11-02 14:51:53.793510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.076 qpair failed and we were unable to recover it. 00:36:02.076 [2024-11-02 14:51:53.793680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.076 [2024-11-02 14:51:53.793709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.076 qpair failed and we were unable to recover it. 00:36:02.076 [2024-11-02 14:51:53.793900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.076 [2024-11-02 14:51:53.793929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.076 qpair failed and we were unable to recover it. 00:36:02.076 [2024-11-02 14:51:53.794069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.076 [2024-11-02 14:51:53.794099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.076 qpair failed and we were unable to recover it. 00:36:02.076 [2024-11-02 14:51:53.794304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.076 [2024-11-02 14:51:53.794334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.076 qpair failed and we were unable to recover it. 00:36:02.076 [2024-11-02 14:51:53.794460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.076 [2024-11-02 14:51:53.794486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.076 qpair failed and we were unable to recover it. 00:36:02.076 [2024-11-02 14:51:53.794623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.076 [2024-11-02 14:51:53.794667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.076 qpair failed and we were unable to recover it. 00:36:02.076 [2024-11-02 14:51:53.794835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.076 [2024-11-02 14:51:53.794878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.076 qpair failed and we were unable to recover it. 
00:36:02.076 [2024-11-02 14:51:53.795080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.076 [2024-11-02 14:51:53.795124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.076 qpair failed and we were unable to recover it. 00:36:02.076 [2024-11-02 14:51:53.795248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.076 [2024-11-02 14:51:53.795284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.076 qpair failed and we were unable to recover it. 00:36:02.076 [2024-11-02 14:51:53.795479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.076 [2024-11-02 14:51:53.795523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.076 qpair failed and we were unable to recover it. 00:36:02.076 [2024-11-02 14:51:53.795688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.076 [2024-11-02 14:51:53.795731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.076 qpair failed and we were unable to recover it. 00:36:02.076 [2024-11-02 14:51:53.795907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.076 [2024-11-02 14:51:53.795952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.076 qpair failed and we were unable to recover it. 00:36:02.076 [2024-11-02 14:51:53.796108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.076 [2024-11-02 14:51:53.796137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.076 qpair failed and we were unable to recover it. 00:36:02.076 [2024-11-02 14:51:53.796272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.076 [2024-11-02 14:51:53.796300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.076 qpair failed and we were unable to recover it. 00:36:02.076 [2024-11-02 14:51:53.796477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.076 [2024-11-02 14:51:53.796503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.076 qpair failed and we were unable to recover it. 00:36:02.076 [2024-11-02 14:51:53.796715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.076 [2024-11-02 14:51:53.796745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.076 qpair failed and we were unable to recover it. 00:36:02.076 [2024-11-02 14:51:53.796914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.076 [2024-11-02 14:51:53.796943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.076 qpair failed and we were unable to recover it. 
00:36:02.076 [2024-11-02 14:51:53.797105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.076 [2024-11-02 14:51:53.797135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.076 qpair failed and we were unable to recover it. 00:36:02.076 [2024-11-02 14:51:53.797318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.076 [2024-11-02 14:51:53.797347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.076 qpair failed and we were unable to recover it. 00:36:02.076 [2024-11-02 14:51:53.797515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.076 [2024-11-02 14:51:53.797559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.076 qpair failed and we were unable to recover it. 00:36:02.076 [2024-11-02 14:51:53.797708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.076 [2024-11-02 14:51:53.797752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.076 qpair failed and we were unable to recover it. 00:36:02.076 [2024-11-02 14:51:53.797916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.076 [2024-11-02 14:51:53.797961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.076 qpair failed and we were unable to recover it. 00:36:02.076 [2024-11-02 14:51:53.798106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.076 [2024-11-02 14:51:53.798132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.076 qpair failed and we were unable to recover it. 00:36:02.076 [2024-11-02 14:51:53.798263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.076 [2024-11-02 14:51:53.798290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.076 qpair failed and we were unable to recover it. 00:36:02.076 [2024-11-02 14:51:53.798467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.076 [2024-11-02 14:51:53.798499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.076 qpair failed and we were unable to recover it. 00:36:02.077 [2024-11-02 14:51:53.798672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.077 [2024-11-02 14:51:53.798702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.077 qpair failed and we were unable to recover it. 00:36:02.077 [2024-11-02 14:51:53.798880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.077 [2024-11-02 14:51:53.798909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.077 qpair failed and we were unable to recover it. 
00:36:02.077 [2024-11-02 14:51:53.799079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.077 [2024-11-02 14:51:53.799107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.077 qpair failed and we were unable to recover it. 00:36:02.077 [2024-11-02 14:51:53.799278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.077 [2024-11-02 14:51:53.799322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.077 qpair failed and we were unable to recover it. 00:36:02.077 [2024-11-02 14:51:53.799501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.077 [2024-11-02 14:51:53.799528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.077 qpair failed and we were unable to recover it. 00:36:02.077 [2024-11-02 14:51:53.799671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.077 [2024-11-02 14:51:53.799701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.077 qpair failed and we were unable to recover it. 00:36:02.077 [2024-11-02 14:51:53.799894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.077 [2024-11-02 14:51:53.799924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.077 qpair failed and we were unable to recover it. 00:36:02.077 [2024-11-02 14:51:53.800101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.077 [2024-11-02 14:51:53.800130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.077 qpair failed and we were unable to recover it. 00:36:02.077 [2024-11-02 14:51:53.800313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.077 [2024-11-02 14:51:53.800343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.077 qpair failed and we were unable to recover it. 00:36:02.077 [2024-11-02 14:51:53.800498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.077 [2024-11-02 14:51:53.800525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.077 qpair failed and we were unable to recover it. 00:36:02.077 [2024-11-02 14:51:53.800726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.077 [2024-11-02 14:51:53.800770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.077 qpair failed and we were unable to recover it. 00:36:02.077 [2024-11-02 14:51:53.800917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.077 [2024-11-02 14:51:53.800961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.077 qpair failed and we were unable to recover it. 
00:36:02.077 [2024-11-02 14:51:53.801121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.077 [2024-11-02 14:51:53.801164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.077 qpair failed and we were unable to recover it. 00:36:02.077 [2024-11-02 14:51:53.801340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.077 [2024-11-02 14:51:53.801375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.077 qpair failed and we were unable to recover it. 00:36:02.077 [2024-11-02 14:51:53.801548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.077 [2024-11-02 14:51:53.801592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.077 qpair failed and we were unable to recover it. 00:36:02.077 [2024-11-02 14:51:53.801765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.077 [2024-11-02 14:51:53.801808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.077 qpair failed and we were unable to recover it. 00:36:02.077 [2024-11-02 14:51:53.801955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.077 [2024-11-02 14:51:53.801982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.077 qpair failed and we were unable to recover it. 00:36:02.077 [2024-11-02 14:51:53.802134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.077 [2024-11-02 14:51:53.802162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.077 qpair failed and we were unable to recover it. 00:36:02.077 [2024-11-02 14:51:53.802308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.077 [2024-11-02 14:51:53.802336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.077 qpair failed and we were unable to recover it. 00:36:02.077 [2024-11-02 14:51:53.802482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.077 [2024-11-02 14:51:53.802510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.077 qpair failed and we were unable to recover it. 00:36:02.077 [2024-11-02 14:51:53.802683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.077 [2024-11-02 14:51:53.802713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.077 qpair failed and we were unable to recover it. 00:36:02.077 [2024-11-02 14:51:53.802854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.077 [2024-11-02 14:51:53.802897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.077 qpair failed and we were unable to recover it. 
00:36:02.077 [2024-11-02 14:51:53.803077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.077 [2024-11-02 14:51:53.803103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.077 qpair failed and we were unable to recover it. 00:36:02.077 [2024-11-02 14:51:53.803260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.077 [2024-11-02 14:51:53.803287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.077 qpair failed and we were unable to recover it. 00:36:02.077 [2024-11-02 14:51:53.803404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.077 [2024-11-02 14:51:53.803430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.077 qpair failed and we were unable to recover it. 00:36:02.077 [2024-11-02 14:51:53.803585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.077 [2024-11-02 14:51:53.803612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.077 qpair failed and we were unable to recover it. 00:36:02.077 [2024-11-02 14:51:53.803777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.077 [2024-11-02 14:51:53.803805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.077 qpair failed and we were unable to recover it. 00:36:02.077 [2024-11-02 14:51:53.803974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.077 [2024-11-02 14:51:53.804005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.077 qpair failed and we were unable to recover it. 00:36:02.077 [2024-11-02 14:51:53.804169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.077 [2024-11-02 14:51:53.804199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.077 qpair failed and we were unable to recover it. 00:36:02.077 [2024-11-02 14:51:53.804370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.077 [2024-11-02 14:51:53.804397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.077 qpair failed and we were unable to recover it. 00:36:02.077 [2024-11-02 14:51:53.804543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.077 [2024-11-02 14:51:53.804570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.077 qpair failed and we were unable to recover it. 00:36:02.077 [2024-11-02 14:51:53.804740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.077 [2024-11-02 14:51:53.804767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.077 qpair failed and we were unable to recover it. 
00:36:02.077 [2024-11-02 14:51:53.804933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.077 [2024-11-02 14:51:53.804962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.077 qpair failed and we were unable to recover it. 00:36:02.077 [2024-11-02 14:51:53.805126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.077 [2024-11-02 14:51:53.805154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.077 qpair failed and we were unable to recover it. 00:36:02.077 [2024-11-02 14:51:53.805318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.077 [2024-11-02 14:51:53.805345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.077 qpair failed and we were unable to recover it. 00:36:02.077 [2024-11-02 14:51:53.805497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.077 [2024-11-02 14:51:53.805524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.077 qpair failed and we were unable to recover it. 00:36:02.077 [2024-11-02 14:51:53.805696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.077 [2024-11-02 14:51:53.805727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.077 qpair failed and we were unable to recover it. 00:36:02.077 [2024-11-02 14:51:53.805888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.077 [2024-11-02 14:51:53.805919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.077 qpair failed and we were unable to recover it. 00:36:02.077 [2024-11-02 14:51:53.806049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.078 [2024-11-02 14:51:53.806079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.078 qpair failed and we were unable to recover it. 00:36:02.078 [2024-11-02 14:51:53.806272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.078 [2024-11-02 14:51:53.806315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.078 qpair failed and we were unable to recover it. 00:36:02.078 [2024-11-02 14:51:53.806506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.078 [2024-11-02 14:51:53.806533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.078 qpair failed and we were unable to recover it. 00:36:02.078 [2024-11-02 14:51:53.806681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.078 [2024-11-02 14:51:53.806708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.078 qpair failed and we were unable to recover it. 
00:36:02.078 [2024-11-02 14:51:53.806856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.078 [2024-11-02 14:51:53.806902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.078 qpair failed and we were unable to recover it. 00:36:02.078 [2024-11-02 14:51:53.807047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.078 [2024-11-02 14:51:53.807091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.078 qpair failed and we were unable to recover it. 00:36:02.078 [2024-11-02 14:51:53.807272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.078 [2024-11-02 14:51:53.807299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.078 qpair failed and we were unable to recover it. 00:36:02.078 [2024-11-02 14:51:53.807424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.078 [2024-11-02 14:51:53.807451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.078 qpair failed and we were unable to recover it. 00:36:02.078 [2024-11-02 14:51:53.807594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.078 [2024-11-02 14:51:53.807623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.078 qpair failed and we were unable to recover it. 00:36:02.078 [2024-11-02 14:51:53.807816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.078 [2024-11-02 14:51:53.807859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.078 qpair failed and we were unable to recover it. 00:36:02.078 [2024-11-02 14:51:53.808061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.078 [2024-11-02 14:51:53.808104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.078 qpair failed and we were unable to recover it. 00:36:02.078 [2024-11-02 14:51:53.808226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.078 [2024-11-02 14:51:53.808253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.078 qpair failed and we were unable to recover it. 00:36:02.078 [2024-11-02 14:51:53.808436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.078 [2024-11-02 14:51:53.808468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.078 qpair failed and we were unable to recover it. 00:36:02.078 [2024-11-02 14:51:53.808603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.078 [2024-11-02 14:51:53.808631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.078 qpair failed and we were unable to recover it. 
00:36:02.078 [2024-11-02 14:51:53.808807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.078 [2024-11-02 14:51:53.808834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.078 qpair failed and we were unable to recover it. 00:36:02.078 [2024-11-02 14:51:53.809009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.078 [2024-11-02 14:51:53.809043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.078 qpair failed and we were unable to recover it. 00:36:02.078 [2024-11-02 14:51:53.809236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.078 [2024-11-02 14:51:53.809268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.078 qpair failed and we were unable to recover it. 00:36:02.078 [2024-11-02 14:51:53.809394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.078 [2024-11-02 14:51:53.809422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.078 qpair failed and we were unable to recover it. 00:36:02.078 [2024-11-02 14:51:53.809605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.078 [2024-11-02 14:51:53.809632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.078 qpair failed and we were unable to recover it. 00:36:02.078 [2024-11-02 14:51:53.809776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.078 [2024-11-02 14:51:53.809802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.078 qpair failed and we were unable to recover it. 00:36:02.078 [2024-11-02 14:51:53.809972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.078 [2024-11-02 14:51:53.809998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.078 qpair failed and we were unable to recover it. 00:36:02.078 [2024-11-02 14:51:53.810198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.078 [2024-11-02 14:51:53.810226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.078 qpair failed and we were unable to recover it. 00:36:02.078 [2024-11-02 14:51:53.810383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.078 [2024-11-02 14:51:53.810411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.078 qpair failed and we were unable to recover it. 00:36:02.078 [2024-11-02 14:51:53.810562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.078 [2024-11-02 14:51:53.810587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.078 qpair failed and we were unable to recover it. 
00:36:02.078 [2024-11-02 14:51:53.810734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.078 [2024-11-02 14:51:53.810763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.078 qpair failed and we were unable to recover it. 00:36:02.078 [2024-11-02 14:51:53.810928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.078 [2024-11-02 14:51:53.810959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.078 qpair failed and we were unable to recover it. 00:36:02.078 [2024-11-02 14:51:53.811116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.078 [2024-11-02 14:51:53.811146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.078 qpair failed and we were unable to recover it. 00:36:02.078 [2024-11-02 14:51:53.811323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.078 [2024-11-02 14:51:53.811350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.078 qpair failed and we were unable to recover it. 00:36:02.078 [2024-11-02 14:51:53.811549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.078 [2024-11-02 14:51:53.811579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.078 qpair failed and we were unable to recover it. 00:36:02.078 [2024-11-02 14:51:53.811721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.078 [2024-11-02 14:51:53.811750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.078 qpair failed and we were unable to recover it. 00:36:02.078 [2024-11-02 14:51:53.811889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.078 [2024-11-02 14:51:53.811919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.078 qpair failed and we were unable to recover it. 00:36:02.078 [2024-11-02 14:51:53.812052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.078 [2024-11-02 14:51:53.812083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.078 qpair failed and we were unable to recover it. 00:36:02.078 [2024-11-02 14:51:53.812275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.078 [2024-11-02 14:51:53.812301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.078 qpair failed and we were unable to recover it. 00:36:02.078 [2024-11-02 14:51:53.812465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.078 [2024-11-02 14:51:53.812492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.078 qpair failed and we were unable to recover it. 
00:36:02.078 [2024-11-02 14:51:53.812647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.078 [2024-11-02 14:51:53.812675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.078 qpair failed and we were unable to recover it. 00:36:02.078 [2024-11-02 14:51:53.812823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.078 [2024-11-02 14:51:53.812850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.078 qpair failed and we were unable to recover it. 00:36:02.078 [2024-11-02 14:51:53.813016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.078 [2024-11-02 14:51:53.813045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.078 qpair failed and we were unable to recover it. 00:36:02.078 [2024-11-02 14:51:53.813212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.078 [2024-11-02 14:51:53.813237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.078 qpair failed and we were unable to recover it. 00:36:02.078 [2024-11-02 14:51:53.813426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.078 [2024-11-02 14:51:53.813453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.078 qpair failed and we were unable to recover it. 00:36:02.079 [2024-11-02 14:51:53.813583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.079 [2024-11-02 14:51:53.813610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.079 qpair failed and we were unable to recover it. 00:36:02.079 [2024-11-02 14:51:53.813831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.079 [2024-11-02 14:51:53.813872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.079 qpair failed and we were unable to recover it. 00:36:02.079 [2024-11-02 14:51:53.814030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.079 [2024-11-02 14:51:53.814059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.079 qpair failed and we were unable to recover it. 00:36:02.079 [2024-11-02 14:51:53.814236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.079 [2024-11-02 14:51:53.814272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.079 qpair failed and we were unable to recover it. 00:36:02.079 [2024-11-02 14:51:53.814414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.079 [2024-11-02 14:51:53.814441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.079 qpair failed and we were unable to recover it. 
00:36:02.079 [2024-11-02 14:51:53.814560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.079 [2024-11-02 14:51:53.814586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.079 qpair failed and we were unable to recover it. 00:36:02.079 [2024-11-02 14:51:53.814754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.079 [2024-11-02 14:51:53.814784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.079 qpair failed and we were unable to recover it. 00:36:02.079 [2024-11-02 14:51:53.814934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.079 [2024-11-02 14:51:53.814961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.079 qpair failed and we were unable to recover it. 00:36:02.079 [2024-11-02 14:51:53.815142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.079 [2024-11-02 14:51:53.815171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.079 qpair failed and we were unable to recover it. 00:36:02.079 [2024-11-02 14:51:53.815319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.079 [2024-11-02 14:51:53.815347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.079 qpair failed and we were unable to recover it. 00:36:02.079 [2024-11-02 14:51:53.815525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.079 [2024-11-02 14:51:53.815552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.079 qpair failed and we were unable to recover it. 00:36:02.079 [2024-11-02 14:51:53.815674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.079 [2024-11-02 14:51:53.815700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.079 qpair failed and we were unable to recover it. 00:36:02.079 [2024-11-02 14:51:53.815851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.079 [2024-11-02 14:51:53.815877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.079 qpair failed and we were unable to recover it. 00:36:02.079 [2024-11-02 14:51:53.816075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.079 [2024-11-02 14:51:53.816104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.079 qpair failed and we were unable to recover it. 00:36:02.079 [2024-11-02 14:51:53.816270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.079 [2024-11-02 14:51:53.816298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.079 qpair failed and we were unable to recover it. 
00:36:02.079 [2024-11-02 14:51:53.816453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.079 [2024-11-02 14:51:53.816480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.079 qpair failed and we were unable to recover it. 00:36:02.079 [2024-11-02 14:51:53.816636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.079 [2024-11-02 14:51:53.816670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.079 qpair failed and we were unable to recover it. 00:36:02.079 [2024-11-02 14:51:53.816845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.079 [2024-11-02 14:51:53.816874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.079 qpair failed and we were unable to recover it. 00:36:02.079 [2024-11-02 14:51:53.817078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.079 [2024-11-02 14:51:53.817106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.079 qpair failed and we were unable to recover it. 00:36:02.079 [2024-11-02 14:51:53.817247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.079 [2024-11-02 14:51:53.817299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.079 qpair failed and we were unable to recover it. 00:36:02.079 [2024-11-02 14:51:53.817449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.079 [2024-11-02 14:51:53.817476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.079 qpair failed and we were unable to recover it. 00:36:02.079 [2024-11-02 14:51:53.817660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.079 [2024-11-02 14:51:53.817689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.079 qpair failed and we were unable to recover it. 00:36:02.079 [2024-11-02 14:51:53.817870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.079 [2024-11-02 14:51:53.817914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.079 qpair failed and we were unable to recover it. 00:36:02.079 [2024-11-02 14:51:53.818118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.079 [2024-11-02 14:51:53.818145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.079 qpair failed and we were unable to recover it. 00:36:02.079 [2024-11-02 14:51:53.818321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.079 [2024-11-02 14:51:53.818349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.079 qpair failed and we were unable to recover it. 
00:36:02.079 [2024-11-02 14:51:53.818498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.079 [2024-11-02 14:51:53.818540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.079 qpair failed and we were unable to recover it. 00:36:02.079 [2024-11-02 14:51:53.818739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.079 [2024-11-02 14:51:53.818784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.079 qpair failed and we were unable to recover it. 00:36:02.079 [2024-11-02 14:51:53.819002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.079 [2024-11-02 14:51:53.819065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.079 qpair failed and we were unable to recover it. 00:36:02.079 [2024-11-02 14:51:53.819220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.079 [2024-11-02 14:51:53.819246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.079 qpair failed and we were unable to recover it. 00:36:02.079 [2024-11-02 14:51:53.819384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.079 [2024-11-02 14:51:53.819412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.079 qpair failed and we were unable to recover it. 00:36:02.079 [2024-11-02 14:51:53.819569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.079 [2024-11-02 14:51:53.819611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.079 qpair failed and we were unable to recover it. 00:36:02.079 [2024-11-02 14:51:53.819784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.079 [2024-11-02 14:51:53.819814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.079 qpair failed and we were unable to recover it. 00:36:02.079 [2024-11-02 14:51:53.819949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.079 [2024-11-02 14:51:53.819977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.079 qpair failed and we were unable to recover it. 00:36:02.079 [2024-11-02 14:51:53.820124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.079 [2024-11-02 14:51:53.820151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.079 qpair failed and we were unable to recover it. 00:36:02.079 [2024-11-02 14:51:53.820292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.079 [2024-11-02 14:51:53.820319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.079 qpair failed and we were unable to recover it. 
00:36:02.079 [2024-11-02 14:51:53.820467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.079 [2024-11-02 14:51:53.820494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.079 qpair failed and we were unable to recover it. 00:36:02.079 [2024-11-02 14:51:53.820686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.079 [2024-11-02 14:51:53.820712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.079 qpair failed and we were unable to recover it. 00:36:02.079 [2024-11-02 14:51:53.820917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.079 [2024-11-02 14:51:53.820962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.079 qpair failed and we were unable to recover it. 00:36:02.079 [2024-11-02 14:51:53.821134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.080 [2024-11-02 14:51:53.821179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.080 qpair failed and we were unable to recover it. 00:36:02.080 [2024-11-02 14:51:53.821330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.080 [2024-11-02 14:51:53.821357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.080 qpair failed and we were unable to recover it. 00:36:02.080 [2024-11-02 14:51:53.821531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.080 [2024-11-02 14:51:53.821574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.080 qpair failed and we were unable to recover it. 00:36:02.080 [2024-11-02 14:51:53.821754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.080 [2024-11-02 14:51:53.821798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.080 qpair failed and we were unable to recover it. 00:36:02.080 [2024-11-02 14:51:53.821974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.080 [2024-11-02 14:51:53.822020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.080 qpair failed and we were unable to recover it. 00:36:02.080 [2024-11-02 14:51:53.822181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.080 [2024-11-02 14:51:53.822208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.080 qpair failed and we were unable to recover it. 00:36:02.080 [2024-11-02 14:51:53.822365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.080 [2024-11-02 14:51:53.822412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.080 qpair failed and we were unable to recover it. 
00:36:02.080 [2024-11-02 14:51:53.822581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.080 [2024-11-02 14:51:53.822625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.080 qpair failed and we were unable to recover it. 00:36:02.080 [2024-11-02 14:51:53.822809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.080 [2024-11-02 14:51:53.822835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.080 qpair failed and we were unable to recover it. 00:36:02.080 [2024-11-02 14:51:53.823016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.080 [2024-11-02 14:51:53.823042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.080 qpair failed and we were unable to recover it. 00:36:02.080 [2024-11-02 14:51:53.823188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.080 [2024-11-02 14:51:53.823216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.080 qpair failed and we were unable to recover it. 00:36:02.080 [2024-11-02 14:51:53.823464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.080 [2024-11-02 14:51:53.823509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.080 qpair failed and we were unable to recover it. 00:36:02.080 [2024-11-02 14:51:53.823714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.080 [2024-11-02 14:51:53.823743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.080 qpair failed and we were unable to recover it. 00:36:02.080 [2024-11-02 14:51:53.823956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.080 [2024-11-02 14:51:53.823982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.080 qpair failed and we were unable to recover it. 00:36:02.080 [2024-11-02 14:51:53.824109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.080 [2024-11-02 14:51:53.824136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.080 qpair failed and we were unable to recover it. 00:36:02.080 [2024-11-02 14:51:53.824308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.080 [2024-11-02 14:51:53.824338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.080 qpair failed and we were unable to recover it. 00:36:02.080 [2024-11-02 14:51:53.824501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.080 [2024-11-02 14:51:53.824530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.080 qpair failed and we were unable to recover it. 
00:36:02.080 [2024-11-02 14:51:53.824696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.080 [2024-11-02 14:51:53.824724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.080 qpair failed and we were unable to recover it. 00:36:02.080 [2024-11-02 14:51:53.824871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.080 [2024-11-02 14:51:53.824903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.080 qpair failed and we were unable to recover it. 00:36:02.080 [2024-11-02 14:51:53.825056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.080 [2024-11-02 14:51:53.825082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.080 qpair failed and we were unable to recover it. 00:36:02.080 [2024-11-02 14:51:53.825231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.080 [2024-11-02 14:51:53.825265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.080 qpair failed and we were unable to recover it. 00:36:02.080 [2024-11-02 14:51:53.825399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.080 [2024-11-02 14:51:53.825427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.080 qpair failed and we were unable to recover it. 00:36:02.080 [2024-11-02 14:51:53.825559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.080 [2024-11-02 14:51:53.825585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.080 qpair failed and we were unable to recover it. 00:36:02.080 [2024-11-02 14:51:53.825738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.080 [2024-11-02 14:51:53.825764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.080 qpair failed and we were unable to recover it. 00:36:02.080 [2024-11-02 14:51:53.825910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.080 [2024-11-02 14:51:53.825937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.080 qpair failed and we were unable to recover it. 00:36:02.080 [2024-11-02 14:51:53.826092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.080 [2024-11-02 14:51:53.826118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.080 qpair failed and we were unable to recover it. 00:36:02.080 [2024-11-02 14:51:53.826299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.080 [2024-11-02 14:51:53.826326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.080 qpair failed and we were unable to recover it. 
00:36:02.080 [2024-11-02 14:51:53.826482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.080 [2024-11-02 14:51:53.826508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.080 qpair failed and we were unable to recover it. 00:36:02.080 [2024-11-02 14:51:53.826679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.080 [2024-11-02 14:51:53.826723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.080 qpair failed and we were unable to recover it. 00:36:02.080 [2024-11-02 14:51:53.826877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.080 [2024-11-02 14:51:53.826903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.080 qpair failed and we were unable to recover it. 00:36:02.080 [2024-11-02 14:51:53.827028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.080 [2024-11-02 14:51:53.827054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.080 qpair failed and we were unable to recover it. 00:36:02.080 [2024-11-02 14:51:53.827195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.080 [2024-11-02 14:51:53.827222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.080 qpair failed and we were unable to recover it. 00:36:02.080 [2024-11-02 14:51:53.827377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.080 [2024-11-02 14:51:53.827422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.080 qpair failed and we were unable to recover it. 00:36:02.080 [2024-11-02 14:51:53.827595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.080 [2024-11-02 14:51:53.827642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.080 qpair failed and we were unable to recover it. 00:36:02.080 [2024-11-02 14:51:53.827787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.080 [2024-11-02 14:51:53.827832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.080 qpair failed and we were unable to recover it. 00:36:02.081 [2024-11-02 14:51:53.828009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.081 [2024-11-02 14:51:53.828035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.081 qpair failed and we were unable to recover it. 00:36:02.081 [2024-11-02 14:51:53.828185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.081 [2024-11-02 14:51:53.828212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.081 qpair failed and we were unable to recover it. 
00:36:02.081 [2024-11-02 14:51:53.828418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.081 [2024-11-02 14:51:53.828464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.081 qpair failed and we were unable to recover it. 00:36:02.081 [2024-11-02 14:51:53.828641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.081 [2024-11-02 14:51:53.828687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.081 qpair failed and we were unable to recover it. 00:36:02.081 [2024-11-02 14:51:53.828853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.081 [2024-11-02 14:51:53.828897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.081 qpair failed and we were unable to recover it. 00:36:02.081 [2024-11-02 14:51:53.829040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.081 [2024-11-02 14:51:53.829066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.081 qpair failed and we were unable to recover it. 00:36:02.081 [2024-11-02 14:51:53.829200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.081 [2024-11-02 14:51:53.829227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.081 qpair failed and we were unable to recover it. 00:36:02.081 [2024-11-02 14:51:53.829372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.081 [2024-11-02 14:51:53.829417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.081 qpair failed and we were unable to recover it. 00:36:02.081 [2024-11-02 14:51:53.829589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.081 [2024-11-02 14:51:53.829633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.081 qpair failed and we were unable to recover it. 00:36:02.081 [2024-11-02 14:51:53.829797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.081 [2024-11-02 14:51:53.829840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.081 qpair failed and we were unable to recover it. 00:36:02.081 [2024-11-02 14:51:53.829966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.081 [2024-11-02 14:51:53.829992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.081 qpair failed and we were unable to recover it. 00:36:02.081 [2024-11-02 14:51:53.830138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.081 [2024-11-02 14:51:53.830164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.081 qpair failed and we were unable to recover it. 
00:36:02.081 [2024-11-02 14:51:53.830337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.081 [2024-11-02 14:51:53.830383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.081 qpair failed and we were unable to recover it. 00:36:02.081 [2024-11-02 14:51:53.830550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.081 [2024-11-02 14:51:53.830578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.081 qpair failed and we were unable to recover it. 00:36:02.081 [2024-11-02 14:51:53.830764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.081 [2024-11-02 14:51:53.830808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.081 qpair failed and we were unable to recover it. 00:36:02.081 [2024-11-02 14:51:53.830943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.081 [2024-11-02 14:51:53.830969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.081 qpair failed and we were unable to recover it. 00:36:02.081 [2024-11-02 14:51:53.831115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.081 [2024-11-02 14:51:53.831141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.081 qpair failed and we were unable to recover it. 00:36:02.081 [2024-11-02 14:51:53.831312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.081 [2024-11-02 14:51:53.831358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.081 qpair failed and we were unable to recover it. 00:36:02.081 [2024-11-02 14:51:53.831502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.081 [2024-11-02 14:51:53.831545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.081 qpair failed and we were unable to recover it. 00:36:02.081 [2024-11-02 14:51:53.831712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.081 [2024-11-02 14:51:53.831755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.081 qpair failed and we were unable to recover it. 00:36:02.081 [2024-11-02 14:51:53.831899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.081 [2024-11-02 14:51:53.831925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.081 qpair failed and we were unable to recover it. 00:36:02.081 [2024-11-02 14:51:53.832100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.081 [2024-11-02 14:51:53.832126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.081 qpair failed and we were unable to recover it. 
00:36:02.081 [2024-11-02 14:51:53.832283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.081 [2024-11-02 14:51:53.832310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.081 qpair failed and we were unable to recover it. 00:36:02.081 [2024-11-02 14:51:53.832478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.081 [2024-11-02 14:51:53.832526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.081 qpair failed and we were unable to recover it. 00:36:02.081 [2024-11-02 14:51:53.832678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.081 [2024-11-02 14:51:53.832721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.081 qpair failed and we were unable to recover it. 00:36:02.081 [2024-11-02 14:51:53.832844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.081 [2024-11-02 14:51:53.832871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.081 qpair failed and we were unable to recover it. 00:36:02.081 [2024-11-02 14:51:53.832998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.081 [2024-11-02 14:51:53.833026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.081 qpair failed and we were unable to recover it. 00:36:02.081 [2024-11-02 14:51:53.833158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.081 [2024-11-02 14:51:53.833185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.081 qpair failed and we were unable to recover it. 00:36:02.081 [2024-11-02 14:51:53.833325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.081 [2024-11-02 14:51:53.833372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.081 qpair failed and we were unable to recover it. 00:36:02.081 [2024-11-02 14:51:53.833543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.081 [2024-11-02 14:51:53.833587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.081 qpair failed and we were unable to recover it. 00:36:02.081 [2024-11-02 14:51:53.833765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.081 [2024-11-02 14:51:53.833808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.081 qpair failed and we were unable to recover it. 00:36:02.081 [2024-11-02 14:51:53.833932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.081 [2024-11-02 14:51:53.833959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.081 qpair failed and we were unable to recover it. 
00:36:02.081 [2024-11-02 14:51:53.834115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.081 [2024-11-02 14:51:53.834142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.081 qpair failed and we were unable to recover it. 00:36:02.081 [2024-11-02 14:51:53.834313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.081 [2024-11-02 14:51:53.834343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.081 qpair failed and we were unable to recover it. 00:36:02.081 [2024-11-02 14:51:53.834545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.081 [2024-11-02 14:51:53.834591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.081 qpair failed and we were unable to recover it. 00:36:02.081 [2024-11-02 14:51:53.834762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.081 [2024-11-02 14:51:53.834811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.081 qpair failed and we were unable to recover it. 00:36:02.081 [2024-11-02 14:51:53.834960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.081 [2024-11-02 14:51:53.834987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.081 qpair failed and we were unable to recover it. 00:36:02.081 [2024-11-02 14:51:53.835117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.081 [2024-11-02 14:51:53.835144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.081 qpair failed and we were unable to recover it. 00:36:02.081 [2024-11-02 14:51:53.835308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.082 [2024-11-02 14:51:53.835336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.082 qpair failed and we were unable to recover it. 00:36:02.082 [2024-11-02 14:51:53.835482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.082 [2024-11-02 14:51:53.835512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.082 qpair failed and we were unable to recover it. 00:36:02.082 [2024-11-02 14:51:53.835730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.082 [2024-11-02 14:51:53.835774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.082 qpair failed and we were unable to recover it. 00:36:02.082 [2024-11-02 14:51:53.835947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.082 [2024-11-02 14:51:53.835973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.082 qpair failed and we were unable to recover it. 
00:36:02.083 [2024-11-02 14:51:53.850057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.083 [2024-11-02 14:51:53.850083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420
00:36:02.083 qpair failed and we were unable to recover it.
00:36:02.083 [2024-11-02 14:51:53.850251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.083 [2024-11-02 14:51:53.850314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420
00:36:02.083 qpair failed and we were unable to recover it.
00:36:02.083 [2024-11-02 14:51:53.850499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.083 [2024-11-02 14:51:53.850531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420
00:36:02.083 qpair failed and we were unable to recover it.
00:36:02.083 [2024-11-02 14:51:53.850727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.083 [2024-11-02 14:51:53.850756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420
00:36:02.083 qpair failed and we were unable to recover it.
00:36:02.083 [2024-11-02 14:51:53.850924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.083 [2024-11-02 14:51:53.850954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420
00:36:02.083 qpair failed and we were unable to recover it.
00:36:02.083 [2024-11-02 14:51:53.851127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.084 [2024-11-02 14:51:53.851156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420
00:36:02.084 qpair failed and we were unable to recover it.
00:36:02.084 [2024-11-02 14:51:53.851335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.084 [2024-11-02 14:51:53.851363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420
00:36:02.084 qpair failed and we were unable to recover it.
00:36:02.084 [2024-11-02 14:51:53.851511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.084 [2024-11-02 14:51:53.851539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420
00:36:02.084 qpair failed and we were unable to recover it.
00:36:02.084 [2024-11-02 14:51:53.851715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.084 [2024-11-02 14:51:53.851743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420
00:36:02.084 qpair failed and we were unable to recover it.
00:36:02.084 [2024-11-02 14:51:53.851940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.084 [2024-11-02 14:51:53.851969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420
00:36:02.084 qpair failed and we were unable to recover it.
00:36:02.087 [2024-11-02 14:51:53.875798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.087 [2024-11-02 14:51:53.875824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.087 qpair failed and we were unable to recover it. 00:36:02.087 [2024-11-02 14:51:53.875995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.087 [2024-11-02 14:51:53.876022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.087 qpair failed and we were unable to recover it. 00:36:02.087 [2024-11-02 14:51:53.876167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.087 [2024-11-02 14:51:53.876198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.087 qpair failed and we were unable to recover it. 00:36:02.087 [2024-11-02 14:51:53.876367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.087 [2024-11-02 14:51:53.876395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.087 qpair failed and we were unable to recover it. 00:36:02.087 [2024-11-02 14:51:53.876554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.087 [2024-11-02 14:51:53.876599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.087 qpair failed and we were unable to recover it. 00:36:02.087 [2024-11-02 14:51:53.876773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.087 [2024-11-02 14:51:53.876803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.087 qpair failed and we were unable to recover it. 00:36:02.087 [2024-11-02 14:51:53.876934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.087 [2024-11-02 14:51:53.876963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.087 qpair failed and we were unable to recover it. 00:36:02.087 [2024-11-02 14:51:53.877131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.087 [2024-11-02 14:51:53.877158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.087 qpair failed and we were unable to recover it. 00:36:02.087 [2024-11-02 14:51:53.877360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.087 [2024-11-02 14:51:53.877390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.087 qpair failed and we were unable to recover it. 00:36:02.087 [2024-11-02 14:51:53.877538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.087 [2024-11-02 14:51:53.877567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.087 qpair failed and we were unable to recover it. 
00:36:02.087 [2024-11-02 14:51:53.877754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.087 [2024-11-02 14:51:53.877782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.087 qpair failed and we were unable to recover it. 00:36:02.087 [2024-11-02 14:51:53.877978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.087 [2024-11-02 14:51:53.878005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.087 qpair failed and we were unable to recover it. 00:36:02.087 [2024-11-02 14:51:53.878180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.087 [2024-11-02 14:51:53.878209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.087 qpair failed and we were unable to recover it. 00:36:02.087 [2024-11-02 14:51:53.878386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.087 [2024-11-02 14:51:53.878414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.087 qpair failed and we were unable to recover it. 00:36:02.087 [2024-11-02 14:51:53.878526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.087 [2024-11-02 14:51:53.878568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.087 qpair failed and we were unable to recover it. 00:36:02.087 [2024-11-02 14:51:53.878766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.087 [2024-11-02 14:51:53.878794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.087 qpair failed and we were unable to recover it. 00:36:02.087 [2024-11-02 14:51:53.878942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.087 [2024-11-02 14:51:53.878983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.087 qpair failed and we were unable to recover it. 00:36:02.087 [2024-11-02 14:51:53.879121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.087 [2024-11-02 14:51:53.879150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.087 qpair failed and we were unable to recover it. 00:36:02.088 [2024-11-02 14:51:53.879347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.088 [2024-11-02 14:51:53.879374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.088 qpair failed and we were unable to recover it. 00:36:02.088 [2024-11-02 14:51:53.879527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.088 [2024-11-02 14:51:53.879553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.088 qpair failed and we were unable to recover it. 
00:36:02.088 [2024-11-02 14:51:53.879754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.088 [2024-11-02 14:51:53.879784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.088 qpair failed and we were unable to recover it. 00:36:02.088 [2024-11-02 14:51:53.879949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.088 [2024-11-02 14:51:53.879984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.088 qpair failed and we were unable to recover it. 00:36:02.088 [2024-11-02 14:51:53.880147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.088 [2024-11-02 14:51:53.880177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.088 qpair failed and we were unable to recover it. 00:36:02.088 [2024-11-02 14:51:53.880346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.088 [2024-11-02 14:51:53.880374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.088 qpair failed and we were unable to recover it. 00:36:02.088 [2024-11-02 14:51:53.880551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.088 [2024-11-02 14:51:53.880578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.088 qpair failed and we were unable to recover it. 00:36:02.088 [2024-11-02 14:51:53.880784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.088 [2024-11-02 14:51:53.880813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.088 qpair failed and we were unable to recover it. 00:36:02.088 [2024-11-02 14:51:53.881003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.088 [2024-11-02 14:51:53.881033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.088 qpair failed and we were unable to recover it. 00:36:02.088 [2024-11-02 14:51:53.881177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.088 [2024-11-02 14:51:53.881204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.088 qpair failed and we were unable to recover it. 00:36:02.088 [2024-11-02 14:51:53.881384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.088 [2024-11-02 14:51:53.881411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.088 qpair failed and we were unable to recover it. 00:36:02.088 [2024-11-02 14:51:53.881566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.088 [2024-11-02 14:51:53.881591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.088 qpair failed and we were unable to recover it. 
00:36:02.088 [2024-11-02 14:51:53.881710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.088 [2024-11-02 14:51:53.881735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.088 qpair failed and we were unable to recover it. 00:36:02.088 [2024-11-02 14:51:53.881883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.088 [2024-11-02 14:51:53.881910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.088 qpair failed and we were unable to recover it. 00:36:02.088 [2024-11-02 14:51:53.882080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.088 [2024-11-02 14:51:53.882109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.088 qpair failed and we were unable to recover it. 00:36:02.088 [2024-11-02 14:51:53.882279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.088 [2024-11-02 14:51:53.882306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.088 qpair failed and we were unable to recover it. 00:36:02.088 [2024-11-02 14:51:53.882456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.088 [2024-11-02 14:51:53.882483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.088 qpair failed and we were unable to recover it. 00:36:02.088 [2024-11-02 14:51:53.882638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.088 [2024-11-02 14:51:53.882664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.088 qpair failed and we were unable to recover it. 00:36:02.088 [2024-11-02 14:51:53.882866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.088 [2024-11-02 14:51:53.882895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.088 qpair failed and we were unable to recover it. 00:36:02.088 [2024-11-02 14:51:53.883073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.088 [2024-11-02 14:51:53.883100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.088 qpair failed and we were unable to recover it. 00:36:02.088 [2024-11-02 14:51:53.883251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.088 [2024-11-02 14:51:53.883285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.088 qpair failed and we were unable to recover it. 00:36:02.088 [2024-11-02 14:51:53.883494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.088 [2024-11-02 14:51:53.883521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.088 qpair failed and we were unable to recover it. 
00:36:02.088 [2024-11-02 14:51:53.883720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.088 [2024-11-02 14:51:53.883749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.088 qpair failed and we were unable to recover it. 00:36:02.088 [2024-11-02 14:51:53.883891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.088 [2024-11-02 14:51:53.883921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.088 qpair failed and we were unable to recover it. 00:36:02.088 [2024-11-02 14:51:53.884053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.088 [2024-11-02 14:51:53.884081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.088 qpair failed and we were unable to recover it. 00:36:02.088 [2024-11-02 14:51:53.884251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.088 [2024-11-02 14:51:53.884298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.088 qpair failed and we were unable to recover it. 00:36:02.088 [2024-11-02 14:51:53.884498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.088 [2024-11-02 14:51:53.884527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.088 qpair failed and we were unable to recover it. 00:36:02.088 [2024-11-02 14:51:53.884692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.088 [2024-11-02 14:51:53.884721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.088 qpair failed and we were unable to recover it. 00:36:02.088 [2024-11-02 14:51:53.884909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.088 [2024-11-02 14:51:53.884939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.088 qpair failed and we were unable to recover it. 00:36:02.088 [2024-11-02 14:51:53.885136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.088 [2024-11-02 14:51:53.885161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.088 qpair failed and we were unable to recover it. 00:36:02.088 [2024-11-02 14:51:53.885378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.088 [2024-11-02 14:51:53.885408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.088 qpair failed and we were unable to recover it. 00:36:02.088 [2024-11-02 14:51:53.885573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.088 [2024-11-02 14:51:53.885602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.088 qpair failed and we were unable to recover it. 
00:36:02.088 [2024-11-02 14:51:53.885778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.088 [2024-11-02 14:51:53.885804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.088 qpair failed and we were unable to recover it. 00:36:02.088 [2024-11-02 14:51:53.885950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.088 [2024-11-02 14:51:53.885975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.088 qpair failed and we were unable to recover it. 00:36:02.088 [2024-11-02 14:51:53.886123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.088 [2024-11-02 14:51:53.886149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.088 qpair failed and we were unable to recover it. 00:36:02.088 [2024-11-02 14:51:53.886301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.088 [2024-11-02 14:51:53.886327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.088 qpair failed and we were unable to recover it. 00:36:02.088 [2024-11-02 14:51:53.886495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.088 [2024-11-02 14:51:53.886524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.088 qpair failed and we were unable to recover it. 00:36:02.088 [2024-11-02 14:51:53.886704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.088 [2024-11-02 14:51:53.886730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.088 qpair failed and we were unable to recover it. 00:36:02.088 [2024-11-02 14:51:53.886900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.089 [2024-11-02 14:51:53.886930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.089 qpair failed and we were unable to recover it. 00:36:02.089 [2024-11-02 14:51:53.887120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.089 [2024-11-02 14:51:53.887148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.089 qpair failed and we were unable to recover it. 00:36:02.089 [2024-11-02 14:51:53.887325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.089 [2024-11-02 14:51:53.887352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.089 qpair failed and we were unable to recover it. 00:36:02.089 [2024-11-02 14:51:53.887499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.089 [2024-11-02 14:51:53.887527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.089 qpair failed and we were unable to recover it. 
00:36:02.089 [2024-11-02 14:51:53.887694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.089 [2024-11-02 14:51:53.887723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.089 qpair failed and we were unable to recover it. 00:36:02.089 [2024-11-02 14:51:53.887868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.089 [2024-11-02 14:51:53.887901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.089 qpair failed and we were unable to recover it. 00:36:02.089 [2024-11-02 14:51:53.888037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.089 [2024-11-02 14:51:53.888066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.089 qpair failed and we were unable to recover it. 00:36:02.089 [2024-11-02 14:51:53.888264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.089 [2024-11-02 14:51:53.888308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.089 qpair failed and we were unable to recover it. 00:36:02.089 [2024-11-02 14:51:53.888457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.089 [2024-11-02 14:51:53.888484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.089 qpair failed and we were unable to recover it. 00:36:02.089 [2024-11-02 14:51:53.888683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.089 [2024-11-02 14:51:53.888713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.089 qpair failed and we were unable to recover it. 00:36:02.089 [2024-11-02 14:51:53.888847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.089 [2024-11-02 14:51:53.888877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.089 qpair failed and we were unable to recover it. 00:36:02.089 [2024-11-02 14:51:53.889047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.089 [2024-11-02 14:51:53.889073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.089 qpair failed and we were unable to recover it. 00:36:02.089 [2024-11-02 14:51:53.889241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.089 [2024-11-02 14:51:53.889280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.089 qpair failed and we were unable to recover it. 00:36:02.089 [2024-11-02 14:51:53.889473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.089 [2024-11-02 14:51:53.889502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.089 qpair failed and we were unable to recover it. 
00:36:02.089 [2024-11-02 14:51:53.889666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.089 [2024-11-02 14:51:53.889695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.089 qpair failed and we were unable to recover it. 00:36:02.089 [2024-11-02 14:51:53.889861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.089 [2024-11-02 14:51:53.889889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.089 qpair failed and we were unable to recover it. 00:36:02.089 [2024-11-02 14:51:53.890041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.089 [2024-11-02 14:51:53.890067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.089 qpair failed and we were unable to recover it. 00:36:02.089 [2024-11-02 14:51:53.890180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.089 [2024-11-02 14:51:53.890205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.089 qpair failed and we were unable to recover it. 00:36:02.089 [2024-11-02 14:51:53.890358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.089 [2024-11-02 14:51:53.890385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.089 qpair failed and we were unable to recover it. 00:36:02.089 [2024-11-02 14:51:53.890516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.089 [2024-11-02 14:51:53.890543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.089 qpair failed and we were unable to recover it. 00:36:02.089 [2024-11-02 14:51:53.890735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.089 [2024-11-02 14:51:53.890763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.089 qpair failed and we were unable to recover it. 00:36:02.089 [2024-11-02 14:51:53.890924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.089 [2024-11-02 14:51:53.890953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.089 qpair failed and we were unable to recover it. 00:36:02.089 [2024-11-02 14:51:53.891115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.089 [2024-11-02 14:51:53.891145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.089 qpair failed and we were unable to recover it. 00:36:02.089 [2024-11-02 14:51:53.891304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.089 [2024-11-02 14:51:53.891331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.089 qpair failed and we were unable to recover it. 
00:36:02.089 [2024-11-02 14:51:53.891530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.089 [2024-11-02 14:51:53.891559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.089 qpair failed and we were unable to recover it. 00:36:02.089 [2024-11-02 14:51:53.891722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.089 [2024-11-02 14:51:53.891750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.089 qpair failed and we were unable to recover it. 00:36:02.089 [2024-11-02 14:51:53.891912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.089 [2024-11-02 14:51:53.891941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.089 qpair failed and we were unable to recover it. 00:36:02.089 [2024-11-02 14:51:53.892101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.089 [2024-11-02 14:51:53.892127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.089 qpair failed and we were unable to recover it. 00:36:02.089 [2024-11-02 14:51:53.892301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.089 [2024-11-02 14:51:53.892331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.089 qpair failed and we were unable to recover it. 00:36:02.089 [2024-11-02 14:51:53.892512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.089 [2024-11-02 14:51:53.892538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.089 qpair failed and we were unable to recover it. 00:36:02.089 [2024-11-02 14:51:53.892691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.089 [2024-11-02 14:51:53.892717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.089 qpair failed and we were unable to recover it. 00:36:02.089 [2024-11-02 14:51:53.892866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.089 [2024-11-02 14:51:53.892892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.089 qpair failed and we were unable to recover it. 00:36:02.089 [2024-11-02 14:51:53.893025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.089 [2024-11-02 14:51:53.893066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.089 qpair failed and we were unable to recover it. 00:36:02.089 [2024-11-02 14:51:53.893237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.089 [2024-11-02 14:51:53.893286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.090 qpair failed and we were unable to recover it. 
00:36:02.090 [2024-11-02 14:51:53.893489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.090 [2024-11-02 14:51:53.893515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.090 qpair failed and we were unable to recover it. 00:36:02.090 [2024-11-02 14:51:53.893666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.090 [2024-11-02 14:51:53.893691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.090 qpair failed and we were unable to recover it. 00:36:02.090 [2024-11-02 14:51:53.893874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.090 [2024-11-02 14:51:53.893904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.090 qpair failed and we were unable to recover it. 00:36:02.090 [2024-11-02 14:51:53.894039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.090 [2024-11-02 14:51:53.894068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.090 qpair failed and we were unable to recover it. 00:36:02.090 [2024-11-02 14:51:53.894223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.090 [2024-11-02 14:51:53.894252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.090 qpair failed and we were unable to recover it. 00:36:02.090 [2024-11-02 14:51:53.894405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.090 [2024-11-02 14:51:53.894431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.090 qpair failed and we were unable to recover it. 00:36:02.090 [2024-11-02 14:51:53.894606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.090 [2024-11-02 14:51:53.894636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.090 qpair failed and we were unable to recover it. 00:36:02.090 [2024-11-02 14:51:53.894765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.090 [2024-11-02 14:51:53.894795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.090 qpair failed and we were unable to recover it. 00:36:02.090 [2024-11-02 14:51:53.894955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.090 [2024-11-02 14:51:53.894984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.090 qpair failed and we were unable to recover it. 00:36:02.090 [2024-11-02 14:51:53.895160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.090 [2024-11-02 14:51:53.895186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.090 qpair failed and we were unable to recover it. 
00:36:02.090 [2024-11-02 14:51:53.895342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.090 [2024-11-02 14:51:53.895369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.090 qpair failed and we were unable to recover it. 00:36:02.090 [2024-11-02 14:51:53.895488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.090 [2024-11-02 14:51:53.895514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.090 qpair failed and we were unable to recover it. 00:36:02.090 [2024-11-02 14:51:53.895663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.090 [2024-11-02 14:51:53.895690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.090 qpair failed and we were unable to recover it. 00:36:02.090 [2024-11-02 14:51:53.895860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.090 [2024-11-02 14:51:53.895886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.090 qpair failed and we were unable to recover it. 00:36:02.090 [2024-11-02 14:51:53.896045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.090 [2024-11-02 14:51:53.896074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.090 qpair failed and we were unable to recover it. 00:36:02.090 [2024-11-02 14:51:53.896272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.090 [2024-11-02 14:51:53.896315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.090 qpair failed and we were unable to recover it. 00:36:02.090 [2024-11-02 14:51:53.896492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.090 [2024-11-02 14:51:53.896517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.090 qpair failed and we were unable to recover it. 00:36:02.090 [2024-11-02 14:51:53.896647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.090 [2024-11-02 14:51:53.896674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.090 qpair failed and we were unable to recover it. 00:36:02.090 [2024-11-02 14:51:53.896799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.090 [2024-11-02 14:51:53.896826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.090 qpair failed and we were unable to recover it. 00:36:02.090 [2024-11-02 14:51:53.896992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.090 [2024-11-02 14:51:53.897021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.090 qpair failed and we were unable to recover it. 
00:36:02.090 [2024-11-02 14:51:53.897188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.090 [2024-11-02 14:51:53.897217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.090 qpair failed and we were unable to recover it. 00:36:02.090 [2024-11-02 14:51:53.897375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.090 [2024-11-02 14:51:53.897401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.090 qpair failed and we were unable to recover it. 00:36:02.090 [2024-11-02 14:51:53.897598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.090 [2024-11-02 14:51:53.897627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.090 qpair failed and we were unable to recover it. 00:36:02.090 [2024-11-02 14:51:53.897800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.090 [2024-11-02 14:51:53.897830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.090 qpair failed and we were unable to recover it. 00:36:02.090 [2024-11-02 14:51:53.898027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.090 [2024-11-02 14:51:53.898053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.090 qpair failed and we were unable to recover it. 00:36:02.090 [2024-11-02 14:51:53.898237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.090 [2024-11-02 14:51:53.898270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.090 qpair failed and we were unable to recover it. 00:36:02.090 [2024-11-02 14:51:53.898448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.090 [2024-11-02 14:51:53.898477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.090 qpair failed and we were unable to recover it. 00:36:02.090 [2024-11-02 14:51:53.898636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.090 [2024-11-02 14:51:53.898665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.090 qpair failed and we were unable to recover it. 00:36:02.090 [2024-11-02 14:51:53.898805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.090 [2024-11-02 14:51:53.898834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.090 qpair failed and we were unable to recover it. 00:36:02.090 [2024-11-02 14:51:53.899002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.090 [2024-11-02 14:51:53.899027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.090 qpair failed and we were unable to recover it. 
00:36:02.090 [2024-11-02 14:51:53.899221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.090 [2024-11-02 14:51:53.899249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.090 qpair failed and we were unable to recover it. 00:36:02.090 [2024-11-02 14:51:53.899401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.090 [2024-11-02 14:51:53.899427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.090 qpair failed and we were unable to recover it. 00:36:02.090 [2024-11-02 14:51:53.899620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.090 [2024-11-02 14:51:53.899649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.090 qpair failed and we were unable to recover it. 00:36:02.090 [2024-11-02 14:51:53.899819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.090 [2024-11-02 14:51:53.899844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.090 qpair failed and we were unable to recover it. 00:36:02.090 [2024-11-02 14:51:53.900045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.090 [2024-11-02 14:51:53.900074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.090 qpair failed and we were unable to recover it. 00:36:02.090 [2024-11-02 14:51:53.900230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.090 [2024-11-02 14:51:53.900263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.090 qpair failed and we were unable to recover it. 00:36:02.090 [2024-11-02 14:51:53.900435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.090 [2024-11-02 14:51:53.900460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.091 qpair failed and we were unable to recover it. 00:36:02.091 [2024-11-02 14:51:53.900607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.091 [2024-11-02 14:51:53.900633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.091 qpair failed and we were unable to recover it. 00:36:02.091 [2024-11-02 14:51:53.900784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.091 [2024-11-02 14:51:53.900815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.091 qpair failed and we were unable to recover it. 00:36:02.091 [2024-11-02 14:51:53.900967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.091 [2024-11-02 14:51:53.901010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.091 qpair failed and we were unable to recover it. 
00:36:02.091 [2024-11-02 14:51:53.901148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.091 [2024-11-02 14:51:53.901178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420
00:36:02.091 qpair failed and we were unable to recover it.
[... the same three messages repeat for every subsequent reconnect attempt logged through 14:51:53.941: posix_sock_create reports connect() failed with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420, and each time the qpair fails without recovering ...]
00:36:02.096 [2024-11-02 14:51:53.941762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.096 [2024-11-02 14:51:53.941788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.096 qpair failed and we were unable to recover it. 00:36:02.096 [2024-11-02 14:51:53.941940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.096 [2024-11-02 14:51:53.941968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.096 qpair failed and we were unable to recover it. 00:36:02.096 [2024-11-02 14:51:53.942161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.096 [2024-11-02 14:51:53.942189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.096 qpair failed and we were unable to recover it. 00:36:02.096 [2024-11-02 14:51:53.942361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.096 [2024-11-02 14:51:53.942387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.096 qpair failed and we were unable to recover it. 00:36:02.096 [2024-11-02 14:51:53.942514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.096 [2024-11-02 14:51:53.942540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.096 qpair failed and we were unable to recover it. 00:36:02.096 [2024-11-02 14:51:53.942763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.096 [2024-11-02 14:51:53.942788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.096 qpair failed and we were unable to recover it. 00:36:02.096 [2024-11-02 14:51:53.942939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.096 [2024-11-02 14:51:53.942965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.096 qpair failed and we were unable to recover it. 00:36:02.096 [2024-11-02 14:51:53.943116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.096 [2024-11-02 14:51:53.943143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.096 qpair failed and we were unable to recover it. 00:36:02.096 [2024-11-02 14:51:53.943328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.096 [2024-11-02 14:51:53.943358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.096 qpair failed and we were unable to recover it. 00:36:02.096 [2024-11-02 14:51:53.943604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.096 [2024-11-02 14:51:53.943629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.096 qpair failed and we were unable to recover it. 
00:36:02.096 [2024-11-02 14:51:53.943825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.096 [2024-11-02 14:51:53.943855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.096 qpair failed and we were unable to recover it. 00:36:02.096 [2024-11-02 14:51:53.944030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.097 [2024-11-02 14:51:53.944056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.097 qpair failed and we were unable to recover it. 00:36:02.097 [2024-11-02 14:51:53.944262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.097 [2024-11-02 14:51:53.944292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.097 qpair failed and we were unable to recover it. 00:36:02.097 [2024-11-02 14:51:53.944459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.097 [2024-11-02 14:51:53.944486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.097 qpair failed and we were unable to recover it. 00:36:02.097 [2024-11-02 14:51:53.944655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.097 [2024-11-02 14:51:53.944684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.097 qpair failed and we were unable to recover it. 00:36:02.097 [2024-11-02 14:51:53.944838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.097 [2024-11-02 14:51:53.944864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.097 qpair failed and we were unable to recover it. 00:36:02.097 [2024-11-02 14:51:53.945020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.097 [2024-11-02 14:51:53.945046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.097 qpair failed and we were unable to recover it. 00:36:02.097 [2024-11-02 14:51:53.945169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.097 [2024-11-02 14:51:53.945196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.097 qpair failed and we were unable to recover it. 00:36:02.097 [2024-11-02 14:51:53.945349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.097 [2024-11-02 14:51:53.945376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.097 qpair failed and we were unable to recover it. 00:36:02.097 [2024-11-02 14:51:53.945551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.097 [2024-11-02 14:51:53.945577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.097 qpair failed and we were unable to recover it. 
00:36:02.097 [2024-11-02 14:51:53.945773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.097 [2024-11-02 14:51:53.945801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.097 qpair failed and we were unable to recover it. 00:36:02.097 [2024-11-02 14:51:53.945954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.097 [2024-11-02 14:51:53.945984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.097 qpair failed and we were unable to recover it. 00:36:02.097 [2024-11-02 14:51:53.946137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.097 [2024-11-02 14:51:53.946166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.097 qpair failed and we were unable to recover it. 00:36:02.097 [2024-11-02 14:51:53.946298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.097 [2024-11-02 14:51:53.946325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.097 qpair failed and we were unable to recover it. 00:36:02.097 [2024-11-02 14:51:53.946472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.097 [2024-11-02 14:51:53.946497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.097 qpair failed and we were unable to recover it. 00:36:02.097 [2024-11-02 14:51:53.946675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.097 [2024-11-02 14:51:53.946704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.097 qpair failed and we were unable to recover it. 00:36:02.097 [2024-11-02 14:51:53.946829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.097 [2024-11-02 14:51:53.946858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.097 qpair failed and we were unable to recover it. 00:36:02.097 [2024-11-02 14:51:53.947051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.097 [2024-11-02 14:51:53.947077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.097 qpair failed and we were unable to recover it. 00:36:02.097 [2024-11-02 14:51:53.947248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.097 [2024-11-02 14:51:53.947291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.097 qpair failed and we were unable to recover it. 00:36:02.097 [2024-11-02 14:51:53.947447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.097 [2024-11-02 14:51:53.947473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.097 qpair failed and we were unable to recover it. 
00:36:02.097 [2024-11-02 14:51:53.947651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.097 [2024-11-02 14:51:53.947695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.097 qpair failed and we were unable to recover it. 00:36:02.097 [2024-11-02 14:51:53.947840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.097 [2024-11-02 14:51:53.947867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.097 qpair failed and we were unable to recover it. 00:36:02.097 [2024-11-02 14:51:53.948054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.097 [2024-11-02 14:51:53.948098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.097 qpair failed and we were unable to recover it. 00:36:02.097 [2024-11-02 14:51:53.948288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.097 [2024-11-02 14:51:53.948325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.097 qpair failed and we were unable to recover it. 00:36:02.097 [2024-11-02 14:51:53.948457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.097 [2024-11-02 14:51:53.948486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.097 qpair failed and we were unable to recover it. 00:36:02.097 [2024-11-02 14:51:53.948658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.097 [2024-11-02 14:51:53.948685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.097 qpair failed and we were unable to recover it. 00:36:02.097 [2024-11-02 14:51:53.948842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.097 [2024-11-02 14:51:53.948868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.097 qpair failed and we were unable to recover it. 00:36:02.097 [2024-11-02 14:51:53.948993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.097 [2024-11-02 14:51:53.949019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.097 qpair failed and we were unable to recover it. 00:36:02.097 [2024-11-02 14:51:53.949191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.097 [2024-11-02 14:51:53.949220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.097 qpair failed and we were unable to recover it. 00:36:02.097 [2024-11-02 14:51:53.949389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.097 [2024-11-02 14:51:53.949416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.097 qpair failed and we were unable to recover it. 
00:36:02.097 [2024-11-02 14:51:53.949610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.097 [2024-11-02 14:51:53.949639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.097 qpair failed and we were unable to recover it. 00:36:02.097 [2024-11-02 14:51:53.949839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.097 [2024-11-02 14:51:53.949868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.097 qpair failed and we were unable to recover it. 00:36:02.097 [2024-11-02 14:51:53.950038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.097 [2024-11-02 14:51:53.950066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.097 qpair failed and we were unable to recover it. 00:36:02.097 [2024-11-02 14:51:53.950208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.097 [2024-11-02 14:51:53.950236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.097 qpair failed and we were unable to recover it. 00:36:02.097 [2024-11-02 14:51:53.950404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.097 [2024-11-02 14:51:53.950431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.097 qpair failed and we were unable to recover it. 00:36:02.097 [2024-11-02 14:51:53.950582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.097 [2024-11-02 14:51:53.950609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.097 qpair failed and we were unable to recover it. 00:36:02.097 [2024-11-02 14:51:53.950783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.097 [2024-11-02 14:51:53.950812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.097 qpair failed and we were unable to recover it. 00:36:02.097 [2024-11-02 14:51:53.950976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.097 [2024-11-02 14:51:53.951002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.097 qpair failed and we were unable to recover it. 00:36:02.097 [2024-11-02 14:51:53.951201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.097 [2024-11-02 14:51:53.951230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.097 qpair failed and we were unable to recover it. 00:36:02.097 [2024-11-02 14:51:53.951413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.098 [2024-11-02 14:51:53.951440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.098 qpair failed and we were unable to recover it. 
00:36:02.098 [2024-11-02 14:51:53.951594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.098 [2024-11-02 14:51:53.951620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.098 qpair failed and we were unable to recover it. 00:36:02.098 [2024-11-02 14:51:53.951792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.098 [2024-11-02 14:51:53.951819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.098 qpair failed and we were unable to recover it. 00:36:02.098 [2024-11-02 14:51:53.951975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.098 [2024-11-02 14:51:53.952019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.098 qpair failed and we were unable to recover it. 00:36:02.098 [2024-11-02 14:51:53.952190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.098 [2024-11-02 14:51:53.952216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.098 qpair failed and we were unable to recover it. 00:36:02.098 [2024-11-02 14:51:53.952355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.098 [2024-11-02 14:51:53.952382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.098 qpair failed and we were unable to recover it. 00:36:02.098 [2024-11-02 14:51:53.952578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.098 [2024-11-02 14:51:53.952619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.098 qpair failed and we were unable to recover it. 00:36:02.098 [2024-11-02 14:51:53.952791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.098 [2024-11-02 14:51:53.952820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.098 qpair failed and we were unable to recover it. 00:36:02.098 [2024-11-02 14:51:53.952986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.098 [2024-11-02 14:51:53.953013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.098 qpair failed and we were unable to recover it. 00:36:02.098 [2024-11-02 14:51:53.953161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.098 [2024-11-02 14:51:53.953204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.098 qpair failed and we were unable to recover it. 00:36:02.098 [2024-11-02 14:51:53.953384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.098 [2024-11-02 14:51:53.953412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.098 qpair failed and we were unable to recover it. 
00:36:02.098 [2024-11-02 14:51:53.953613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.098 [2024-11-02 14:51:53.953656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.098 qpair failed and we were unable to recover it. 00:36:02.098 [2024-11-02 14:51:53.953814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.098 [2024-11-02 14:51:53.953859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.098 qpair failed and we were unable to recover it. 00:36:02.098 [2024-11-02 14:51:53.954029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.098 [2024-11-02 14:51:53.954073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.098 qpair failed and we were unable to recover it. 00:36:02.098 [2024-11-02 14:51:53.954269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.098 [2024-11-02 14:51:53.954314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.098 qpair failed and we were unable to recover it. 00:36:02.098 [2024-11-02 14:51:53.954492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.098 [2024-11-02 14:51:53.954539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.098 qpair failed and we were unable to recover it. 00:36:02.098 [2024-11-02 14:51:53.954737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.098 [2024-11-02 14:51:53.954781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.098 qpair failed and we were unable to recover it. 00:36:02.098 [2024-11-02 14:51:53.954938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.098 [2024-11-02 14:51:53.954965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.098 qpair failed and we were unable to recover it. 00:36:02.098 [2024-11-02 14:51:53.955115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.098 [2024-11-02 14:51:53.955141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.098 qpair failed and we were unable to recover it. 00:36:02.098 [2024-11-02 14:51:53.955294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.098 [2024-11-02 14:51:53.955328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.098 qpair failed and we were unable to recover it. 00:36:02.098 [2024-11-02 14:51:53.955455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.098 [2024-11-02 14:51:53.955482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.098 qpair failed and we were unable to recover it. 
00:36:02.098 [2024-11-02 14:51:53.955635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.098 [2024-11-02 14:51:53.955662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.098 qpair failed and we were unable to recover it. 00:36:02.098 [2024-11-02 14:51:53.955886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.098 [2024-11-02 14:51:53.955912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.098 qpair failed and we were unable to recover it. 00:36:02.098 [2024-11-02 14:51:53.956102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.098 [2024-11-02 14:51:53.956129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.098 qpair failed and we were unable to recover it. 00:36:02.098 [2024-11-02 14:51:53.956282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.098 [2024-11-02 14:51:53.956309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.098 qpair failed and we were unable to recover it. 00:36:02.098 [2024-11-02 14:51:53.956510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.098 [2024-11-02 14:51:53.956553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.098 qpair failed and we were unable to recover it. 00:36:02.098 [2024-11-02 14:51:53.956701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.098 [2024-11-02 14:51:53.956744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.098 qpair failed and we were unable to recover it. 00:36:02.098 [2024-11-02 14:51:53.956925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.098 [2024-11-02 14:51:53.956952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.098 qpair failed and we were unable to recover it. 00:36:02.098 [2024-11-02 14:51:53.957101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.098 [2024-11-02 14:51:53.957127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.098 qpair failed and we were unable to recover it. 00:36:02.098 [2024-11-02 14:51:53.957286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.098 [2024-11-02 14:51:53.957322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.098 qpair failed and we were unable to recover it. 00:36:02.098 [2024-11-02 14:51:53.957502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.098 [2024-11-02 14:51:53.957546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.098 qpair failed and we were unable to recover it. 
00:36:02.098 [2024-11-02 14:51:53.957717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.098 [2024-11-02 14:51:53.957761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.098 qpair failed and we were unable to recover it. 00:36:02.098 [2024-11-02 14:51:53.957916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.098 [2024-11-02 14:51:53.957942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.098 qpair failed and we were unable to recover it. 00:36:02.098 [2024-11-02 14:51:53.958078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.098 [2024-11-02 14:51:53.958105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.098 qpair failed and we were unable to recover it. 00:36:02.098 [2024-11-02 14:51:53.958288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.098 [2024-11-02 14:51:53.958316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.098 qpair failed and we were unable to recover it. 00:36:02.098 [2024-11-02 14:51:53.958486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.098 [2024-11-02 14:51:53.958528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.098 qpair failed and we were unable to recover it. 00:36:02.098 [2024-11-02 14:51:53.958672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.098 [2024-11-02 14:51:53.958718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.098 qpair failed and we were unable to recover it. 00:36:02.098 [2024-11-02 14:51:53.958889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.098 [2024-11-02 14:51:53.958915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.098 qpair failed and we were unable to recover it. 00:36:02.099 [2024-11-02 14:51:53.959094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.099 [2024-11-02 14:51:53.959120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.099 qpair failed and we were unable to recover it. 00:36:02.099 [2024-11-02 14:51:53.959294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.099 [2024-11-02 14:51:53.959324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.099 qpair failed and we were unable to recover it. 00:36:02.099 [2024-11-02 14:51:53.959516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.099 [2024-11-02 14:51:53.959560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.099 qpair failed and we were unable to recover it. 
00:36:02.099 [2024-11-02 14:51:53.959723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.099 [2024-11-02 14:51:53.959767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.099 qpair failed and we were unable to recover it. 00:36:02.099 [2024-11-02 14:51:53.959945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.099 [2024-11-02 14:51:53.959970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.099 qpair failed and we were unable to recover it. 00:36:02.099 [2024-11-02 14:51:53.960095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.099 [2024-11-02 14:51:53.960123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.099 qpair failed and we were unable to recover it. 00:36:02.099 [2024-11-02 14:51:53.960273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.099 [2024-11-02 14:51:53.960300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.099 qpair failed and we were unable to recover it. 00:36:02.099 [2024-11-02 14:51:53.960476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.099 [2024-11-02 14:51:53.960520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:02.099 qpair failed and we were unable to recover it. 00:36:02.099 [2024-11-02 14:51:53.960691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.099 [2024-11-02 14:51:53.960723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.099 qpair failed and we were unable to recover it. 00:36:02.099 [2024-11-02 14:51:53.960891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.099 [2024-11-02 14:51:53.960920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.099 qpair failed and we were unable to recover it. 00:36:02.099 [2024-11-02 14:51:53.961112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.099 [2024-11-02 14:51:53.961141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.099 qpair failed and we were unable to recover it. 00:36:02.099 [2024-11-02 14:51:53.961277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.099 [2024-11-02 14:51:53.961306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.099 qpair failed and we were unable to recover it. 00:36:02.099 [2024-11-02 14:51:53.961478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.099 [2024-11-02 14:51:53.961508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.099 qpair failed and we were unable to recover it. 
00:36:02.099 [2024-11-02 14:51:53.961651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.099 [2024-11-02 14:51:53.961681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.099 qpair failed and we were unable to recover it. 00:36:02.099 [2024-11-02 14:51:53.961847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.099 [2024-11-02 14:51:53.961877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.099 qpair failed and we were unable to recover it. 00:36:02.099 [2024-11-02 14:51:53.962021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.099 [2024-11-02 14:51:53.962050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.099 qpair failed and we were unable to recover it. 00:36:02.099 [2024-11-02 14:51:53.962245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.099 [2024-11-02 14:51:53.962284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.099 qpair failed and we were unable to recover it. 00:36:02.099 [2024-11-02 14:51:53.962465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.099 [2024-11-02 14:51:53.962492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.099 qpair failed and we were unable to recover it. 00:36:02.099 [2024-11-02 14:51:53.962667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.099 [2024-11-02 14:51:53.962697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.099 qpair failed and we were unable to recover it. 00:36:02.099 [2024-11-02 14:51:53.962863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.099 [2024-11-02 14:51:53.962893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.099 qpair failed and we were unable to recover it. 00:36:02.099 [2024-11-02 14:51:53.963123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.099 [2024-11-02 14:51:53.963151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.099 qpair failed and we were unable to recover it. 00:36:02.099 [2024-11-02 14:51:53.963329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.099 [2024-11-02 14:51:53.963361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.099 qpair failed and we were unable to recover it. 00:36:02.099 [2024-11-02 14:51:53.963511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.099 [2024-11-02 14:51:53.963554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.099 qpair failed and we were unable to recover it. 
00:36:02.099 [2024-11-02 14:51:53.963742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.099 [2024-11-02 14:51:53.963770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.099 qpair failed and we were unable to recover it. 00:36:02.099 [2024-11-02 14:51:53.963999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.099 [2024-11-02 14:51:53.964027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.099 qpair failed and we were unable to recover it. 00:36:02.099 [2024-11-02 14:51:53.964155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.099 [2024-11-02 14:51:53.964185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.099 qpair failed and we were unable to recover it. 00:36:02.099 [2024-11-02 14:51:53.964360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.099 [2024-11-02 14:51:53.964386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.099 qpair failed and we were unable to recover it. 00:36:02.099 [2024-11-02 14:51:53.964561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.099 [2024-11-02 14:51:53.964603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.099 qpair failed and we were unable to recover it. 00:36:02.099 [2024-11-02 14:51:53.964803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.099 [2024-11-02 14:51:53.964828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.099 qpair failed and we were unable to recover it. 00:36:02.099 [2024-11-02 14:51:53.965020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.099 [2024-11-02 14:51:53.965050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.099 qpair failed and we were unable to recover it. 00:36:02.099 [2024-11-02 14:51:53.965211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.099 [2024-11-02 14:51:53.965240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.099 qpair failed and we were unable to recover it. 00:36:02.099 [2024-11-02 14:51:53.965432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.099 [2024-11-02 14:51:53.965458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.099 qpair failed and we were unable to recover it. 00:36:02.099 [2024-11-02 14:51:53.965583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.099 [2024-11-02 14:51:53.965610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.099 qpair failed and we were unable to recover it. 
00:36:02.099 [2024-11-02 14:51:53.965804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.099 [2024-11-02 14:51:53.965834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.099 qpair failed and we were unable to recover it. 00:36:02.099 [2024-11-02 14:51:53.966022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.100 [2024-11-02 14:51:53.966051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.100 qpair failed and we were unable to recover it. 00:36:02.100 [2024-11-02 14:51:53.966191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.100 [2024-11-02 14:51:53.966220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.100 qpair failed and we were unable to recover it. 00:36:02.100 [2024-11-02 14:51:53.966422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.100 [2024-11-02 14:51:53.966448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.100 qpair failed and we were unable to recover it. 00:36:02.100 [2024-11-02 14:51:53.966652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.100 [2024-11-02 14:51:53.966681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.100 qpair failed and we were unable to recover it. 00:36:02.100 [2024-11-02 14:51:53.966844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.100 [2024-11-02 14:51:53.966873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.100 qpair failed and we were unable to recover it. 00:36:02.100 [2024-11-02 14:51:53.967040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.100 [2024-11-02 14:51:53.967069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.100 qpair failed and we were unable to recover it. 00:36:02.100 [2024-11-02 14:51:53.967216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.100 [2024-11-02 14:51:53.967242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.100 qpair failed and we were unable to recover it. 00:36:02.100 [2024-11-02 14:51:53.967409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.100 [2024-11-02 14:51:53.967435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.100 qpair failed and we were unable to recover it. 00:36:02.100 [2024-11-02 14:51:53.967602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.100 [2024-11-02 14:51:53.967645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.100 qpair failed and we were unable to recover it. 
00:36:02.100 [2024-11-02 14:51:53.967801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:36:02.100 [2024-11-02 14:51:53.967826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 
00:36:02.100 qpair failed and we were unable to recover it. 
00:36:02.100-00:36:02.105 [... the same three-message sequence (posix.c:1055:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: sock connection error with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every reconnection attempt between 14:51:53.967 and 14:51:54.008, mostly on tqpair=0x7f54c8000b90 and for a short stretch on tqpair=0x7f54c0000b90 ...]
00:36:02.105 [2024-11-02 14:51:54.008593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.105 [2024-11-02 14:51:54.008618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.105 qpair failed and we were unable to recover it. 00:36:02.105 [2024-11-02 14:51:54.008746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.105 [2024-11-02 14:51:54.008771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.105 qpair failed and we were unable to recover it. 00:36:02.105 [2024-11-02 14:51:54.008948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.105 [2024-11-02 14:51:54.008977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.105 qpair failed and we were unable to recover it. 00:36:02.105 [2024-11-02 14:51:54.009157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.105 [2024-11-02 14:51:54.009185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.105 qpair failed and we were unable to recover it. 00:36:02.105 [2024-11-02 14:51:54.009332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.105 [2024-11-02 14:51:54.009359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.105 qpair failed and we were unable to recover it. 00:36:02.105 [2024-11-02 14:51:54.009511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.105 [2024-11-02 14:51:54.009557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.105 qpair failed and we were unable to recover it. 00:36:02.105 [2024-11-02 14:51:54.009693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.106 [2024-11-02 14:51:54.009721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.106 qpair failed and we were unable to recover it. 00:36:02.106 [2024-11-02 14:51:54.009858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.106 [2024-11-02 14:51:54.009887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.106 qpair failed and we were unable to recover it. 00:36:02.106 [2024-11-02 14:51:54.010085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.106 [2024-11-02 14:51:54.010111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.106 qpair failed and we were unable to recover it. 00:36:02.106 [2024-11-02 14:51:54.010281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.106 [2024-11-02 14:51:54.010324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.106 qpair failed and we were unable to recover it. 
00:36:02.106 [2024-11-02 14:51:54.010479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.106 [2024-11-02 14:51:54.010505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.106 qpair failed and we were unable to recover it. 00:36:02.106 [2024-11-02 14:51:54.010649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.106 [2024-11-02 14:51:54.010679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.106 qpair failed and we were unable to recover it. 00:36:02.106 [2024-11-02 14:51:54.010845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.106 [2024-11-02 14:51:54.010873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.106 qpair failed and we were unable to recover it. 00:36:02.106 [2024-11-02 14:51:54.011026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.106 [2024-11-02 14:51:54.011052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.106 qpair failed and we were unable to recover it. 00:36:02.106 [2024-11-02 14:51:54.011207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.106 [2024-11-02 14:51:54.011233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.106 qpair failed and we were unable to recover it. 00:36:02.106 [2024-11-02 14:51:54.011431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.106 [2024-11-02 14:51:54.011456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.106 qpair failed and we were unable to recover it. 00:36:02.106 [2024-11-02 14:51:54.011584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.106 [2024-11-02 14:51:54.011610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.106 qpair failed and we were unable to recover it. 00:36:02.106 [2024-11-02 14:51:54.011807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.106 [2024-11-02 14:51:54.011835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.106 qpair failed and we were unable to recover it. 00:36:02.106 [2024-11-02 14:51:54.012037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.106 [2024-11-02 14:51:54.012063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.106 qpair failed and we were unable to recover it. 00:36:02.106 [2024-11-02 14:51:54.012189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.106 [2024-11-02 14:51:54.012216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.106 qpair failed and we were unable to recover it. 
00:36:02.106 [2024-11-02 14:51:54.012414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.106 [2024-11-02 14:51:54.012441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.106 qpair failed and we were unable to recover it. 00:36:02.106 [2024-11-02 14:51:54.012613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.106 [2024-11-02 14:51:54.012642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.106 qpair failed and we were unable to recover it. 00:36:02.106 [2024-11-02 14:51:54.012782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.106 [2024-11-02 14:51:54.012811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.106 qpair failed and we were unable to recover it. 00:36:02.106 [2024-11-02 14:51:54.012948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.106 [2024-11-02 14:51:54.012978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.106 qpair failed and we were unable to recover it. 00:36:02.106 [2024-11-02 14:51:54.013149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.106 [2024-11-02 14:51:54.013174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.106 qpair failed and we were unable to recover it. 00:36:02.106 [2024-11-02 14:51:54.013366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.106 [2024-11-02 14:51:54.013395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.106 qpair failed and we were unable to recover it. 00:36:02.106 [2024-11-02 14:51:54.013578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.106 [2024-11-02 14:51:54.013628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.106 qpair failed and we were unable to recover it. 00:36:02.106 [2024-11-02 14:51:54.013766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.106 [2024-11-02 14:51:54.013796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.106 qpair failed and we were unable to recover it. 00:36:02.106 [2024-11-02 14:51:54.013989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.106 [2024-11-02 14:51:54.014015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.106 qpair failed and we were unable to recover it. 00:36:02.106 [2024-11-02 14:51:54.014170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.106 [2024-11-02 14:51:54.014196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.106 qpair failed and we were unable to recover it. 
00:36:02.106 [2024-11-02 14:51:54.014344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.106 [2024-11-02 14:51:54.014370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.106 qpair failed and we were unable to recover it. 00:36:02.106 [2024-11-02 14:51:54.014570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.106 [2024-11-02 14:51:54.014598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.106 qpair failed and we were unable to recover it. 00:36:02.106 [2024-11-02 14:51:54.014758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.106 [2024-11-02 14:51:54.014784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.106 qpair failed and we were unable to recover it. 00:36:02.106 [2024-11-02 14:51:54.014911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.106 [2024-11-02 14:51:54.014937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.106 qpair failed and we were unable to recover it. 00:36:02.106 [2024-11-02 14:51:54.015093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.106 [2024-11-02 14:51:54.015119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.106 qpair failed and we were unable to recover it. 00:36:02.106 [2024-11-02 14:51:54.015281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.106 [2024-11-02 14:51:54.015329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.106 qpair failed and we were unable to recover it. 00:36:02.106 [2024-11-02 14:51:54.015452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.106 [2024-11-02 14:51:54.015477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.106 qpair failed and we were unable to recover it. 00:36:02.106 [2024-11-02 14:51:54.015648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.106 [2024-11-02 14:51:54.015677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.106 qpair failed and we were unable to recover it. 00:36:02.106 [2024-11-02 14:51:54.015885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.106 [2024-11-02 14:51:54.015912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.106 qpair failed and we were unable to recover it. 00:36:02.106 [2024-11-02 14:51:54.016036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.106 [2024-11-02 14:51:54.016062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.106 qpair failed and we were unable to recover it. 
00:36:02.106 [2024-11-02 14:51:54.016262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.106 [2024-11-02 14:51:54.016306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.106 qpair failed and we were unable to recover it. 00:36:02.106 [2024-11-02 14:51:54.016461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.106 [2024-11-02 14:51:54.016487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.106 qpair failed and we were unable to recover it. 00:36:02.106 [2024-11-02 14:51:54.016663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.106 [2024-11-02 14:51:54.016691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.106 qpair failed and we were unable to recover it. 00:36:02.106 [2024-11-02 14:51:54.016828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.106 [2024-11-02 14:51:54.016857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.106 qpair failed and we were unable to recover it. 00:36:02.106 [2024-11-02 14:51:54.017045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.107 [2024-11-02 14:51:54.017072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.107 qpair failed and we were unable to recover it. 00:36:02.107 [2024-11-02 14:51:54.017273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.107 [2024-11-02 14:51:54.017304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.107 qpair failed and we were unable to recover it. 00:36:02.107 [2024-11-02 14:51:54.017451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.107 [2024-11-02 14:51:54.017480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.107 qpair failed and we were unable to recover it. 00:36:02.107 [2024-11-02 14:51:54.017661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.107 [2024-11-02 14:51:54.017687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.107 qpair failed and we were unable to recover it. 00:36:02.107 [2024-11-02 14:51:54.017830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.107 [2024-11-02 14:51:54.017855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.107 qpair failed and we were unable to recover it. 00:36:02.107 [2024-11-02 14:51:54.017988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.107 [2024-11-02 14:51:54.018032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.107 qpair failed and we were unable to recover it. 
00:36:02.107 [2024-11-02 14:51:54.018205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.107 [2024-11-02 14:51:54.018233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.107 qpair failed and we were unable to recover it. 00:36:02.107 [2024-11-02 14:51:54.018410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.107 [2024-11-02 14:51:54.018436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.107 qpair failed and we were unable to recover it. 00:36:02.107 [2024-11-02 14:51:54.018553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.107 [2024-11-02 14:51:54.018578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.107 qpair failed and we were unable to recover it. 00:36:02.107 [2024-11-02 14:51:54.018739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.107 [2024-11-02 14:51:54.018783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.107 qpair failed and we were unable to recover it. 00:36:02.107 [2024-11-02 14:51:54.018973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.107 [2024-11-02 14:51:54.019002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.107 qpair failed and we were unable to recover it. 00:36:02.107 [2024-11-02 14:51:54.019161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.107 [2024-11-02 14:51:54.019189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.107 qpair failed and we were unable to recover it. 00:36:02.107 [2024-11-02 14:51:54.019345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.107 [2024-11-02 14:51:54.019372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.107 qpair failed and we were unable to recover it. 00:36:02.107 [2024-11-02 14:51:54.019525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.107 [2024-11-02 14:51:54.019566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.107 qpair failed and we were unable to recover it. 00:36:02.107 [2024-11-02 14:51:54.019729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.107 [2024-11-02 14:51:54.019758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.107 qpair failed and we were unable to recover it. 00:36:02.107 [2024-11-02 14:51:54.019898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.107 [2024-11-02 14:51:54.019925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.107 qpair failed and we were unable to recover it. 
00:36:02.107 [2024-11-02 14:51:54.020079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.107 [2024-11-02 14:51:54.020103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.107 qpair failed and we were unable to recover it. 00:36:02.107 [2024-11-02 14:51:54.020216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.107 [2024-11-02 14:51:54.020241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.107 qpair failed and we were unable to recover it. 00:36:02.107 [2024-11-02 14:51:54.020381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.107 [2024-11-02 14:51:54.020407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.107 qpair failed and we were unable to recover it. 00:36:02.107 [2024-11-02 14:51:54.020548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.107 [2024-11-02 14:51:54.020576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.107 qpair failed and we were unable to recover it. 00:36:02.107 [2024-11-02 14:51:54.020768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.107 [2024-11-02 14:51:54.020793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.107 qpair failed and we were unable to recover it. 00:36:02.107 [2024-11-02 14:51:54.020969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.107 [2024-11-02 14:51:54.020998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.107 qpair failed and we were unable to recover it. 00:36:02.107 [2024-11-02 14:51:54.021178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.107 [2024-11-02 14:51:54.021204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.107 qpair failed and we were unable to recover it. 00:36:02.107 [2024-11-02 14:51:54.021367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.107 [2024-11-02 14:51:54.021394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.107 qpair failed and we were unable to recover it. 00:36:02.107 [2024-11-02 14:51:54.021568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.107 [2024-11-02 14:51:54.021594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.107 qpair failed and we were unable to recover it. 00:36:02.107 [2024-11-02 14:51:54.021743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.107 [2024-11-02 14:51:54.021772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.107 qpair failed and we were unable to recover it. 
00:36:02.107 [2024-11-02 14:51:54.021992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.107 [2024-11-02 14:51:54.022021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.107 qpair failed and we were unable to recover it. 00:36:02.107 [2024-11-02 14:51:54.022147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.107 [2024-11-02 14:51:54.022174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.107 qpair failed and we were unable to recover it. 00:36:02.107 [2024-11-02 14:51:54.022371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.107 [2024-11-02 14:51:54.022397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.107 qpair failed and we were unable to recover it. 00:36:02.107 [2024-11-02 14:51:54.022546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.107 [2024-11-02 14:51:54.022572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.107 qpair failed and we were unable to recover it. 00:36:02.107 [2024-11-02 14:51:54.022695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.107 [2024-11-02 14:51:54.022722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.107 qpair failed and we were unable to recover it. 00:36:02.107 [2024-11-02 14:51:54.022867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.107 [2024-11-02 14:51:54.022897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.107 qpair failed and we were unable to recover it. 00:36:02.107 [2024-11-02 14:51:54.023073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.107 [2024-11-02 14:51:54.023099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.107 qpair failed and we were unable to recover it. 00:36:02.107 [2024-11-02 14:51:54.023273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.107 [2024-11-02 14:51:54.023319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.107 qpair failed and we were unable to recover it. 00:36:02.107 [2024-11-02 14:51:54.023493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.107 [2024-11-02 14:51:54.023519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.107 qpair failed and we were unable to recover it. 00:36:02.107 [2024-11-02 14:51:54.023655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.107 [2024-11-02 14:51:54.023684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.107 qpair failed and we were unable to recover it. 
00:36:02.107 [2024-11-02 14:51:54.023833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.107 [2024-11-02 14:51:54.023861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.107 qpair failed and we were unable to recover it. 00:36:02.107 [2024-11-02 14:51:54.024031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.107 [2024-11-02 14:51:54.024061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.107 qpair failed and we were unable to recover it. 00:36:02.107 [2024-11-02 14:51:54.024223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.107 [2024-11-02 14:51:54.024250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.107 qpair failed and we were unable to recover it. 00:36:02.108 [2024-11-02 14:51:54.024434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.108 [2024-11-02 14:51:54.024459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.108 qpair failed and we were unable to recover it. 00:36:02.108 [2024-11-02 14:51:54.024573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.108 [2024-11-02 14:51:54.024598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.108 qpair failed and we were unable to recover it. 00:36:02.108 [2024-11-02 14:51:54.024745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.108 [2024-11-02 14:51:54.024789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.108 qpair failed and we were unable to recover it. 00:36:02.108 [2024-11-02 14:51:54.024991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.108 [2024-11-02 14:51:54.025017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.108 qpair failed and we were unable to recover it. 00:36:02.108 [2024-11-02 14:51:54.025207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.108 [2024-11-02 14:51:54.025235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.108 qpair failed and we were unable to recover it. 00:36:02.108 [2024-11-02 14:51:54.025423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.108 [2024-11-02 14:51:54.025448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.108 qpair failed and we were unable to recover it. 00:36:02.108 [2024-11-02 14:51:54.025627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.108 [2024-11-02 14:51:54.025656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.108 qpair failed and we were unable to recover it. 
00:36:02.108 [2024-11-02 14:51:54.025845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.108 [2024-11-02 14:51:54.025874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.108 qpair failed and we were unable to recover it. 00:36:02.108 [2024-11-02 14:51:54.026044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.108 [2024-11-02 14:51:54.026073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.108 qpair failed and we were unable to recover it. 00:36:02.108 [2024-11-02 14:51:54.026298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.108 [2024-11-02 14:51:54.026324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.108 qpair failed and we were unable to recover it. 00:36:02.108 [2024-11-02 14:51:54.026483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.108 [2024-11-02 14:51:54.026511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.108 qpair failed and we were unable to recover it. 00:36:02.108 [2024-11-02 14:51:54.026649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.108 [2024-11-02 14:51:54.026679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.108 qpair failed and we were unable to recover it. 00:36:02.108 [2024-11-02 14:51:54.026871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.108 [2024-11-02 14:51:54.026900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.108 qpair failed and we were unable to recover it. 00:36:02.108 [2024-11-02 14:51:54.027044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.108 [2024-11-02 14:51:54.027070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.108 qpair failed and we were unable to recover it. 00:36:02.108 [2024-11-02 14:51:54.027244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.108 [2024-11-02 14:51:54.027297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.108 qpair failed and we were unable to recover it. 00:36:02.108 [2024-11-02 14:51:54.027439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.108 [2024-11-02 14:51:54.027469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.108 qpair failed and we were unable to recover it. 00:36:02.108 [2024-11-02 14:51:54.027636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.108 [2024-11-02 14:51:54.027662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.108 qpair failed and we were unable to recover it. 
00:36:02.108 [2024-11-02 14:51:54.027790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.108 [2024-11-02 14:51:54.027816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.108 qpair failed and we were unable to recover it. 00:36:02.108 [2024-11-02 14:51:54.027989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.108 [2024-11-02 14:51:54.028015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.108 qpair failed and we were unable to recover it. 00:36:02.108 [2024-11-02 14:51:54.028189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.108 [2024-11-02 14:51:54.028215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.108 qpair failed and we were unable to recover it. 00:36:02.108 [2024-11-02 14:51:54.028351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.108 [2024-11-02 14:51:54.028379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.108 qpair failed and we were unable to recover it. 00:36:02.108 [2024-11-02 14:51:54.028508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.108 [2024-11-02 14:51:54.028534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.108 qpair failed and we were unable to recover it. 00:36:02.108 [2024-11-02 14:51:54.028728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.108 [2024-11-02 14:51:54.028757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.108 qpair failed and we were unable to recover it. 00:36:02.108 [2024-11-02 14:51:54.028894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.108 [2024-11-02 14:51:54.028925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.108 qpair failed and we were unable to recover it. 00:36:02.108 [2024-11-02 14:51:54.029098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.108 [2024-11-02 14:51:54.029125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.108 qpair failed and we were unable to recover it. 00:36:02.108 [2024-11-02 14:51:54.029303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.108 [2024-11-02 14:51:54.029329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.108 qpair failed and we were unable to recover it. 00:36:02.108 [2024-11-02 14:51:54.029480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.108 [2024-11-02 14:51:54.029506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.108 qpair failed and we were unable to recover it. 
00:36:02.108 [2024-11-02 14:51:54.029629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.108 [2024-11-02 14:51:54.029655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.108 qpair failed and we were unable to recover it. 00:36:02.108 [2024-11-02 14:51:54.029832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.108 [2024-11-02 14:51:54.029860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.108 qpair failed and we were unable to recover it. 00:36:02.108 [2024-11-02 14:51:54.030032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.108 [2024-11-02 14:51:54.030058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.108 qpair failed and we were unable to recover it. 00:36:02.108 [2024-11-02 14:51:54.030212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.108 [2024-11-02 14:51:54.030238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.108 qpair failed and we were unable to recover it. 00:36:02.108 [2024-11-02 14:51:54.030375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.108 [2024-11-02 14:51:54.030400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.108 qpair failed and we were unable to recover it. 00:36:02.108 [2024-11-02 14:51:54.030548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.108 [2024-11-02 14:51:54.030577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.108 qpair failed and we were unable to recover it. 00:36:02.108 [2024-11-02 14:51:54.030723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.109 [2024-11-02 14:51:54.030749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.109 qpair failed and we were unable to recover it. 00:36:02.109 [2024-11-02 14:51:54.030909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.109 [2024-11-02 14:51:54.030938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.109 qpair failed and we were unable to recover it. 00:36:02.109 [2024-11-02 14:51:54.031101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.109 [2024-11-02 14:51:54.031129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.109 qpair failed and we were unable to recover it. 00:36:02.109 [2024-11-02 14:51:54.031268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.109 [2024-11-02 14:51:54.031313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.109 qpair failed and we were unable to recover it. 
00:36:02.109 [2024-11-02 14:51:54.031460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.109 [2024-11-02 14:51:54.031487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.109 qpair failed and we were unable to recover it. 00:36:02.109 [2024-11-02 14:51:54.031662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.109 [2024-11-02 14:51:54.031691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.109 qpair failed and we were unable to recover it. 00:36:02.109 [2024-11-02 14:51:54.031892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.109 [2024-11-02 14:51:54.031921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.109 qpair failed and we were unable to recover it. 00:36:02.109 [2024-11-02 14:51:54.032080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.109 [2024-11-02 14:51:54.032108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.109 qpair failed and we were unable to recover it. 00:36:02.109 [2024-11-02 14:51:54.032290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.109 [2024-11-02 14:51:54.032317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.109 qpair failed and we were unable to recover it. 00:36:02.109 [2024-11-02 14:51:54.032512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.109 [2024-11-02 14:51:54.032542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.109 qpair failed and we were unable to recover it. 00:36:02.109 [2024-11-02 14:51:54.032739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.109 [2024-11-02 14:51:54.032767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.109 qpair failed and we were unable to recover it. 00:36:02.109 [2024-11-02 14:51:54.032903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.109 [2024-11-02 14:51:54.032930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.109 qpair failed and we were unable to recover it. 00:36:02.109 [2024-11-02 14:51:54.033096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.109 [2024-11-02 14:51:54.033122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.109 qpair failed and we were unable to recover it. 00:36:02.109 [2024-11-02 14:51:54.033300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.109 [2024-11-02 14:51:54.033331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.109 qpair failed and we were unable to recover it. 
00:36:02.109 [2024-11-02 14:51:54.033529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.109 [2024-11-02 14:51:54.033555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420
00:36:02.109 qpair failed and we were unable to recover it.
00:36:02.398 [2024-11-02 14:51:54.033737 through 14:51:54.072494] posix.c:1055:posix_sock_create and nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock report the same failure on every subsequent attempt: connect() failed, errno = 111, followed by sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420, each ending with "qpair failed and we were unable to recover it."
00:36:02.398 [2024-11-02 14:51:54.072630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.398 [2024-11-02 14:51:54.072656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.398 qpair failed and we were unable to recover it. 00:36:02.398 [2024-11-02 14:51:54.072783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.398 [2024-11-02 14:51:54.072828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.398 qpair failed and we were unable to recover it. 00:36:02.398 [2024-11-02 14:51:54.073013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.398 [2024-11-02 14:51:54.073040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.398 qpair failed and we were unable to recover it. 00:36:02.398 [2024-11-02 14:51:54.073167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.398 [2024-11-02 14:51:54.073193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.398 qpair failed and we were unable to recover it. 00:36:02.398 [2024-11-02 14:51:54.073373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.398 [2024-11-02 14:51:54.073400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.398 qpair failed and we were unable to recover it. 00:36:02.398 [2024-11-02 14:51:54.073585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.398 [2024-11-02 14:51:54.073635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.398 qpair failed and we were unable to recover it. 00:36:02.398 [2024-11-02 14:51:54.073847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.398 [2024-11-02 14:51:54.073873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.398 qpair failed and we were unable to recover it. 00:36:02.398 [2024-11-02 14:51:54.074006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.398 [2024-11-02 14:51:54.074031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.398 qpair failed and we were unable to recover it. 00:36:02.398 [2024-11-02 14:51:54.074213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.398 [2024-11-02 14:51:54.074243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.398 qpair failed and we were unable to recover it. 00:36:02.398 [2024-11-02 14:51:54.074442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.398 [2024-11-02 14:51:54.074471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.398 qpair failed and we were unable to recover it. 
00:36:02.398 [2024-11-02 14:51:54.074634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.398 [2024-11-02 14:51:54.074664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.398 qpair failed and we were unable to recover it. 00:36:02.398 [2024-11-02 14:51:54.074856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.398 [2024-11-02 14:51:54.074884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.398 qpair failed and we were unable to recover it. 00:36:02.398 [2024-11-02 14:51:54.075034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.398 [2024-11-02 14:51:54.075064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.398 qpair failed and we were unable to recover it. 00:36:02.398 [2024-11-02 14:51:54.075251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.398 [2024-11-02 14:51:54.075290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.398 qpair failed and we were unable to recover it. 00:36:02.398 [2024-11-02 14:51:54.075453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.398 [2024-11-02 14:51:54.075488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.398 qpair failed and we were unable to recover it. 00:36:02.398 [2024-11-02 14:51:54.075629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.398 [2024-11-02 14:51:54.075671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.398 qpair failed and we were unable to recover it. 00:36:02.398 [2024-11-02 14:51:54.075822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.398 [2024-11-02 14:51:54.075847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.398 qpair failed and we were unable to recover it. 00:36:02.398 [2024-11-02 14:51:54.075986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.398 [2024-11-02 14:51:54.076012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.398 qpair failed and we were unable to recover it. 00:36:02.398 [2024-11-02 14:51:54.076139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.398 [2024-11-02 14:51:54.076164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.398 qpair failed and we were unable to recover it. 00:36:02.398 [2024-11-02 14:51:54.076353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.398 [2024-11-02 14:51:54.076384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.398 qpair failed and we were unable to recover it. 
00:36:02.398 [2024-11-02 14:51:54.076511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.398 [2024-11-02 14:51:54.076538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.398 qpair failed and we were unable to recover it. 00:36:02.398 [2024-11-02 14:51:54.076733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.398 [2024-11-02 14:51:54.076772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.398 qpair failed and we were unable to recover it. 00:36:02.398 [2024-11-02 14:51:54.076928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.398 [2024-11-02 14:51:54.076954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.398 qpair failed and we were unable to recover it. 00:36:02.398 [2024-11-02 14:51:54.077100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.398 [2024-11-02 14:51:54.077126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.398 qpair failed and we were unable to recover it. 00:36:02.398 [2024-11-02 14:51:54.077310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.398 [2024-11-02 14:51:54.077338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.398 qpair failed and we were unable to recover it. 00:36:02.398 [2024-11-02 14:51:54.077522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.398 [2024-11-02 14:51:54.077576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.398 qpair failed and we were unable to recover it. 00:36:02.398 [2024-11-02 14:51:54.077763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.398 [2024-11-02 14:51:54.077793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.398 qpair failed and we were unable to recover it. 00:36:02.398 [2024-11-02 14:51:54.077960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.398 [2024-11-02 14:51:54.077989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.398 qpair failed and we were unable to recover it. 00:36:02.398 [2024-11-02 14:51:54.078161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.398 [2024-11-02 14:51:54.078187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.398 qpair failed and we were unable to recover it. 00:36:02.398 [2024-11-02 14:51:54.078356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.398 [2024-11-02 14:51:54.078382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.398 qpair failed and we were unable to recover it. 
00:36:02.398 [2024-11-02 14:51:54.078532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.398 [2024-11-02 14:51:54.078574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.398 qpair failed and we were unable to recover it. 00:36:02.398 [2024-11-02 14:51:54.078724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.398 [2024-11-02 14:51:54.078753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.398 qpair failed and we were unable to recover it. 00:36:02.398 [2024-11-02 14:51:54.078898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.398 [2024-11-02 14:51:54.078923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.398 qpair failed and we were unable to recover it. 00:36:02.398 [2024-11-02 14:51:54.079086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.398 [2024-11-02 14:51:54.079113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.398 qpair failed and we were unable to recover it. 00:36:02.398 [2024-11-02 14:51:54.079301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.398 [2024-11-02 14:51:54.079329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.398 qpair failed and we were unable to recover it. 00:36:02.399 [2024-11-02 14:51:54.079477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.399 [2024-11-02 14:51:54.079503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.399 qpair failed and we were unable to recover it. 00:36:02.399 [2024-11-02 14:51:54.079629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.399 [2024-11-02 14:51:54.079654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.399 qpair failed and we were unable to recover it. 00:36:02.399 [2024-11-02 14:51:54.079833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.399 [2024-11-02 14:51:54.079862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.399 qpair failed and we were unable to recover it. 00:36:02.399 [2024-11-02 14:51:54.080049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.399 [2024-11-02 14:51:54.080078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.399 qpair failed and we were unable to recover it. 00:36:02.399 [2024-11-02 14:51:54.080248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.399 [2024-11-02 14:51:54.080303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.399 qpair failed and we were unable to recover it. 
00:36:02.399 [2024-11-02 14:51:54.080434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.399 [2024-11-02 14:51:54.080459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.399 qpair failed and we were unable to recover it. 00:36:02.399 [2024-11-02 14:51:54.080622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.399 [2024-11-02 14:51:54.080648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.399 qpair failed and we were unable to recover it. 00:36:02.399 [2024-11-02 14:51:54.080779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.399 [2024-11-02 14:51:54.080806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.399 qpair failed and we were unable to recover it. 00:36:02.399 [2024-11-02 14:51:54.081009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.399 [2024-11-02 14:51:54.081038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.399 qpair failed and we were unable to recover it. 00:36:02.399 [2024-11-02 14:51:54.081184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.399 [2024-11-02 14:51:54.081210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.399 qpair failed and we were unable to recover it. 00:36:02.399 [2024-11-02 14:51:54.081368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.399 [2024-11-02 14:51:54.081395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.399 qpair failed and we were unable to recover it. 00:36:02.399 [2024-11-02 14:51:54.081545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.399 [2024-11-02 14:51:54.081590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.399 qpair failed and we were unable to recover it. 00:36:02.399 [2024-11-02 14:51:54.081758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.399 [2024-11-02 14:51:54.081787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.399 qpair failed and we were unable to recover it. 00:36:02.399 [2024-11-02 14:51:54.081951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.399 [2024-11-02 14:51:54.081976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.399 qpair failed and we were unable to recover it. 00:36:02.399 [2024-11-02 14:51:54.082098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.399 [2024-11-02 14:51:54.082141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.399 qpair failed and we were unable to recover it. 
00:36:02.399 [2024-11-02 14:51:54.082284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.399 [2024-11-02 14:51:54.082328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.399 qpair failed and we were unable to recover it. 00:36:02.399 [2024-11-02 14:51:54.082475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.399 [2024-11-02 14:51:54.082500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.399 qpair failed and we were unable to recover it. 00:36:02.399 [2024-11-02 14:51:54.082680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.399 [2024-11-02 14:51:54.082705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.399 qpair failed and we were unable to recover it. 00:36:02.399 [2024-11-02 14:51:54.082860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.399 [2024-11-02 14:51:54.082887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.399 qpair failed and we were unable to recover it. 00:36:02.399 [2024-11-02 14:51:54.083035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.399 [2024-11-02 14:51:54.083060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.399 qpair failed and we were unable to recover it. 00:36:02.399 [2024-11-02 14:51:54.083214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.399 [2024-11-02 14:51:54.083268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.399 qpair failed and we were unable to recover it. 00:36:02.399 [2024-11-02 14:51:54.083429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.399 [2024-11-02 14:51:54.083454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.399 qpair failed and we were unable to recover it. 00:36:02.399 [2024-11-02 14:51:54.083627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.399 [2024-11-02 14:51:54.083658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.399 qpair failed and we were unable to recover it. 00:36:02.399 [2024-11-02 14:51:54.083826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.399 [2024-11-02 14:51:54.083852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.399 qpair failed and we were unable to recover it. 00:36:02.399 [2024-11-02 14:51:54.084024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.399 [2024-11-02 14:51:54.084049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.399 qpair failed and we were unable to recover it. 
00:36:02.399 [2024-11-02 14:51:54.084211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.399 [2024-11-02 14:51:54.084237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.399 qpair failed and we were unable to recover it. 00:36:02.399 [2024-11-02 14:51:54.084409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.399 [2024-11-02 14:51:54.084435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.399 qpair failed and we were unable to recover it. 00:36:02.399 [2024-11-02 14:51:54.084621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.399 [2024-11-02 14:51:54.084646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.399 qpair failed and we were unable to recover it. 00:36:02.399 [2024-11-02 14:51:54.084774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.399 [2024-11-02 14:51:54.084798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.399 qpair failed and we were unable to recover it. 00:36:02.399 [2024-11-02 14:51:54.084956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.399 [2024-11-02 14:51:54.084982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.399 qpair failed and we were unable to recover it. 00:36:02.399 [2024-11-02 14:51:54.085179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.399 [2024-11-02 14:51:54.085208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.399 qpair failed and we were unable to recover it. 00:36:02.399 [2024-11-02 14:51:54.085422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.399 [2024-11-02 14:51:54.085449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.399 qpair failed and we were unable to recover it. 00:36:02.399 [2024-11-02 14:51:54.085601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.399 [2024-11-02 14:51:54.085632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.399 qpair failed and we were unable to recover it. 00:36:02.399 [2024-11-02 14:51:54.085781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.399 [2024-11-02 14:51:54.085806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.399 qpair failed and we were unable to recover it. 00:36:02.399 [2024-11-02 14:51:54.085936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.399 [2024-11-02 14:51:54.085963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.399 qpair failed and we were unable to recover it. 
00:36:02.399 [2024-11-02 14:51:54.086116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.399 [2024-11-02 14:51:54.086141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.399 qpair failed and we were unable to recover it. 00:36:02.399 [2024-11-02 14:51:54.086339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.399 [2024-11-02 14:51:54.086365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.399 qpair failed and we were unable to recover it. 00:36:02.399 [2024-11-02 14:51:54.086484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.399 [2024-11-02 14:51:54.086511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.399 qpair failed and we were unable to recover it. 00:36:02.399 [2024-11-02 14:51:54.086647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.400 [2024-11-02 14:51:54.086673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.400 qpair failed and we were unable to recover it. 00:36:02.400 [2024-11-02 14:51:54.086847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.400 [2024-11-02 14:51:54.086890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.400 qpair failed and we were unable to recover it. 00:36:02.400 [2024-11-02 14:51:54.087076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.400 [2024-11-02 14:51:54.087104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.400 qpair failed and we were unable to recover it. 00:36:02.400 [2024-11-02 14:51:54.087241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.400 [2024-11-02 14:51:54.087287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.400 qpair failed and we were unable to recover it. 00:36:02.400 [2024-11-02 14:51:54.087464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.400 [2024-11-02 14:51:54.087494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.400 qpair failed and we were unable to recover it. 00:36:02.400 [2024-11-02 14:51:54.087658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.400 [2024-11-02 14:51:54.087685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.400 qpair failed and we were unable to recover it. 00:36:02.400 [2024-11-02 14:51:54.087856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.400 [2024-11-02 14:51:54.087885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.400 qpair failed and we were unable to recover it. 
00:36:02.400 [2024-11-02 14:51:54.088081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.400 [2024-11-02 14:51:54.088107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.400 qpair failed and we were unable to recover it. 00:36:02.400 [2024-11-02 14:51:54.088264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.400 [2024-11-02 14:51:54.088293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.400 qpair failed and we were unable to recover it. 00:36:02.400 [2024-11-02 14:51:54.088436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.400 [2024-11-02 14:51:54.088465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.400 qpair failed and we were unable to recover it. 00:36:02.400 [2024-11-02 14:51:54.088649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.400 [2024-11-02 14:51:54.088674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.400 qpair failed and we were unable to recover it. 00:36:02.400 [2024-11-02 14:51:54.088819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.400 [2024-11-02 14:51:54.088846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.400 qpair failed and we were unable to recover it. 00:36:02.400 [2024-11-02 14:51:54.089020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.400 [2024-11-02 14:51:54.089049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.400 qpair failed and we were unable to recover it. 00:36:02.400 [2024-11-02 14:51:54.089216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.400 [2024-11-02 14:51:54.089244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.400 qpair failed and we were unable to recover it. 00:36:02.400 [2024-11-02 14:51:54.089432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.400 [2024-11-02 14:51:54.089457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.400 qpair failed and we were unable to recover it. 00:36:02.400 [2024-11-02 14:51:54.089583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.400 [2024-11-02 14:51:54.089610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.400 qpair failed and we were unable to recover it. 00:36:02.400 [2024-11-02 14:51:54.089762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.400 [2024-11-02 14:51:54.089789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.400 qpair failed and we were unable to recover it. 
00:36:02.400 [2024-11-02 14:51:54.089953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.400 [2024-11-02 14:51:54.089981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.400 qpair failed and we were unable to recover it. 00:36:02.400 [2024-11-02 14:51:54.090155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.400 [2024-11-02 14:51:54.090191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.400 qpair failed and we were unable to recover it. 00:36:02.400 [2024-11-02 14:51:54.090337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.400 [2024-11-02 14:51:54.090363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.400 qpair failed and we were unable to recover it. 00:36:02.400 [2024-11-02 14:51:54.090540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.400 [2024-11-02 14:51:54.090574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.400 qpair failed and we were unable to recover it. 00:36:02.400 [2024-11-02 14:51:54.090746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.400 [2024-11-02 14:51:54.090774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.400 qpair failed and we were unable to recover it. 00:36:02.400 [2024-11-02 14:51:54.090940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.400 [2024-11-02 14:51:54.090973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.400 qpair failed and we were unable to recover it. 00:36:02.400 [2024-11-02 14:51:54.091170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.400 [2024-11-02 14:51:54.091196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.400 qpair failed and we were unable to recover it. 00:36:02.400 [2024-11-02 14:51:54.091349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.400 [2024-11-02 14:51:54.091379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.400 qpair failed and we were unable to recover it. 00:36:02.400 [2024-11-02 14:51:54.091581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.400 [2024-11-02 14:51:54.091607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.400 qpair failed and we were unable to recover it. 00:36:02.400 [2024-11-02 14:51:54.091765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.400 [2024-11-02 14:51:54.091790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.400 qpair failed and we were unable to recover it. 
00:36:02.400 [2024-11-02 14:51:54.091963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.400 [2024-11-02 14:51:54.091988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.400 qpair failed and we were unable to recover it. 00:36:02.400 [2024-11-02 14:51:54.092199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.400 [2024-11-02 14:51:54.092228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.400 qpair failed and we were unable to recover it. 00:36:02.400 [2024-11-02 14:51:54.092401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.400 [2024-11-02 14:51:54.092430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.400 qpair failed and we were unable to recover it. 00:36:02.400 [2024-11-02 14:51:54.092568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.400 [2024-11-02 14:51:54.092597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.400 qpair failed and we were unable to recover it. 00:36:02.400 [2024-11-02 14:51:54.092740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.400 [2024-11-02 14:51:54.092766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.400 qpair failed and we were unable to recover it. 00:36:02.400 [2024-11-02 14:51:54.092917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.400 [2024-11-02 14:51:54.092961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.400 qpair failed and we were unable to recover it. 00:36:02.400 [2024-11-02 14:51:54.093131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.400 [2024-11-02 14:51:54.093159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.400 qpair failed and we were unable to recover it. 00:36:02.400 [2024-11-02 14:51:54.093292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.400 [2024-11-02 14:51:54.093334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.400 qpair failed and we were unable to recover it. 00:36:02.400 [2024-11-02 14:51:54.093460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.400 [2024-11-02 14:51:54.093485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.400 qpair failed and we were unable to recover it. 00:36:02.400 [2024-11-02 14:51:54.093676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.400 [2024-11-02 14:51:54.093702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.400 qpair failed and we were unable to recover it. 
00:36:02.400 [2024-11-02 14:51:54.093878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.400 [2024-11-02 14:51:54.093906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.400 qpair failed and we were unable to recover it. 00:36:02.400 [2024-11-02 14:51:54.094063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.401 [2024-11-02 14:51:54.094091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.401 qpair failed and we were unable to recover it. 00:36:02.401 [2024-11-02 14:51:54.094253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.401 [2024-11-02 14:51:54.094308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.401 qpair failed and we were unable to recover it. 00:36:02.401 [2024-11-02 14:51:54.094449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.401 [2024-11-02 14:51:54.094479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.401 qpair failed and we were unable to recover it. 00:36:02.401 [2024-11-02 14:51:54.094658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.401 [2024-11-02 14:51:54.094683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.401 qpair failed and we were unable to recover it. 00:36:02.401 [2024-11-02 14:51:54.094808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.401 [2024-11-02 14:51:54.094833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.401 qpair failed and we were unable to recover it. 00:36:02.401 [2024-11-02 14:51:54.095015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.401 [2024-11-02 14:51:54.095041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.401 qpair failed and we were unable to recover it. 00:36:02.401 [2024-11-02 14:51:54.095216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.401 [2024-11-02 14:51:54.095242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.401 qpair failed and we were unable to recover it. 00:36:02.401 [2024-11-02 14:51:54.095380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.401 [2024-11-02 14:51:54.095406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.401 qpair failed and we were unable to recover it. 00:36:02.401 [2024-11-02 14:51:54.095563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.401 [2024-11-02 14:51:54.095589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.401 qpair failed and we were unable to recover it. 
00:36:02.401 [2024-11-02 14:51:54.095719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.401 [2024-11-02 14:51:54.095745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.401 qpair failed and we were unable to recover it. 00:36:02.401 [2024-11-02 14:51:54.095872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.401 [2024-11-02 14:51:54.095898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.401 qpair failed and we were unable to recover it. 00:36:02.401 [2024-11-02 14:51:54.096051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.401 [2024-11-02 14:51:54.096082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.401 qpair failed and we were unable to recover it. 00:36:02.401 [2024-11-02 14:51:54.096208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.401 [2024-11-02 14:51:54.096233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.401 qpair failed and we were unable to recover it. 00:36:02.401 [2024-11-02 14:51:54.096386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.401 [2024-11-02 14:51:54.096411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.401 qpair failed and we were unable to recover it. 00:36:02.401 [2024-11-02 14:51:54.096563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.401 [2024-11-02 14:51:54.096589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.401 qpair failed and we were unable to recover it. 00:36:02.401 [2024-11-02 14:51:54.096706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.401 [2024-11-02 14:51:54.096732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.401 qpair failed and we were unable to recover it. 00:36:02.401 [2024-11-02 14:51:54.096905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.401 [2024-11-02 14:51:54.096929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.401 qpair failed and we were unable to recover it. 00:36:02.401 [2024-11-02 14:51:54.097083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.401 [2024-11-02 14:51:54.097110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.401 qpair failed and we were unable to recover it. 00:36:02.401 [2024-11-02 14:51:54.097245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.401 [2024-11-02 14:51:54.097279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.401 qpair failed and we were unable to recover it. 
00:36:02.401 [2024-11-02 14:51:54.097441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.401 [2024-11-02 14:51:54.097467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.401 qpair failed and we were unable to recover it. 00:36:02.401 [2024-11-02 14:51:54.097622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.401 [2024-11-02 14:51:54.097648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.401 qpair failed and we were unable to recover it. 00:36:02.401 [2024-11-02 14:51:54.097826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.401 [2024-11-02 14:51:54.097852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.401 qpair failed and we were unable to recover it. 00:36:02.401 [2024-11-02 14:51:54.098010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.401 [2024-11-02 14:51:54.098035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.401 qpair failed and we were unable to recover it. 00:36:02.401 [2024-11-02 14:51:54.098158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.401 [2024-11-02 14:51:54.098183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.401 qpair failed and we were unable to recover it. 00:36:02.401 [2024-11-02 14:51:54.098329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.401 [2024-11-02 14:51:54.098357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.401 qpair failed and we were unable to recover it. 00:36:02.401 [2024-11-02 14:51:54.098487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.401 [2024-11-02 14:51:54.098512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.401 qpair failed and we were unable to recover it. 00:36:02.401 [2024-11-02 14:51:54.098641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.401 [2024-11-02 14:51:54.098666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.401 qpair failed and we were unable to recover it. 00:36:02.401 [2024-11-02 14:51:54.098839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.401 [2024-11-02 14:51:54.098865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.401 qpair failed and we were unable to recover it. 00:36:02.401 [2024-11-02 14:51:54.098983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.401 [2024-11-02 14:51:54.099010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.401 qpair failed and we were unable to recover it. 
00:36:02.401 [2024-11-02 14:51:54.099164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.401 [2024-11-02 14:51:54.099191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.401 qpair failed and we were unable to recover it. 00:36:02.401 [2024-11-02 14:51:54.099379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.401 [2024-11-02 14:51:54.099406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.401 qpair failed and we were unable to recover it. 00:36:02.401 [2024-11-02 14:51:54.099563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.401 [2024-11-02 14:51:54.099588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.401 qpair failed and we were unable to recover it. 00:36:02.401 [2024-11-02 14:51:54.099732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.401 [2024-11-02 14:51:54.099757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.401 qpair failed and we were unable to recover it. 00:36:02.401 [2024-11-02 14:51:54.099887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.401 [2024-11-02 14:51:54.099913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.401 qpair failed and we were unable to recover it. 00:36:02.401 [2024-11-02 14:51:54.100043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.401 [2024-11-02 14:51:54.100068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.401 qpair failed and we were unable to recover it. 00:36:02.401 [2024-11-02 14:51:54.100218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.401 [2024-11-02 14:51:54.100245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.401 qpair failed and we were unable to recover it. 00:36:02.401 [2024-11-02 14:51:54.100400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.401 [2024-11-02 14:51:54.100427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.401 qpair failed and we were unable to recover it. 00:36:02.401 [2024-11-02 14:51:54.100590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.401 [2024-11-02 14:51:54.100616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.401 qpair failed and we were unable to recover it. 00:36:02.401 [2024-11-02 14:51:54.100764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.401 [2024-11-02 14:51:54.100790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.402 qpair failed and we were unable to recover it. 
00:36:02.402 [2024-11-02 14:51:54.100915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.402 [2024-11-02 14:51:54.100942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.402 qpair failed and we were unable to recover it. 00:36:02.402 [2024-11-02 14:51:54.101094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.402 [2024-11-02 14:51:54.101119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.402 qpair failed and we were unable to recover it. 00:36:02.402 [2024-11-02 14:51:54.101278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.402 [2024-11-02 14:51:54.101304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.402 qpair failed and we were unable to recover it. 00:36:02.402 [2024-11-02 14:51:54.101454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.402 [2024-11-02 14:51:54.101481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.402 qpair failed and we were unable to recover it. 00:36:02.402 [2024-11-02 14:51:54.101632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.402 [2024-11-02 14:51:54.101658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.402 qpair failed and we were unable to recover it. 00:36:02.402 [2024-11-02 14:51:54.101812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.402 [2024-11-02 14:51:54.101837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.402 qpair failed and we were unable to recover it. 00:36:02.402 [2024-11-02 14:51:54.102014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.402 [2024-11-02 14:51:54.102040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.402 qpair failed and we were unable to recover it. 00:36:02.402 [2024-11-02 14:51:54.102191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.402 [2024-11-02 14:51:54.102217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.402 qpair failed and we were unable to recover it. 00:36:02.402 [2024-11-02 14:51:54.102393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.402 [2024-11-02 14:51:54.102419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.402 qpair failed and we were unable to recover it. 00:36:02.402 [2024-11-02 14:51:54.102540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.402 [2024-11-02 14:51:54.102565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.402 qpair failed and we were unable to recover it. 
00:36:02.402 [2024-11-02 14:51:54.102725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.402 [2024-11-02 14:51:54.102752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.402 qpair failed and we were unable to recover it. 00:36:02.402 [2024-11-02 14:51:54.102901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.402 [2024-11-02 14:51:54.102928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.402 qpair failed and we were unable to recover it. 00:36:02.402 [2024-11-02 14:51:54.103083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.402 [2024-11-02 14:51:54.103112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.402 qpair failed and we were unable to recover it. 00:36:02.402 [2024-11-02 14:51:54.103263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.402 [2024-11-02 14:51:54.103290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.402 qpair failed and we were unable to recover it. 00:36:02.402 [2024-11-02 14:51:54.103444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.402 [2024-11-02 14:51:54.103470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.402 qpair failed and we were unable to recover it. 00:36:02.402 [2024-11-02 14:51:54.103618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.402 [2024-11-02 14:51:54.103643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.402 qpair failed and we were unable to recover it. 00:36:02.402 [2024-11-02 14:51:54.103794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.402 [2024-11-02 14:51:54.103820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.402 qpair failed and we were unable to recover it. 00:36:02.402 [2024-11-02 14:51:54.103941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.402 [2024-11-02 14:51:54.103966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.402 qpair failed and we were unable to recover it. 00:36:02.402 [2024-11-02 14:51:54.104120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.402 [2024-11-02 14:51:54.104146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.402 qpair failed and we were unable to recover it. 00:36:02.402 [2024-11-02 14:51:54.104299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.402 [2024-11-02 14:51:54.104325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.402 qpair failed and we were unable to recover it. 
00:36:02.402 [2024-11-02 14:51:54.104503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.402 [2024-11-02 14:51:54.104529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.402 qpair failed and we were unable to recover it. 00:36:02.402 [2024-11-02 14:51:54.104654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.402 [2024-11-02 14:51:54.104679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.402 qpair failed and we were unable to recover it. 00:36:02.402 [2024-11-02 14:51:54.104801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.402 [2024-11-02 14:51:54.104828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.402 qpair failed and we were unable to recover it. 00:36:02.402 [2024-11-02 14:51:54.104978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.402 [2024-11-02 14:51:54.105003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.402 qpair failed and we were unable to recover it. 00:36:02.402 [2024-11-02 14:51:54.105160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.402 [2024-11-02 14:51:54.105185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.402 qpair failed and we were unable to recover it. 00:36:02.402 [2024-11-02 14:51:54.105325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.402 [2024-11-02 14:51:54.105352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.402 qpair failed and we were unable to recover it. 00:36:02.402 [2024-11-02 14:51:54.105507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.402 [2024-11-02 14:51:54.105533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.402 qpair failed and we were unable to recover it. 00:36:02.402 [2024-11-02 14:51:54.105714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.402 [2024-11-02 14:51:54.105740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.402 qpair failed and we were unable to recover it. 00:36:02.402 [2024-11-02 14:51:54.105891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.402 [2024-11-02 14:51:54.105916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.402 qpair failed and we were unable to recover it. 00:36:02.402 [2024-11-02 14:51:54.106039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.402 [2024-11-02 14:51:54.106067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.402 qpair failed and we were unable to recover it. 
00:36:02.402 [2024-11-02 14:51:54.106217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.402 [2024-11-02 14:51:54.106242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.402 qpair failed and we were unable to recover it. 00:36:02.403 [2024-11-02 14:51:54.106409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.403 [2024-11-02 14:51:54.106435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.403 qpair failed and we were unable to recover it. 00:36:02.403 [2024-11-02 14:51:54.106555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.403 [2024-11-02 14:51:54.106582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.403 qpair failed and we were unable to recover it. 00:36:02.403 [2024-11-02 14:51:54.106729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.403 [2024-11-02 14:51:54.106755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.403 qpair failed and we were unable to recover it. 00:36:02.403 [2024-11-02 14:51:54.106879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.403 [2024-11-02 14:51:54.106906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.403 qpair failed and we were unable to recover it. 00:36:02.403 [2024-11-02 14:51:54.107026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.403 [2024-11-02 14:51:54.107052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.403 qpair failed and we were unable to recover it. 00:36:02.403 [2024-11-02 14:51:54.107203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.403 [2024-11-02 14:51:54.107228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.403 qpair failed and we were unable to recover it. 00:36:02.403 [2024-11-02 14:51:54.107360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.403 [2024-11-02 14:51:54.107387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.403 qpair failed and we were unable to recover it. 00:36:02.403 [2024-11-02 14:51:54.107542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.403 [2024-11-02 14:51:54.107569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.403 qpair failed and we were unable to recover it. 00:36:02.403 [2024-11-02 14:51:54.107722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.403 [2024-11-02 14:51:54.107747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.403 qpair failed and we were unable to recover it. 
00:36:02.403 [2024-11-02 14:51:54.107865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.403 [2024-11-02 14:51:54.107893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.403 qpair failed and we were unable to recover it. 00:36:02.403 [2024-11-02 14:51:54.108014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.403 [2024-11-02 14:51:54.108041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.403 qpair failed and we were unable to recover it. 00:36:02.403 [2024-11-02 14:51:54.108189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.403 [2024-11-02 14:51:54.108217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.403 qpair failed and we were unable to recover it. 00:36:02.403 [2024-11-02 14:51:54.108416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.403 [2024-11-02 14:51:54.108443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.403 qpair failed and we were unable to recover it. 00:36:02.403 [2024-11-02 14:51:54.108563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.403 [2024-11-02 14:51:54.108590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.403 qpair failed and we were unable to recover it. 00:36:02.403 [2024-11-02 14:51:54.108765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.403 [2024-11-02 14:51:54.108791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.403 qpair failed and we were unable to recover it. 00:36:02.403 [2024-11-02 14:51:54.108904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.403 [2024-11-02 14:51:54.108929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.403 qpair failed and we were unable to recover it. 00:36:02.403 [2024-11-02 14:51:54.109079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.403 [2024-11-02 14:51:54.109106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.403 qpair failed and we were unable to recover it. 00:36:02.403 [2024-11-02 14:51:54.109262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.403 [2024-11-02 14:51:54.109289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.403 qpair failed and we were unable to recover it. 00:36:02.403 [2024-11-02 14:51:54.109408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.403 [2024-11-02 14:51:54.109432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.403 qpair failed and we were unable to recover it. 
00:36:02.403 [2024-11-02 14:51:54.109585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.403 [2024-11-02 14:51:54.109611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.403 qpair failed and we were unable to recover it. 00:36:02.403 [2024-11-02 14:51:54.109730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.403 [2024-11-02 14:51:54.109756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.403 qpair failed and we were unable to recover it. 00:36:02.403 [2024-11-02 14:51:54.109904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.403 [2024-11-02 14:51:54.109934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.403 qpair failed and we were unable to recover it. 00:36:02.403 [2024-11-02 14:51:54.110082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.403 [2024-11-02 14:51:54.110108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.403 qpair failed and we were unable to recover it. 00:36:02.403 [2024-11-02 14:51:54.110298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.403 [2024-11-02 14:51:54.110325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.403 qpair failed and we were unable to recover it. 00:36:02.403 [2024-11-02 14:51:54.110479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.403 [2024-11-02 14:51:54.110505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.403 qpair failed and we were unable to recover it. 00:36:02.403 [2024-11-02 14:51:54.110622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.403 [2024-11-02 14:51:54.110647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.403 qpair failed and we were unable to recover it. 00:36:02.403 [2024-11-02 14:51:54.110777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.403 [2024-11-02 14:51:54.110804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.403 qpair failed and we were unable to recover it. 00:36:02.403 [2024-11-02 14:51:54.110982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.403 [2024-11-02 14:51:54.111007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.403 qpair failed and we were unable to recover it. 00:36:02.403 [2024-11-02 14:51:54.111158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.403 [2024-11-02 14:51:54.111183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.403 qpair failed and we were unable to recover it. 
00:36:02.403 [2024-11-02 14:51:54.111314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.403 [2024-11-02 14:51:54.111341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.403 qpair failed and we were unable to recover it. 00:36:02.403 [2024-11-02 14:51:54.111489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.403 [2024-11-02 14:51:54.111515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.403 qpair failed and we were unable to recover it. 00:36:02.403 [2024-11-02 14:51:54.111676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.403 [2024-11-02 14:51:54.111701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.403 qpair failed and we were unable to recover it. 00:36:02.403 [2024-11-02 14:51:54.111852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.403 [2024-11-02 14:51:54.111882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.403 qpair failed and we were unable to recover it. 00:36:02.403 [2024-11-02 14:51:54.112020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.403 [2024-11-02 14:51:54.112045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.403 qpair failed and we were unable to recover it. 00:36:02.403 [2024-11-02 14:51:54.112237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.403 [2024-11-02 14:51:54.112269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.403 qpair failed and we were unable to recover it. 00:36:02.403 [2024-11-02 14:51:54.112431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.403 [2024-11-02 14:51:54.112456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.403 qpair failed and we were unable to recover it. 00:36:02.403 [2024-11-02 14:51:54.112608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.403 [2024-11-02 14:51:54.112634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.403 qpair failed and we were unable to recover it. 00:36:02.403 [2024-11-02 14:51:54.112781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.403 [2024-11-02 14:51:54.112805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.403 qpair failed and we were unable to recover it. 00:36:02.403 [2024-11-02 14:51:54.112928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.404 [2024-11-02 14:51:54.112954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.404 qpair failed and we were unable to recover it. 
00:36:02.404 [2024-11-02 14:51:54.113111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.404 [2024-11-02 14:51:54.113138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.404 qpair failed and we were unable to recover it. 00:36:02.404 [2024-11-02 14:51:54.113292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.404 [2024-11-02 14:51:54.113319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.404 qpair failed and we were unable to recover it. 00:36:02.404 [2024-11-02 14:51:54.113471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.404 [2024-11-02 14:51:54.113495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.404 qpair failed and we were unable to recover it. 00:36:02.404 [2024-11-02 14:51:54.113643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.404 [2024-11-02 14:51:54.113669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.404 qpair failed and we were unable to recover it. 00:36:02.404 [2024-11-02 14:51:54.113787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.404 [2024-11-02 14:51:54.113813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.404 qpair failed and we were unable to recover it. 00:36:02.404 [2024-11-02 14:51:54.113939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.404 [2024-11-02 14:51:54.113963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.404 qpair failed and we were unable to recover it. 00:36:02.404 [2024-11-02 14:51:54.114093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.404 [2024-11-02 14:51:54.114118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.404 qpair failed and we were unable to recover it. 00:36:02.404 [2024-11-02 14:51:54.114242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.404 [2024-11-02 14:51:54.114293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.404 qpair failed and we were unable to recover it. 00:36:02.404 [2024-11-02 14:51:54.114473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.404 [2024-11-02 14:51:54.114498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.404 qpair failed and we were unable to recover it. 00:36:02.404 [2024-11-02 14:51:54.114677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.404 [2024-11-02 14:51:54.114701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.404 qpair failed and we were unable to recover it. 
00:36:02.404 [2024-11-02 14:51:54.114828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.404 [2024-11-02 14:51:54.114854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.404 qpair failed and we were unable to recover it. 00:36:02.404 [2024-11-02 14:51:54.114986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.404 [2024-11-02 14:51:54.115011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.404 qpair failed and we were unable to recover it. 00:36:02.404 [2024-11-02 14:51:54.115150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.404 [2024-11-02 14:51:54.115177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.404 qpair failed and we were unable to recover it. 00:36:02.404 [2024-11-02 14:51:54.115302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.404 [2024-11-02 14:51:54.115330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.404 qpair failed and we were unable to recover it. 00:36:02.404 [2024-11-02 14:51:54.115480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.404 [2024-11-02 14:51:54.115505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.404 qpair failed and we were unable to recover it. 00:36:02.404 [2024-11-02 14:51:54.115642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.404 [2024-11-02 14:51:54.115669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.404 qpair failed and we were unable to recover it. 00:36:02.404 [2024-11-02 14:51:54.115840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.404 [2024-11-02 14:51:54.115866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.404 qpair failed and we were unable to recover it. 00:36:02.404 [2024-11-02 14:51:54.115983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.404 [2024-11-02 14:51:54.116008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.404 qpair failed and we were unable to recover it. 00:36:02.404 [2024-11-02 14:51:54.116129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.404 [2024-11-02 14:51:54.116155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.404 qpair failed and we were unable to recover it. 00:36:02.404 [2024-11-02 14:51:54.116286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.404 [2024-11-02 14:51:54.116312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.404 qpair failed and we were unable to recover it. 
00:36:02.404 [2024-11-02 14:51:54.116484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.404 [2024-11-02 14:51:54.116509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.404 qpair failed and we were unable to recover it. 00:36:02.404 [2024-11-02 14:51:54.116650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.404 [2024-11-02 14:51:54.116676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.404 qpair failed and we were unable to recover it. 00:36:02.404 [2024-11-02 14:51:54.116824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.404 [2024-11-02 14:51:54.116853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.404 qpair failed and we were unable to recover it. 00:36:02.404 [2024-11-02 14:51:54.117007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.404 [2024-11-02 14:51:54.117032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.404 qpair failed and we were unable to recover it. 00:36:02.404 [2024-11-02 14:51:54.117208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.404 [2024-11-02 14:51:54.117235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.404 qpair failed and we were unable to recover it. 00:36:02.404 [2024-11-02 14:51:54.117388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.404 [2024-11-02 14:51:54.117414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.404 qpair failed and we were unable to recover it. 00:36:02.404 [2024-11-02 14:51:54.117545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.404 [2024-11-02 14:51:54.117569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.404 qpair failed and we were unable to recover it. 00:36:02.404 [2024-11-02 14:51:54.117730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.404 [2024-11-02 14:51:54.117755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.404 qpair failed and we were unable to recover it. 00:36:02.404 [2024-11-02 14:51:54.117878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.404 [2024-11-02 14:51:54.117905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.404 qpair failed and we were unable to recover it. 00:36:02.404 [2024-11-02 14:51:54.118024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.404 [2024-11-02 14:51:54.118055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.404 qpair failed and we were unable to recover it. 
00:36:02.404 [2024-11-02 14:51:54.118195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.404 [2024-11-02 14:51:54.118221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.404 qpair failed and we were unable to recover it. 00:36:02.404 [2024-11-02 14:51:54.118374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.404 [2024-11-02 14:51:54.118399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.404 qpair failed and we were unable to recover it. 00:36:02.404 [2024-11-02 14:51:54.118528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.404 [2024-11-02 14:51:54.118553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.404 qpair failed and we were unable to recover it. 00:36:02.404 [2024-11-02 14:51:54.118701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.404 [2024-11-02 14:51:54.118728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.404 qpair failed and we were unable to recover it. 00:36:02.404 [2024-11-02 14:51:54.118853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.404 [2024-11-02 14:51:54.118879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.404 qpair failed and we were unable to recover it. 00:36:02.404 [2024-11-02 14:51:54.119020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.404 [2024-11-02 14:51:54.119046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.404 qpair failed and we were unable to recover it. 00:36:02.404 [2024-11-02 14:51:54.119207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.404 [2024-11-02 14:51:54.119234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.404 qpair failed and we were unable to recover it. 00:36:02.404 [2024-11-02 14:51:54.119439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.405 [2024-11-02 14:51:54.119481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.405 qpair failed and we were unable to recover it. 00:36:02.405 [2024-11-02 14:51:54.119641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.405 [2024-11-02 14:51:54.119669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.405 qpair failed and we were unable to recover it. 00:36:02.405 [2024-11-02 14:51:54.119796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.405 [2024-11-02 14:51:54.119824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.405 qpair failed and we were unable to recover it. 
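From this point on the reported tqpair address changes from 0x7f54c8000b90 to 0x648340, consistent with a different qpair object being used for the subsequent attempts while the refusals continue. The overall pattern, repeated connect attempts that each end in "qpair failed and we were unable to recover it", resembles a bounded retry loop. A self-contained sketch of such a loop, purely as an illustration and not SPDK's actual reconnect logic (the attempt bound and the fixed backoff are assumptions):

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Try one TCP connect; return the connected fd, or -errno on failure. */
static int connect_once(const char *addr, unsigned short port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -errno;
    struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(port) };
    inet_pton(AF_INET, addr, &sa.sin_addr);
    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
        int err = errno;
        close(fd);
        return -err;             /* e.g. -111 while nothing listens on addr:port */
    }
    return fd;                   /* caller owns the connected socket */
}

int main(void)
{
    const int max_attempts = 5;  /* arbitrary bound for the sketch */
    for (int attempt = 1; attempt <= max_attempts; attempt++) {
        int fd = connect_once("10.0.0.2", 4420);
        if (fd >= 0) {
            printf("connected on attempt %d\n", attempt);
            close(fd);
            return 0;
        }
        fprintf(stderr, "attempt %d: connect failed, errno = %d (%s)\n",
                attempt, -fd, strerror(-fd));
        sleep(1);                /* crude fixed backoff between attempts */
    }
    fprintf(stderr, "giving up: unable to connect\n");
    return 1;
}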
00:36:02.405 [2024-11-02 14:51:54.119977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.405 [2024-11-02 14:51:54.120003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.405 qpair failed and we were unable to recover it. 00:36:02.405 [2024-11-02 14:51:54.120181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.405 [2024-11-02 14:51:54.120208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.405 qpair failed and we were unable to recover it. 00:36:02.405 [2024-11-02 14:51:54.120361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.405 [2024-11-02 14:51:54.120388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.405 qpair failed and we were unable to recover it. 00:36:02.405 [2024-11-02 14:51:54.120518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.405 [2024-11-02 14:51:54.120545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.405 qpair failed and we were unable to recover it. 00:36:02.405 [2024-11-02 14:51:54.120696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.405 [2024-11-02 14:51:54.120723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.405 qpair failed and we were unable to recover it. 00:36:02.405 [2024-11-02 14:51:54.120867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.405 [2024-11-02 14:51:54.120893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.405 qpair failed and we were unable to recover it. 00:36:02.405 [2024-11-02 14:51:54.121041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.405 [2024-11-02 14:51:54.121067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.405 qpair failed and we were unable to recover it. 00:36:02.405 [2024-11-02 14:51:54.121220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.405 [2024-11-02 14:51:54.121247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.405 qpair failed and we were unable to recover it. 00:36:02.405 [2024-11-02 14:51:54.121406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.405 [2024-11-02 14:51:54.121433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.405 qpair failed and we were unable to recover it. 00:36:02.405 [2024-11-02 14:51:54.121578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.405 [2024-11-02 14:51:54.121610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.405 qpair failed and we were unable to recover it. 
00:36:02.405 [2024-11-02 14:51:54.121768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.405 [2024-11-02 14:51:54.121794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.405 qpair failed and we were unable to recover it. 00:36:02.405 [2024-11-02 14:51:54.121916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.405 [2024-11-02 14:51:54.121941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.405 qpair failed and we were unable to recover it. 00:36:02.405 [2024-11-02 14:51:54.122091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.405 [2024-11-02 14:51:54.122118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.405 qpair failed and we were unable to recover it. 00:36:02.405 [2024-11-02 14:51:54.122245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.405 [2024-11-02 14:51:54.122279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.405 qpair failed and we were unable to recover it. 00:36:02.405 [2024-11-02 14:51:54.122398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.405 [2024-11-02 14:51:54.122424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.405 qpair failed and we were unable to recover it. 00:36:02.405 [2024-11-02 14:51:54.122569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.405 [2024-11-02 14:51:54.122594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.405 qpair failed and we were unable to recover it. 00:36:02.405 [2024-11-02 14:51:54.122739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.405 [2024-11-02 14:51:54.122765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.405 qpair failed and we were unable to recover it. 00:36:02.405 [2024-11-02 14:51:54.122886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.405 [2024-11-02 14:51:54.122911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.405 qpair failed and we were unable to recover it. 00:36:02.405 [2024-11-02 14:51:54.123037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.405 [2024-11-02 14:51:54.123063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.405 qpair failed and we were unable to recover it. 00:36:02.405 [2024-11-02 14:51:54.123216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.405 [2024-11-02 14:51:54.123243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.405 qpair failed and we were unable to recover it. 
00:36:02.405 [2024-11-02 14:51:54.123370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.405 [2024-11-02 14:51:54.123396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.405 qpair failed and we were unable to recover it. 00:36:02.405 [2024-11-02 14:51:54.123518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.405 [2024-11-02 14:51:54.123543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.405 qpair failed and we were unable to recover it. 00:36:02.405 [2024-11-02 14:51:54.123689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.405 [2024-11-02 14:51:54.123714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.405 qpair failed and we were unable to recover it. 00:36:02.405 [2024-11-02 14:51:54.123850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.405 [2024-11-02 14:51:54.123877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.405 qpair failed and we were unable to recover it. 00:36:02.405 [2024-11-02 14:51:54.124059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.405 [2024-11-02 14:51:54.124084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.405 qpair failed and we were unable to recover it. 00:36:02.405 [2024-11-02 14:51:54.124230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.405 [2024-11-02 14:51:54.124262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.405 qpair failed and we were unable to recover it. 00:36:02.405 [2024-11-02 14:51:54.124419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.405 [2024-11-02 14:51:54.124445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.405 qpair failed and we were unable to recover it. 00:36:02.405 [2024-11-02 14:51:54.124595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.405 [2024-11-02 14:51:54.124621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.405 qpair failed and we were unable to recover it. 00:36:02.405 [2024-11-02 14:51:54.124778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.405 [2024-11-02 14:51:54.124803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.405 qpair failed and we were unable to recover it. 00:36:02.405 [2024-11-02 14:51:54.124954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.405 [2024-11-02 14:51:54.124980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.405 qpair failed and we were unable to recover it. 
00:36:02.405 [2024-11-02 14:51:54.125129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.405 [2024-11-02 14:51:54.125154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.405 qpair failed and we were unable to recover it. 00:36:02.405 [2024-11-02 14:51:54.125327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.405 [2024-11-02 14:51:54.125353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.405 qpair failed and we were unable to recover it. 00:36:02.405 [2024-11-02 14:51:54.125501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.405 [2024-11-02 14:51:54.125527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.405 qpair failed and we were unable to recover it. 00:36:02.405 [2024-11-02 14:51:54.125679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.405 [2024-11-02 14:51:54.125704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.405 qpair failed and we were unable to recover it. 00:36:02.405 [2024-11-02 14:51:54.125833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.405 [2024-11-02 14:51:54.125859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.405 qpair failed and we were unable to recover it. 00:36:02.405 [2024-11-02 14:51:54.125982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.405 [2024-11-02 14:51:54.126007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.405 qpair failed and we were unable to recover it. 00:36:02.405 [2024-11-02 14:51:54.126125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.406 [2024-11-02 14:51:54.126158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.406 qpair failed and we were unable to recover it. 00:36:02.406 [2024-11-02 14:51:54.126304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.406 [2024-11-02 14:51:54.126330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.406 qpair failed and we were unable to recover it. 00:36:02.406 [2024-11-02 14:51:54.126485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.406 [2024-11-02 14:51:54.126511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.406 qpair failed and we were unable to recover it. 00:36:02.406 [2024-11-02 14:51:54.126690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.406 [2024-11-02 14:51:54.126716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.406 qpair failed and we were unable to recover it. 
00:36:02.406 [2024-11-02 14:51:54.126842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.406 [2024-11-02 14:51:54.126867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420
00:36:02.406 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously between the first and last occurrences shown here, log timestamps 14:51:54.126842 through 14:51:54.163012, console time 00:36:02.406 to 00:36:02.411 ...]
00:36:02.411 [2024-11-02 14:51:54.162987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.411 [2024-11-02 14:51:54.163012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420
00:36:02.411 qpair failed and we were unable to recover it.
00:36:02.411 [2024-11-02 14:51:54.163174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.411 [2024-11-02 14:51:54.163200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.411 qpair failed and we were unable to recover it. 00:36:02.411 [2024-11-02 14:51:54.163355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.411 [2024-11-02 14:51:54.163381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.411 qpair failed and we were unable to recover it. 00:36:02.411 [2024-11-02 14:51:54.163552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.411 [2024-11-02 14:51:54.163577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.411 qpair failed and we were unable to recover it. 00:36:02.411 [2024-11-02 14:51:54.163703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.411 [2024-11-02 14:51:54.163729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.411 qpair failed and we were unable to recover it. 00:36:02.411 [2024-11-02 14:51:54.163883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.411 [2024-11-02 14:51:54.163909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.411 qpair failed and we were unable to recover it. 00:36:02.411 [2024-11-02 14:51:54.164056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.411 [2024-11-02 14:51:54.164081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.411 qpair failed and we were unable to recover it. 00:36:02.411 [2024-11-02 14:51:54.164252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.411 [2024-11-02 14:51:54.164286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.411 qpair failed and we were unable to recover it. 00:36:02.411 [2024-11-02 14:51:54.164436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.411 [2024-11-02 14:51:54.164462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.411 qpair failed and we were unable to recover it. 00:36:02.411 [2024-11-02 14:51:54.164616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.411 [2024-11-02 14:51:54.164642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.411 qpair failed and we were unable to recover it. 00:36:02.411 [2024-11-02 14:51:54.164795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.411 [2024-11-02 14:51:54.164822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.411 qpair failed and we were unable to recover it. 
00:36:02.412 [2024-11-02 14:51:54.164967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.412 [2024-11-02 14:51:54.164993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.412 qpair failed and we were unable to recover it. 00:36:02.412 [2024-11-02 14:51:54.165138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.412 [2024-11-02 14:51:54.165163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.412 qpair failed and we were unable to recover it. 00:36:02.412 [2024-11-02 14:51:54.165313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.412 [2024-11-02 14:51:54.165339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.412 qpair failed and we were unable to recover it. 00:36:02.412 [2024-11-02 14:51:54.165487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.412 [2024-11-02 14:51:54.165513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.412 qpair failed and we were unable to recover it. 00:36:02.412 [2024-11-02 14:51:54.165665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.412 [2024-11-02 14:51:54.165690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.412 qpair failed and we were unable to recover it. 00:36:02.412 [2024-11-02 14:51:54.165837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.412 [2024-11-02 14:51:54.165863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.412 qpair failed and we were unable to recover it. 00:36:02.412 [2024-11-02 14:51:54.166031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.412 [2024-11-02 14:51:54.166058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.412 qpair failed and we were unable to recover it. 00:36:02.412 [2024-11-02 14:51:54.166189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.412 [2024-11-02 14:51:54.166214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.412 qpair failed and we were unable to recover it. 00:36:02.412 [2024-11-02 14:51:54.166377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.412 [2024-11-02 14:51:54.166404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.412 qpair failed and we were unable to recover it. 00:36:02.412 [2024-11-02 14:51:54.166523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.412 [2024-11-02 14:51:54.166548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.412 qpair failed and we were unable to recover it. 
00:36:02.412 [2024-11-02 14:51:54.166694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.412 [2024-11-02 14:51:54.166721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.412 qpair failed and we were unable to recover it. 00:36:02.412 [2024-11-02 14:51:54.166842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.412 [2024-11-02 14:51:54.166868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.412 qpair failed and we were unable to recover it. 00:36:02.412 [2024-11-02 14:51:54.167044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.412 [2024-11-02 14:51:54.167069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.412 qpair failed and we were unable to recover it. 00:36:02.412 [2024-11-02 14:51:54.167218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.412 [2024-11-02 14:51:54.167245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.412 qpair failed and we were unable to recover it. 00:36:02.412 [2024-11-02 14:51:54.167407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.412 [2024-11-02 14:51:54.167434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.412 qpair failed and we were unable to recover it. 00:36:02.412 [2024-11-02 14:51:54.167555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.412 [2024-11-02 14:51:54.167581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.412 qpair failed and we were unable to recover it. 00:36:02.412 [2024-11-02 14:51:54.167728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.412 [2024-11-02 14:51:54.167753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.412 qpair failed and we were unable to recover it. 00:36:02.412 [2024-11-02 14:51:54.167900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.412 [2024-11-02 14:51:54.167926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.412 qpair failed and we were unable to recover it. 00:36:02.412 [2024-11-02 14:51:54.168098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.412 [2024-11-02 14:51:54.168123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.412 qpair failed and we were unable to recover it. 00:36:02.412 [2024-11-02 14:51:54.168245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.412 [2024-11-02 14:51:54.168280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.412 qpair failed and we were unable to recover it. 
00:36:02.412 [2024-11-02 14:51:54.168409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.412 [2024-11-02 14:51:54.168434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.412 qpair failed and we were unable to recover it. 00:36:02.412 [2024-11-02 14:51:54.168578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.412 [2024-11-02 14:51:54.168607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.412 qpair failed and we were unable to recover it. 00:36:02.412 [2024-11-02 14:51:54.168759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.412 [2024-11-02 14:51:54.168785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.412 qpair failed and we were unable to recover it. 00:36:02.412 [2024-11-02 14:51:54.168937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.412 [2024-11-02 14:51:54.168963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.412 qpair failed and we were unable to recover it. 00:36:02.412 [2024-11-02 14:51:54.169105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.412 [2024-11-02 14:51:54.169130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.412 qpair failed and we were unable to recover it. 00:36:02.412 [2024-11-02 14:51:54.169279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.412 [2024-11-02 14:51:54.169306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.412 qpair failed and we were unable to recover it. 00:36:02.412 [2024-11-02 14:51:54.169453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.412 [2024-11-02 14:51:54.169478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.412 qpair failed and we were unable to recover it. 00:36:02.412 [2024-11-02 14:51:54.169593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.412 [2024-11-02 14:51:54.169619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.412 qpair failed and we were unable to recover it. 00:36:02.412 [2024-11-02 14:51:54.169766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.412 [2024-11-02 14:51:54.169792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.412 qpair failed and we were unable to recover it. 00:36:02.412 [2024-11-02 14:51:54.169940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.412 [2024-11-02 14:51:54.169966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.412 qpair failed and we were unable to recover it. 
00:36:02.412 [2024-11-02 14:51:54.170116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.412 [2024-11-02 14:51:54.170142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.412 qpair failed and we were unable to recover it. 00:36:02.412 [2024-11-02 14:51:54.170314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.412 [2024-11-02 14:51:54.170342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.412 qpair failed and we were unable to recover it. 00:36:02.412 [2024-11-02 14:51:54.170472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.412 [2024-11-02 14:51:54.170498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.412 qpair failed and we were unable to recover it. 00:36:02.412 [2024-11-02 14:51:54.170618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.412 [2024-11-02 14:51:54.170644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.412 qpair failed and we were unable to recover it. 00:36:02.412 [2024-11-02 14:51:54.170792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.412 [2024-11-02 14:51:54.170818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.412 qpair failed and we were unable to recover it. 00:36:02.412 [2024-11-02 14:51:54.170982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.412 [2024-11-02 14:51:54.171013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.412 qpair failed and we were unable to recover it. 00:36:02.412 [2024-11-02 14:51:54.171157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.412 [2024-11-02 14:51:54.171185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.412 qpair failed and we were unable to recover it. 00:36:02.412 [2024-11-02 14:51:54.171335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.412 [2024-11-02 14:51:54.171362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.412 qpair failed and we were unable to recover it. 00:36:02.412 [2024-11-02 14:51:54.171515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.413 [2024-11-02 14:51:54.171540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.413 qpair failed and we were unable to recover it. 00:36:02.413 [2024-11-02 14:51:54.171694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.413 [2024-11-02 14:51:54.171721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.413 qpair failed and we were unable to recover it. 
00:36:02.413 [2024-11-02 14:51:54.171877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.413 [2024-11-02 14:51:54.171902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.413 qpair failed and we were unable to recover it. 00:36:02.413 [2024-11-02 14:51:54.172022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.413 [2024-11-02 14:51:54.172049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.413 qpair failed and we were unable to recover it. 00:36:02.413 [2024-11-02 14:51:54.172220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.413 [2024-11-02 14:51:54.172246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.413 qpair failed and we were unable to recover it. 00:36:02.413 [2024-11-02 14:51:54.172412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.413 [2024-11-02 14:51:54.172438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.413 qpair failed and we were unable to recover it. 00:36:02.413 [2024-11-02 14:51:54.172560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.413 [2024-11-02 14:51:54.172585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.413 qpair failed and we were unable to recover it. 00:36:02.413 [2024-11-02 14:51:54.172729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.413 [2024-11-02 14:51:54.172754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.413 qpair failed and we were unable to recover it. 00:36:02.413 [2024-11-02 14:51:54.172901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.413 [2024-11-02 14:51:54.172927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.413 qpair failed and we were unable to recover it. 00:36:02.413 [2024-11-02 14:51:54.173104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.413 [2024-11-02 14:51:54.173130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.413 qpair failed and we were unable to recover it. 00:36:02.413 [2024-11-02 14:51:54.173252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.413 [2024-11-02 14:51:54.173286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.413 qpair failed and we were unable to recover it. 00:36:02.413 [2024-11-02 14:51:54.173462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.413 [2024-11-02 14:51:54.173488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.413 qpair failed and we were unable to recover it. 
00:36:02.413 [2024-11-02 14:51:54.173619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.413 [2024-11-02 14:51:54.173645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.413 qpair failed and we were unable to recover it. 00:36:02.413 [2024-11-02 14:51:54.173767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.413 [2024-11-02 14:51:54.173794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.413 qpair failed and we were unable to recover it. 00:36:02.413 [2024-11-02 14:51:54.173920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.413 [2024-11-02 14:51:54.173946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.413 qpair failed and we were unable to recover it. 00:36:02.413 [2024-11-02 14:51:54.174069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.413 [2024-11-02 14:51:54.174095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.413 qpair failed and we were unable to recover it. 00:36:02.413 [2024-11-02 14:51:54.174218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.413 [2024-11-02 14:51:54.174245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.413 qpair failed and we were unable to recover it. 00:36:02.413 [2024-11-02 14:51:54.174382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.413 [2024-11-02 14:51:54.174408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.413 qpair failed and we were unable to recover it. 00:36:02.413 [2024-11-02 14:51:54.174555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.413 [2024-11-02 14:51:54.174580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.413 qpair failed and we were unable to recover it. 00:36:02.413 [2024-11-02 14:51:54.174727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.413 [2024-11-02 14:51:54.174753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.413 qpair failed and we were unable to recover it. 00:36:02.413 [2024-11-02 14:51:54.174895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.413 [2024-11-02 14:51:54.174921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.413 qpair failed and we were unable to recover it. 00:36:02.413 [2024-11-02 14:51:54.175051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.413 [2024-11-02 14:51:54.175078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.413 qpair failed and we were unable to recover it. 
00:36:02.413 [2024-11-02 14:51:54.175225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.413 [2024-11-02 14:51:54.175251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.413 qpair failed and we were unable to recover it. 00:36:02.413 [2024-11-02 14:51:54.175396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.413 [2024-11-02 14:51:54.175423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.413 qpair failed and we were unable to recover it. 00:36:02.413 [2024-11-02 14:51:54.175543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.413 [2024-11-02 14:51:54.175569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.413 qpair failed and we were unable to recover it. 00:36:02.413 [2024-11-02 14:51:54.175700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.413 [2024-11-02 14:51:54.175725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.413 qpair failed and we were unable to recover it. 00:36:02.413 [2024-11-02 14:51:54.175902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.413 [2024-11-02 14:51:54.175928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.413 qpair failed and we were unable to recover it. 00:36:02.413 [2024-11-02 14:51:54.176051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.413 [2024-11-02 14:51:54.176077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.413 qpair failed and we were unable to recover it. 00:36:02.413 [2024-11-02 14:51:54.176204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.413 [2024-11-02 14:51:54.176230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.413 qpair failed and we were unable to recover it. 00:36:02.413 [2024-11-02 14:51:54.176361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.413 [2024-11-02 14:51:54.176388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.413 qpair failed and we were unable to recover it. 00:36:02.413 [2024-11-02 14:51:54.176563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.413 [2024-11-02 14:51:54.176588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.413 qpair failed and we were unable to recover it. 00:36:02.413 [2024-11-02 14:51:54.176762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.413 [2024-11-02 14:51:54.176788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.413 qpair failed and we were unable to recover it. 
00:36:02.413 [2024-11-02 14:51:54.176936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.413 [2024-11-02 14:51:54.176962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.413 qpair failed and we were unable to recover it. 00:36:02.413 [2024-11-02 14:51:54.177117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.413 [2024-11-02 14:51:54.177143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.413 qpair failed and we were unable to recover it. 00:36:02.413 [2024-11-02 14:51:54.177298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.413 [2024-11-02 14:51:54.177325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.413 qpair failed and we were unable to recover it. 00:36:02.413 [2024-11-02 14:51:54.177471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.413 [2024-11-02 14:51:54.177497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.413 qpair failed and we were unable to recover it. 00:36:02.413 [2024-11-02 14:51:54.177657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.413 [2024-11-02 14:51:54.177683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.413 qpair failed and we were unable to recover it. 00:36:02.413 [2024-11-02 14:51:54.177833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.413 [2024-11-02 14:51:54.177860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.413 qpair failed and we were unable to recover it. 00:36:02.413 [2024-11-02 14:51:54.178010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.413 [2024-11-02 14:51:54.178036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.413 qpair failed and we were unable to recover it. 00:36:02.414 [2024-11-02 14:51:54.178161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.414 [2024-11-02 14:51:54.178188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.414 qpair failed and we were unable to recover it. 00:36:02.414 [2024-11-02 14:51:54.178331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.414 [2024-11-02 14:51:54.178358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.414 qpair failed and we were unable to recover it. 00:36:02.414 [2024-11-02 14:51:54.178489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.414 [2024-11-02 14:51:54.178515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.414 qpair failed and we were unable to recover it. 
00:36:02.414 [2024-11-02 14:51:54.178660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.414 [2024-11-02 14:51:54.178688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.414 qpair failed and we were unable to recover it. 00:36:02.414 [2024-11-02 14:51:54.178819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.414 [2024-11-02 14:51:54.178845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.414 qpair failed and we were unable to recover it. 00:36:02.414 [2024-11-02 14:51:54.178974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.414 [2024-11-02 14:51:54.179000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.414 qpair failed and we were unable to recover it. 00:36:02.414 [2024-11-02 14:51:54.179149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.414 [2024-11-02 14:51:54.179174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.414 qpair failed and we were unable to recover it. 00:36:02.414 [2024-11-02 14:51:54.179295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.414 [2024-11-02 14:51:54.179321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.414 qpair failed and we were unable to recover it. 00:36:02.414 [2024-11-02 14:51:54.179447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.414 [2024-11-02 14:51:54.179473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.414 qpair failed and we were unable to recover it. 00:36:02.414 [2024-11-02 14:51:54.179619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.414 [2024-11-02 14:51:54.179644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.414 qpair failed and we were unable to recover it. 00:36:02.414 [2024-11-02 14:51:54.179800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.414 [2024-11-02 14:51:54.179825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.414 qpair failed and we were unable to recover it. 00:36:02.414 [2024-11-02 14:51:54.179945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.414 [2024-11-02 14:51:54.179971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.414 qpair failed and we were unable to recover it. 00:36:02.414 [2024-11-02 14:51:54.180110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.414 [2024-11-02 14:51:54.180140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.414 qpair failed and we were unable to recover it. 
00:36:02.414 [2024-11-02 14:51:54.180293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.414 [2024-11-02 14:51:54.180319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.414 qpair failed and we were unable to recover it. 00:36:02.414 [2024-11-02 14:51:54.180469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.414 [2024-11-02 14:51:54.180495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.414 qpair failed and we were unable to recover it. 00:36:02.414 [2024-11-02 14:51:54.180651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.414 [2024-11-02 14:51:54.180676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.414 qpair failed and we were unable to recover it. 00:36:02.414 [2024-11-02 14:51:54.180825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.414 [2024-11-02 14:51:54.180850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.414 qpair failed and we were unable to recover it. 00:36:02.414 [2024-11-02 14:51:54.181024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.414 [2024-11-02 14:51:54.181049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.414 qpair failed and we were unable to recover it. 00:36:02.414 [2024-11-02 14:51:54.181184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.414 [2024-11-02 14:51:54.181210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.414 qpair failed and we were unable to recover it. 00:36:02.414 [2024-11-02 14:51:54.181337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.414 [2024-11-02 14:51:54.181363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.414 qpair failed and we were unable to recover it. 00:36:02.414 [2024-11-02 14:51:54.181513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.414 [2024-11-02 14:51:54.181539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.414 qpair failed and we were unable to recover it. 00:36:02.414 [2024-11-02 14:51:54.181694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.414 [2024-11-02 14:51:54.181720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.414 qpair failed and we were unable to recover it. 00:36:02.414 [2024-11-02 14:51:54.181898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.414 [2024-11-02 14:51:54.181922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.414 qpair failed and we were unable to recover it. 
00:36:02.414 [2024-11-02 14:51:54.182067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.414 [2024-11-02 14:51:54.182092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.414 qpair failed and we were unable to recover it. 00:36:02.414 [2024-11-02 14:51:54.182210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.414 [2024-11-02 14:51:54.182236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.414 qpair failed and we were unable to recover it. 00:36:02.414 [2024-11-02 14:51:54.182370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.414 [2024-11-02 14:51:54.182395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.414 qpair failed and we were unable to recover it. 00:36:02.414 [2024-11-02 14:51:54.182557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.414 [2024-11-02 14:51:54.182583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.414 qpair failed and we were unable to recover it. 00:36:02.414 [2024-11-02 14:51:54.182731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.414 [2024-11-02 14:51:54.182759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.414 qpair failed and we were unable to recover it. 00:36:02.414 [2024-11-02 14:51:54.182875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.414 [2024-11-02 14:51:54.182900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.414 qpair failed and we were unable to recover it. 00:36:02.414 [2024-11-02 14:51:54.183046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.414 [2024-11-02 14:51:54.183072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.414 qpair failed and we were unable to recover it. 00:36:02.414 [2024-11-02 14:51:54.183224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.414 [2024-11-02 14:51:54.183249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.414 qpair failed and we were unable to recover it. 00:36:02.414 [2024-11-02 14:51:54.183418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.414 [2024-11-02 14:51:54.183444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.414 qpair failed and we were unable to recover it. 00:36:02.414 [2024-11-02 14:51:54.183617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.414 [2024-11-02 14:51:54.183643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.414 qpair failed and we were unable to recover it. 
00:36:02.414 [2024-11-02 14:51:54.183768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.414 [2024-11-02 14:51:54.183794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.414 qpair failed and we were unable to recover it. 00:36:02.414 [2024-11-02 14:51:54.183944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.414 [2024-11-02 14:51:54.183969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.414 qpair failed and we were unable to recover it. 00:36:02.414 [2024-11-02 14:51:54.184138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.414 [2024-11-02 14:51:54.184163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.414 qpair failed and we were unable to recover it. 00:36:02.414 [2024-11-02 14:51:54.184336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.414 [2024-11-02 14:51:54.184363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.414 qpair failed and we were unable to recover it. 00:36:02.414 [2024-11-02 14:51:54.184476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.414 [2024-11-02 14:51:54.184501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.414 qpair failed and we were unable to recover it. 00:36:02.414 [2024-11-02 14:51:54.184649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.414 [2024-11-02 14:51:54.184675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.414 qpair failed and we were unable to recover it. 00:36:02.415 [2024-11-02 14:51:54.184822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.415 [2024-11-02 14:51:54.184848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.415 qpair failed and we were unable to recover it. 00:36:02.415 [2024-11-02 14:51:54.185000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.415 [2024-11-02 14:51:54.185025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.415 qpair failed and we were unable to recover it. 00:36:02.415 [2024-11-02 14:51:54.185152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.415 [2024-11-02 14:51:54.185177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.415 qpair failed and we were unable to recover it. 00:36:02.415 [2024-11-02 14:51:54.185324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.415 [2024-11-02 14:51:54.185350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.415 qpair failed and we were unable to recover it. 
00:36:02.415 [2024-11-02 14:51:54.185464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.415 [2024-11-02 14:51:54.185489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420
00:36:02.415 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every retry from 2024-11-02 14:51:54.185 through 14:51:54.221 ...]
00:36:02.420 [2024-11-02 14:51:54.221717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.420 [2024-11-02 14:51:54.221744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420
00:36:02.420 qpair failed and we were unable to recover it.
00:36:02.420 [2024-11-02 14:51:54.221920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.420 [2024-11-02 14:51:54.221945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.420 qpair failed and we were unable to recover it. 00:36:02.420 [2024-11-02 14:51:54.222097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.420 [2024-11-02 14:51:54.222123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.420 qpair failed and we were unable to recover it. 00:36:02.420 [2024-11-02 14:51:54.222265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.420 [2024-11-02 14:51:54.222291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.420 qpair failed and we were unable to recover it. 00:36:02.420 [2024-11-02 14:51:54.222468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.420 [2024-11-02 14:51:54.222494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.420 qpair failed and we were unable to recover it. 00:36:02.420 [2024-11-02 14:51:54.222642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.420 [2024-11-02 14:51:54.222671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.420 qpair failed and we were unable to recover it. 00:36:02.420 [2024-11-02 14:51:54.222804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.420 [2024-11-02 14:51:54.222832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.420 qpair failed and we were unable to recover it. 00:36:02.420 [2024-11-02 14:51:54.222983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.420 [2024-11-02 14:51:54.223009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.420 qpair failed and we were unable to recover it. 00:36:02.420 [2024-11-02 14:51:54.223170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.420 [2024-11-02 14:51:54.223196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.420 qpair failed and we were unable to recover it. 00:36:02.420 [2024-11-02 14:51:54.223322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.420 [2024-11-02 14:51:54.223349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.420 qpair failed and we were unable to recover it. 00:36:02.420 [2024-11-02 14:51:54.223522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.421 [2024-11-02 14:51:54.223548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.421 qpair failed and we were unable to recover it. 
00:36:02.421 [2024-11-02 14:51:54.223669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.421 [2024-11-02 14:51:54.223696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.421 qpair failed and we were unable to recover it. 00:36:02.421 [2024-11-02 14:51:54.223845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.421 [2024-11-02 14:51:54.223871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.421 qpair failed and we were unable to recover it. 00:36:02.421 [2024-11-02 14:51:54.224031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.421 [2024-11-02 14:51:54.224056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.421 qpair failed and we were unable to recover it. 00:36:02.421 [2024-11-02 14:51:54.224211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.421 [2024-11-02 14:51:54.224236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.421 qpair failed and we were unable to recover it. 00:36:02.421 [2024-11-02 14:51:54.224369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.421 [2024-11-02 14:51:54.224395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.421 qpair failed and we were unable to recover it. 00:36:02.421 [2024-11-02 14:51:54.224544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.421 [2024-11-02 14:51:54.224568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.421 qpair failed and we were unable to recover it. 00:36:02.421 [2024-11-02 14:51:54.224690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.421 [2024-11-02 14:51:54.224716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.421 qpair failed and we were unable to recover it. 00:36:02.421 [2024-11-02 14:51:54.224872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.421 [2024-11-02 14:51:54.224898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.421 qpair failed and we were unable to recover it. 00:36:02.421 [2024-11-02 14:51:54.225046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.421 [2024-11-02 14:51:54.225071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.421 qpair failed and we were unable to recover it. 00:36:02.421 [2024-11-02 14:51:54.225219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.421 [2024-11-02 14:51:54.225245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.421 qpair failed and we were unable to recover it. 
00:36:02.421 [2024-11-02 14:51:54.225375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.421 [2024-11-02 14:51:54.225402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.421 qpair failed and we were unable to recover it. 00:36:02.421 [2024-11-02 14:51:54.225528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.421 [2024-11-02 14:51:54.225554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.421 qpair failed and we were unable to recover it. 00:36:02.421 [2024-11-02 14:51:54.225673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.421 [2024-11-02 14:51:54.225698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.421 qpair failed and we were unable to recover it. 00:36:02.421 [2024-11-02 14:51:54.225840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.421 [2024-11-02 14:51:54.225866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.421 qpair failed and we were unable to recover it. 00:36:02.421 [2024-11-02 14:51:54.225984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.421 [2024-11-02 14:51:54.226010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.421 qpair failed and we were unable to recover it. 00:36:02.421 [2024-11-02 14:51:54.226161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.421 [2024-11-02 14:51:54.226188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.421 qpair failed and we were unable to recover it. 00:36:02.421 [2024-11-02 14:51:54.226310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.421 [2024-11-02 14:51:54.226336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.421 qpair failed and we were unable to recover it. 00:36:02.421 [2024-11-02 14:51:54.226508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.421 [2024-11-02 14:51:54.226533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.421 qpair failed and we were unable to recover it. 00:36:02.421 [2024-11-02 14:51:54.226709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.421 [2024-11-02 14:51:54.226735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.421 qpair failed and we were unable to recover it. 00:36:02.421 [2024-11-02 14:51:54.226865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.421 [2024-11-02 14:51:54.226891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.421 qpair failed and we were unable to recover it. 
00:36:02.421 [2024-11-02 14:51:54.227012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.421 [2024-11-02 14:51:54.227036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.421 qpair failed and we were unable to recover it. 00:36:02.421 [2024-11-02 14:51:54.227185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.421 [2024-11-02 14:51:54.227211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.421 qpair failed and we were unable to recover it. 00:36:02.421 [2024-11-02 14:51:54.227389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.421 [2024-11-02 14:51:54.227415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.421 qpair failed and we were unable to recover it. 00:36:02.421 [2024-11-02 14:51:54.227568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.421 [2024-11-02 14:51:54.227593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.421 qpair failed and we were unable to recover it. 00:36:02.421 [2024-11-02 14:51:54.227746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.421 [2024-11-02 14:51:54.227771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.421 qpair failed and we were unable to recover it. 00:36:02.421 [2024-11-02 14:51:54.227925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.421 [2024-11-02 14:51:54.227950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.421 qpair failed and we were unable to recover it. 00:36:02.421 [2024-11-02 14:51:54.228123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.421 [2024-11-02 14:51:54.228149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.421 qpair failed and we were unable to recover it. 00:36:02.421 [2024-11-02 14:51:54.228326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.421 [2024-11-02 14:51:54.228352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.421 qpair failed and we were unable to recover it. 00:36:02.421 [2024-11-02 14:51:54.228496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.421 [2024-11-02 14:51:54.228522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.421 qpair failed and we were unable to recover it. 00:36:02.421 [2024-11-02 14:51:54.228645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.421 [2024-11-02 14:51:54.228670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.421 qpair failed and we were unable to recover it. 
00:36:02.421 [2024-11-02 14:51:54.228819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.421 [2024-11-02 14:51:54.228844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.421 qpair failed and we were unable to recover it. 00:36:02.421 [2024-11-02 14:51:54.228967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.421 [2024-11-02 14:51:54.228993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.421 qpair failed and we were unable to recover it. 00:36:02.421 [2024-11-02 14:51:54.229144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.421 [2024-11-02 14:51:54.229169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.421 qpair failed and we were unable to recover it. 00:36:02.421 [2024-11-02 14:51:54.229312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.421 [2024-11-02 14:51:54.229339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.421 qpair failed and we were unable to recover it. 00:36:02.421 [2024-11-02 14:51:54.229514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.421 [2024-11-02 14:51:54.229540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.421 qpair failed and we were unable to recover it. 00:36:02.421 [2024-11-02 14:51:54.229694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.421 [2024-11-02 14:51:54.229720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.421 qpair failed and we were unable to recover it. 00:36:02.421 [2024-11-02 14:51:54.229872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.421 [2024-11-02 14:51:54.229897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.421 qpair failed and we were unable to recover it. 00:36:02.421 [2024-11-02 14:51:54.230017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.421 [2024-11-02 14:51:54.230043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.421 qpair failed and we were unable to recover it. 00:36:02.422 [2024-11-02 14:51:54.230188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.422 [2024-11-02 14:51:54.230213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.422 qpair failed and we were unable to recover it. 00:36:02.422 [2024-11-02 14:51:54.230377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.422 [2024-11-02 14:51:54.230402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.422 qpair failed and we were unable to recover it. 
00:36:02.422 [2024-11-02 14:51:54.230575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.422 [2024-11-02 14:51:54.230601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.422 qpair failed and we were unable to recover it. 00:36:02.422 [2024-11-02 14:51:54.230724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.422 [2024-11-02 14:51:54.230751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.422 qpair failed and we were unable to recover it. 00:36:02.422 [2024-11-02 14:51:54.230875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.422 [2024-11-02 14:51:54.230900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.422 qpair failed and we were unable to recover it. 00:36:02.422 [2024-11-02 14:51:54.231061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.422 [2024-11-02 14:51:54.231088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.422 qpair failed and we were unable to recover it. 00:36:02.422 [2024-11-02 14:51:54.231212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.422 [2024-11-02 14:51:54.231236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.422 qpair failed and we were unable to recover it. 00:36:02.422 [2024-11-02 14:51:54.231368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.422 [2024-11-02 14:51:54.231395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.422 qpair failed and we were unable to recover it. 00:36:02.422 [2024-11-02 14:51:54.231557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.422 [2024-11-02 14:51:54.231582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.422 qpair failed and we were unable to recover it. 00:36:02.422 [2024-11-02 14:51:54.231731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.422 [2024-11-02 14:51:54.231757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.422 qpair failed and we were unable to recover it. 00:36:02.422 [2024-11-02 14:51:54.231880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.422 [2024-11-02 14:51:54.231905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.422 qpair failed and we were unable to recover it. 00:36:02.422 [2024-11-02 14:51:54.232035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.422 [2024-11-02 14:51:54.232061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.422 qpair failed and we were unable to recover it. 
00:36:02.422 [2024-11-02 14:51:54.232212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.422 [2024-11-02 14:51:54.232237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.422 qpair failed and we were unable to recover it. 00:36:02.422 [2024-11-02 14:51:54.232403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.422 [2024-11-02 14:51:54.232429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.422 qpair failed and we were unable to recover it. 00:36:02.422 [2024-11-02 14:51:54.232548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.422 [2024-11-02 14:51:54.232572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.422 qpair failed and we were unable to recover it. 00:36:02.422 [2024-11-02 14:51:54.232720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.422 [2024-11-02 14:51:54.232746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.422 qpair failed and we were unable to recover it. 00:36:02.422 [2024-11-02 14:51:54.232872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.422 [2024-11-02 14:51:54.232898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.422 qpair failed and we were unable to recover it. 00:36:02.422 [2024-11-02 14:51:54.233028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.422 [2024-11-02 14:51:54.233054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.422 qpair failed and we were unable to recover it. 00:36:02.422 [2024-11-02 14:51:54.233237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.422 [2024-11-02 14:51:54.233270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.422 qpair failed and we were unable to recover it. 00:36:02.422 [2024-11-02 14:51:54.233444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.422 [2024-11-02 14:51:54.233470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.422 qpair failed and we were unable to recover it. 00:36:02.422 [2024-11-02 14:51:54.233622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.422 [2024-11-02 14:51:54.233646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.422 qpair failed and we were unable to recover it. 00:36:02.422 [2024-11-02 14:51:54.233815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.422 [2024-11-02 14:51:54.233842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.422 qpair failed and we were unable to recover it. 
00:36:02.422 [2024-11-02 14:51:54.234000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.422 [2024-11-02 14:51:54.234025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.422 qpair failed and we were unable to recover it. 00:36:02.422 [2024-11-02 14:51:54.234196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.422 [2024-11-02 14:51:54.234221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.422 qpair failed and we were unable to recover it. 00:36:02.422 [2024-11-02 14:51:54.234377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.422 [2024-11-02 14:51:54.234408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.422 qpair failed and we were unable to recover it. 00:36:02.422 [2024-11-02 14:51:54.234559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.422 [2024-11-02 14:51:54.234585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.422 qpair failed and we were unable to recover it. 00:36:02.422 [2024-11-02 14:51:54.234735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.422 [2024-11-02 14:51:54.234760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.422 qpair failed and we were unable to recover it. 00:36:02.422 [2024-11-02 14:51:54.234904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.422 [2024-11-02 14:51:54.234929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.422 qpair failed and we were unable to recover it. 00:36:02.422 [2024-11-02 14:51:54.235088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.422 [2024-11-02 14:51:54.235113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.422 qpair failed and we were unable to recover it. 00:36:02.422 [2024-11-02 14:51:54.235264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.422 [2024-11-02 14:51:54.235290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.422 qpair failed and we were unable to recover it. 00:36:02.422 [2024-11-02 14:51:54.235416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.422 [2024-11-02 14:51:54.235441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.422 qpair failed and we were unable to recover it. 00:36:02.422 [2024-11-02 14:51:54.235593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.422 [2024-11-02 14:51:54.235619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.422 qpair failed and we were unable to recover it. 
00:36:02.422 [2024-11-02 14:51:54.235774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.422 [2024-11-02 14:51:54.235799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.422 qpair failed and we were unable to recover it. 00:36:02.422 [2024-11-02 14:51:54.235944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.422 [2024-11-02 14:51:54.235970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.422 qpair failed and we were unable to recover it. 00:36:02.422 [2024-11-02 14:51:54.236089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.422 [2024-11-02 14:51:54.236115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.422 qpair failed and we were unable to recover it. 00:36:02.422 [2024-11-02 14:51:54.236244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.422 [2024-11-02 14:51:54.236290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.422 qpair failed and we were unable to recover it. 00:36:02.422 [2024-11-02 14:51:54.236467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.422 [2024-11-02 14:51:54.236493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.422 qpair failed and we were unable to recover it. 00:36:02.422 [2024-11-02 14:51:54.236624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.422 [2024-11-02 14:51:54.236650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.422 qpair failed and we were unable to recover it. 00:36:02.422 [2024-11-02 14:51:54.236778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.422 [2024-11-02 14:51:54.236803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.422 qpair failed and we were unable to recover it. 00:36:02.423 [2024-11-02 14:51:54.236941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.423 [2024-11-02 14:51:54.236967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.423 qpair failed and we were unable to recover it. 00:36:02.423 [2024-11-02 14:51:54.237091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.423 [2024-11-02 14:51:54.237116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.423 qpair failed and we were unable to recover it. 00:36:02.423 [2024-11-02 14:51:54.237242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.423 [2024-11-02 14:51:54.237276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.423 qpair failed and we were unable to recover it. 
00:36:02.423 [2024-11-02 14:51:54.237395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.423 [2024-11-02 14:51:54.237421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.423 qpair failed and we were unable to recover it. 00:36:02.423 [2024-11-02 14:51:54.237564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.423 [2024-11-02 14:51:54.237589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.423 qpair failed and we were unable to recover it. 00:36:02.423 [2024-11-02 14:51:54.237763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.423 [2024-11-02 14:51:54.237788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.423 qpair failed and we were unable to recover it. 00:36:02.423 [2024-11-02 14:51:54.237907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.423 [2024-11-02 14:51:54.237932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.423 qpair failed and we were unable to recover it. 00:36:02.423 [2024-11-02 14:51:54.238079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.423 [2024-11-02 14:51:54.238103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.423 qpair failed and we were unable to recover it. 00:36:02.423 [2024-11-02 14:51:54.238225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.423 [2024-11-02 14:51:54.238251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.423 qpair failed and we were unable to recover it. 00:36:02.423 [2024-11-02 14:51:54.238379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.423 [2024-11-02 14:51:54.238405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.423 qpair failed and we were unable to recover it. 00:36:02.423 [2024-11-02 14:51:54.238520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.423 [2024-11-02 14:51:54.238546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.423 qpair failed and we were unable to recover it. 00:36:02.423 [2024-11-02 14:51:54.238670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.423 [2024-11-02 14:51:54.238696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.423 qpair failed and we were unable to recover it. 00:36:02.423 [2024-11-02 14:51:54.238839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.423 [2024-11-02 14:51:54.238864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.423 qpair failed and we were unable to recover it. 
00:36:02.423 [2024-11-02 14:51:54.239040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.423 [2024-11-02 14:51:54.239067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.423 qpair failed and we were unable to recover it. 00:36:02.423 [2024-11-02 14:51:54.239197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.423 [2024-11-02 14:51:54.239223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.423 qpair failed and we were unable to recover it. 00:36:02.423 [2024-11-02 14:51:54.239406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.423 [2024-11-02 14:51:54.239432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.423 qpair failed and we were unable to recover it. 00:36:02.423 [2024-11-02 14:51:54.239582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.423 [2024-11-02 14:51:54.239608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.423 qpair failed and we were unable to recover it. 00:36:02.423 [2024-11-02 14:51:54.239757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.423 [2024-11-02 14:51:54.239783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.423 qpair failed and we were unable to recover it. 00:36:02.423 [2024-11-02 14:51:54.239940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.423 [2024-11-02 14:51:54.239965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.423 qpair failed and we were unable to recover it. 00:36:02.423 [2024-11-02 14:51:54.240095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.423 [2024-11-02 14:51:54.240120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.423 qpair failed and we were unable to recover it. 00:36:02.423 [2024-11-02 14:51:54.240286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.423 [2024-11-02 14:51:54.240312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.423 qpair failed and we were unable to recover it. 00:36:02.423 [2024-11-02 14:51:54.240436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.423 [2024-11-02 14:51:54.240461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.423 qpair failed and we were unable to recover it. 00:36:02.423 [2024-11-02 14:51:54.240618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.423 [2024-11-02 14:51:54.240644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.423 qpair failed and we were unable to recover it. 
00:36:02.423 [2024-11-02 14:51:54.240771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.423 [2024-11-02 14:51:54.240795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.423 qpair failed and we were unable to recover it. 00:36:02.423 [2024-11-02 14:51:54.240947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.423 [2024-11-02 14:51:54.240973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.423 qpair failed and we were unable to recover it. 00:36:02.423 [2024-11-02 14:51:54.241121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.423 [2024-11-02 14:51:54.241146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.423 qpair failed and we were unable to recover it. 00:36:02.423 [2024-11-02 14:51:54.241309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.423 [2024-11-02 14:51:54.241339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.423 qpair failed and we were unable to recover it. 00:36:02.423 [2024-11-02 14:51:54.241464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.423 [2024-11-02 14:51:54.241489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.423 qpair failed and we were unable to recover it. 00:36:02.423 [2024-11-02 14:51:54.241639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.423 [2024-11-02 14:51:54.241665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.423 qpair failed and we were unable to recover it. 00:36:02.423 [2024-11-02 14:51:54.241811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.423 [2024-11-02 14:51:54.241836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.423 qpair failed and we were unable to recover it. 00:36:02.423 [2024-11-02 14:51:54.241963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.423 [2024-11-02 14:51:54.241988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.423 qpair failed and we were unable to recover it. 00:36:02.423 [2024-11-02 14:51:54.242159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.423 [2024-11-02 14:51:54.242184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.423 qpair failed and we were unable to recover it. 00:36:02.423 [2024-11-02 14:51:54.242354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.423 [2024-11-02 14:51:54.242380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.423 qpair failed and we were unable to recover it. 
00:36:02.423 [2024-11-02 14:51:54.242536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.423 [2024-11-02 14:51:54.242561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.423 qpair failed and we were unable to recover it. 00:36:02.423 [2024-11-02 14:51:54.242796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.423 [2024-11-02 14:51:54.242822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.423 qpair failed and we were unable to recover it. 00:36:02.423 [2024-11-02 14:51:54.242987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.423 [2024-11-02 14:51:54.243013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.423 qpair failed and we were unable to recover it. 00:36:02.423 [2024-11-02 14:51:54.243156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.423 [2024-11-02 14:51:54.243181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.423 qpair failed and we were unable to recover it. 00:36:02.423 [2024-11-02 14:51:54.243307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.423 [2024-11-02 14:51:54.243333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.423 qpair failed and we were unable to recover it. 00:36:02.423 [2024-11-02 14:51:54.243512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.423 [2024-11-02 14:51:54.243537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.423 qpair failed and we were unable to recover it. 00:36:02.423 [2024-11-02 14:51:54.243667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.424 [2024-11-02 14:51:54.243692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.424 qpair failed and we were unable to recover it. 00:36:02.424 [2024-11-02 14:51:54.243822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.424 [2024-11-02 14:51:54.243847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.424 qpair failed and we were unable to recover it. 00:36:02.424 [2024-11-02 14:51:54.243966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.424 [2024-11-02 14:51:54.243992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.424 qpair failed and we were unable to recover it. 00:36:02.424 [2024-11-02 14:51:54.244169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.424 [2024-11-02 14:51:54.244194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.424 qpair failed and we were unable to recover it. 
00:36:02.424 [2024-11-02 14:51:54.244330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.424 [2024-11-02 14:51:54.244356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420
00:36:02.424 qpair failed and we were unable to recover it.
00:36:02.424 [... the same three-record sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every reconnect attempt, with only the timestamps changing, from 14:51:54.244 through 14:51:54.280 ...]
00:36:02.429 [2024-11-02 14:51:54.280545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.429 [2024-11-02 14:51:54.280569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420
00:36:02.429 qpair failed and we were unable to recover it.
00:36:02.429 [2024-11-02 14:51:54.280691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.429 [2024-11-02 14:51:54.280718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.429 qpair failed and we were unable to recover it. 00:36:02.429 [2024-11-02 14:51:54.280866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.429 [2024-11-02 14:51:54.280893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.429 qpair failed and we were unable to recover it. 00:36:02.429 [2024-11-02 14:51:54.281046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.429 [2024-11-02 14:51:54.281072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.429 qpair failed and we were unable to recover it. 00:36:02.429 [2024-11-02 14:51:54.281244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.429 [2024-11-02 14:51:54.281277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.429 qpair failed and we were unable to recover it. 00:36:02.429 [2024-11-02 14:51:54.281407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.429 [2024-11-02 14:51:54.281432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.429 qpair failed and we were unable to recover it. 00:36:02.429 [2024-11-02 14:51:54.281562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.429 [2024-11-02 14:51:54.281587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.429 qpair failed and we were unable to recover it. 00:36:02.429 [2024-11-02 14:51:54.281736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.429 [2024-11-02 14:51:54.281761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.429 qpair failed and we were unable to recover it. 00:36:02.429 [2024-11-02 14:51:54.281910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.429 [2024-11-02 14:51:54.281938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.429 qpair failed and we were unable to recover it. 00:36:02.429 [2024-11-02 14:51:54.282091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.429 [2024-11-02 14:51:54.282116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.429 qpair failed and we were unable to recover it. 00:36:02.429 [2024-11-02 14:51:54.282241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.429 [2024-11-02 14:51:54.282272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.430 qpair failed and we were unable to recover it. 
00:36:02.430 [2024-11-02 14:51:54.282423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.430 [2024-11-02 14:51:54.282449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.430 qpair failed and we were unable to recover it. 00:36:02.430 [2024-11-02 14:51:54.282596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.430 [2024-11-02 14:51:54.282621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.430 qpair failed and we were unable to recover it. 00:36:02.430 [2024-11-02 14:51:54.282774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.430 [2024-11-02 14:51:54.282800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.430 qpair failed and we were unable to recover it. 00:36:02.430 [2024-11-02 14:51:54.282912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.430 [2024-11-02 14:51:54.282937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.430 qpair failed and we were unable to recover it. 00:36:02.430 [2024-11-02 14:51:54.283091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.430 [2024-11-02 14:51:54.283116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.430 qpair failed and we were unable to recover it. 00:36:02.430 [2024-11-02 14:51:54.283268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.430 [2024-11-02 14:51:54.283294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.430 qpair failed and we were unable to recover it. 00:36:02.430 [2024-11-02 14:51:54.283444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.430 [2024-11-02 14:51:54.283470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.430 qpair failed and we were unable to recover it. 00:36:02.430 [2024-11-02 14:51:54.283586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.430 [2024-11-02 14:51:54.283610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.430 qpair failed and we were unable to recover it. 00:36:02.430 [2024-11-02 14:51:54.283763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.430 [2024-11-02 14:51:54.283789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.430 qpair failed and we were unable to recover it. 00:36:02.430 [2024-11-02 14:51:54.283927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.430 [2024-11-02 14:51:54.283953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.430 qpair failed and we were unable to recover it. 
00:36:02.430 [2024-11-02 14:51:54.284098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.430 [2024-11-02 14:51:54.284124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.430 qpair failed and we were unable to recover it. 00:36:02.430 [2024-11-02 14:51:54.284282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.430 [2024-11-02 14:51:54.284309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.430 qpair failed and we were unable to recover it. 00:36:02.430 [2024-11-02 14:51:54.284457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.430 [2024-11-02 14:51:54.284483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.430 qpair failed and we were unable to recover it. 00:36:02.430 [2024-11-02 14:51:54.284628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.430 [2024-11-02 14:51:54.284653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.430 qpair failed and we were unable to recover it. 00:36:02.430 [2024-11-02 14:51:54.284801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.430 [2024-11-02 14:51:54.284826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.430 qpair failed and we were unable to recover it. 00:36:02.430 [2024-11-02 14:51:54.284970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.430 [2024-11-02 14:51:54.284996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.430 qpair failed and we were unable to recover it. 00:36:02.430 [2024-11-02 14:51:54.285122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.430 [2024-11-02 14:51:54.285147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.430 qpair failed and we were unable to recover it. 00:36:02.430 [2024-11-02 14:51:54.285310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.430 [2024-11-02 14:51:54.285335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.430 qpair failed and we were unable to recover it. 00:36:02.430 [2024-11-02 14:51:54.285485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.430 [2024-11-02 14:51:54.285511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.430 qpair failed and we were unable to recover it. 00:36:02.430 [2024-11-02 14:51:54.285635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.430 [2024-11-02 14:51:54.285660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.430 qpair failed and we were unable to recover it. 
00:36:02.430 [2024-11-02 14:51:54.285812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.430 [2024-11-02 14:51:54.285837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.430 qpair failed and we were unable to recover it. 00:36:02.430 [2024-11-02 14:51:54.285985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.430 [2024-11-02 14:51:54.286010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.430 qpair failed and we were unable to recover it. 00:36:02.430 [2024-11-02 14:51:54.286138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.430 [2024-11-02 14:51:54.286163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.430 qpair failed and we were unable to recover it. 00:36:02.430 [2024-11-02 14:51:54.286313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.430 [2024-11-02 14:51:54.286338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.430 qpair failed and we were unable to recover it. 00:36:02.430 [2024-11-02 14:51:54.286495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.430 [2024-11-02 14:51:54.286520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.430 qpair failed and we were unable to recover it. 00:36:02.430 [2024-11-02 14:51:54.286665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.430 [2024-11-02 14:51:54.286690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.430 qpair failed and we were unable to recover it. 00:36:02.430 [2024-11-02 14:51:54.286828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.430 [2024-11-02 14:51:54.286853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.430 qpair failed and we were unable to recover it. 00:36:02.430 [2024-11-02 14:51:54.286986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.430 [2024-11-02 14:51:54.287012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.430 qpair failed and we were unable to recover it. 00:36:02.430 [2024-11-02 14:51:54.287190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.430 [2024-11-02 14:51:54.287215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.430 qpair failed and we were unable to recover it. 00:36:02.430 [2024-11-02 14:51:54.287340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.430 [2024-11-02 14:51:54.287366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.430 qpair failed and we were unable to recover it. 
00:36:02.430 [2024-11-02 14:51:54.287524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.430 [2024-11-02 14:51:54.287550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.430 qpair failed and we were unable to recover it. 00:36:02.430 [2024-11-02 14:51:54.287667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.430 [2024-11-02 14:51:54.287691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.430 qpair failed and we were unable to recover it. 00:36:02.430 [2024-11-02 14:51:54.287841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.430 [2024-11-02 14:51:54.287866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.430 qpair failed and we were unable to recover it. 00:36:02.430 [2024-11-02 14:51:54.288015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.430 [2024-11-02 14:51:54.288040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.430 qpair failed and we were unable to recover it. 00:36:02.430 [2024-11-02 14:51:54.288166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.430 [2024-11-02 14:51:54.288191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.430 qpair failed and we were unable to recover it. 00:36:02.430 [2024-11-02 14:51:54.288342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.430 [2024-11-02 14:51:54.288373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.430 qpair failed and we were unable to recover it. 00:36:02.430 [2024-11-02 14:51:54.288551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.430 [2024-11-02 14:51:54.288577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.430 qpair failed and we were unable to recover it. 00:36:02.430 [2024-11-02 14:51:54.288726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.430 [2024-11-02 14:51:54.288752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.430 qpair failed and we were unable to recover it. 00:36:02.430 [2024-11-02 14:51:54.288920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.431 [2024-11-02 14:51:54.288945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.431 qpair failed and we were unable to recover it. 00:36:02.431 [2024-11-02 14:51:54.289096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.431 [2024-11-02 14:51:54.289123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.431 qpair failed and we were unable to recover it. 
00:36:02.431 [2024-11-02 14:51:54.289299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.431 [2024-11-02 14:51:54.289326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.431 qpair failed and we were unable to recover it. 00:36:02.431 [2024-11-02 14:51:54.289476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.431 [2024-11-02 14:51:54.289502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.431 qpair failed and we were unable to recover it. 00:36:02.431 [2024-11-02 14:51:54.289648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.431 [2024-11-02 14:51:54.289674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.431 qpair failed and we were unable to recover it. 00:36:02.431 [2024-11-02 14:51:54.289820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.431 [2024-11-02 14:51:54.289850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.431 qpair failed and we were unable to recover it. 00:36:02.431 [2024-11-02 14:51:54.290002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.431 [2024-11-02 14:51:54.290028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.431 qpair failed and we were unable to recover it. 00:36:02.431 [2024-11-02 14:51:54.290204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.431 [2024-11-02 14:51:54.290229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.431 qpair failed and we were unable to recover it. 00:36:02.431 [2024-11-02 14:51:54.290377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.431 [2024-11-02 14:51:54.290403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.431 qpair failed and we were unable to recover it. 00:36:02.431 [2024-11-02 14:51:54.290547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.431 [2024-11-02 14:51:54.290573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.431 qpair failed and we were unable to recover it. 00:36:02.431 [2024-11-02 14:51:54.290703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.431 [2024-11-02 14:51:54.290728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.431 qpair failed and we were unable to recover it. 00:36:02.431 [2024-11-02 14:51:54.290893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.431 [2024-11-02 14:51:54.290919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.431 qpair failed and we were unable to recover it. 
00:36:02.431 [2024-11-02 14:51:54.291044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.431 [2024-11-02 14:51:54.291069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.431 qpair failed and we were unable to recover it. 00:36:02.431 [2024-11-02 14:51:54.291241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.431 [2024-11-02 14:51:54.291272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.431 qpair failed and we were unable to recover it. 00:36:02.431 [2024-11-02 14:51:54.291390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.431 [2024-11-02 14:51:54.291414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.431 qpair failed and we were unable to recover it. 00:36:02.431 [2024-11-02 14:51:54.291534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.431 [2024-11-02 14:51:54.291560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.431 qpair failed and we were unable to recover it. 00:36:02.431 [2024-11-02 14:51:54.291711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.431 [2024-11-02 14:51:54.291737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.431 qpair failed and we were unable to recover it. 00:36:02.431 [2024-11-02 14:51:54.291889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.431 [2024-11-02 14:51:54.291914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.431 qpair failed and we were unable to recover it. 00:36:02.431 [2024-11-02 14:51:54.292069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.431 [2024-11-02 14:51:54.292096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.431 qpair failed and we were unable to recover it. 00:36:02.431 [2024-11-02 14:51:54.292274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.431 [2024-11-02 14:51:54.292300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.431 qpair failed and we were unable to recover it. 00:36:02.431 [2024-11-02 14:51:54.292478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.431 [2024-11-02 14:51:54.292503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.431 qpair failed and we were unable to recover it. 00:36:02.431 [2024-11-02 14:51:54.292677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.431 [2024-11-02 14:51:54.292703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.431 qpair failed and we were unable to recover it. 
00:36:02.431 [2024-11-02 14:51:54.292863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.431 [2024-11-02 14:51:54.292889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.431 qpair failed and we were unable to recover it. 00:36:02.431 [2024-11-02 14:51:54.293030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.431 [2024-11-02 14:51:54.293056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.431 qpair failed and we were unable to recover it. 00:36:02.431 [2024-11-02 14:51:54.293198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.431 [2024-11-02 14:51:54.293223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.431 qpair failed and we were unable to recover it. 00:36:02.431 [2024-11-02 14:51:54.293385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.431 [2024-11-02 14:51:54.293412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.431 qpair failed and we were unable to recover it. 00:36:02.431 [2024-11-02 14:51:54.293567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.431 [2024-11-02 14:51:54.293593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.431 qpair failed and we were unable to recover it. 00:36:02.431 [2024-11-02 14:51:54.293717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.431 [2024-11-02 14:51:54.293743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.431 qpair failed and we were unable to recover it. 00:36:02.431 [2024-11-02 14:51:54.293869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.431 [2024-11-02 14:51:54.293895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.431 qpair failed and we were unable to recover it. 00:36:02.431 [2024-11-02 14:51:54.294040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.431 [2024-11-02 14:51:54.294065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.431 qpair failed and we were unable to recover it. 00:36:02.431 [2024-11-02 14:51:54.294213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.431 [2024-11-02 14:51:54.294239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.431 qpair failed and we were unable to recover it. 00:36:02.431 [2024-11-02 14:51:54.294374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.431 [2024-11-02 14:51:54.294401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.431 qpair failed and we were unable to recover it. 
00:36:02.431 [2024-11-02 14:51:54.294526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.431 [2024-11-02 14:51:54.294552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.431 qpair failed and we were unable to recover it. 00:36:02.431 [2024-11-02 14:51:54.294706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.431 [2024-11-02 14:51:54.294731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.431 qpair failed and we were unable to recover it. 00:36:02.431 [2024-11-02 14:51:54.294858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.431 [2024-11-02 14:51:54.294884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.431 qpair failed and we were unable to recover it. 00:36:02.431 [2024-11-02 14:51:54.295031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.431 [2024-11-02 14:51:54.295057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.431 qpair failed and we were unable to recover it. 00:36:02.431 [2024-11-02 14:51:54.295204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.431 [2024-11-02 14:51:54.295230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.431 qpair failed and we were unable to recover it. 00:36:02.431 [2024-11-02 14:51:54.295392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.431 [2024-11-02 14:51:54.295418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.431 qpair failed and we were unable to recover it. 00:36:02.431 [2024-11-02 14:51:54.295551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.431 [2024-11-02 14:51:54.295577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.431 qpair failed and we were unable to recover it. 00:36:02.431 [2024-11-02 14:51:54.295697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.432 [2024-11-02 14:51:54.295722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.432 qpair failed and we were unable to recover it. 00:36:02.432 [2024-11-02 14:51:54.295850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.432 [2024-11-02 14:51:54.295877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.432 qpair failed and we were unable to recover it. 00:36:02.432 [2024-11-02 14:51:54.296003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.432 [2024-11-02 14:51:54.296028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.432 qpair failed and we were unable to recover it. 
00:36:02.432 [2024-11-02 14:51:54.296180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.432 [2024-11-02 14:51:54.296206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.432 qpair failed and we were unable to recover it. 00:36:02.432 [2024-11-02 14:51:54.296330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.432 [2024-11-02 14:51:54.296358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.432 qpair failed and we were unable to recover it. 00:36:02.432 [2024-11-02 14:51:54.296480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.432 [2024-11-02 14:51:54.296505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.432 qpair failed and we were unable to recover it. 00:36:02.432 [2024-11-02 14:51:54.296629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.432 [2024-11-02 14:51:54.296655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.432 qpair failed and we were unable to recover it. 00:36:02.432 [2024-11-02 14:51:54.296837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.432 [2024-11-02 14:51:54.296863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.432 qpair failed and we were unable to recover it. 00:36:02.432 [2024-11-02 14:51:54.296990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.432 [2024-11-02 14:51:54.297015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.432 qpair failed and we were unable to recover it. 00:36:02.432 [2024-11-02 14:51:54.297164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.432 [2024-11-02 14:51:54.297190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.432 qpair failed and we were unable to recover it. 00:36:02.432 [2024-11-02 14:51:54.297315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.432 [2024-11-02 14:51:54.297341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.432 qpair failed and we were unable to recover it. 00:36:02.432 [2024-11-02 14:51:54.297518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.432 [2024-11-02 14:51:54.297543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.432 qpair failed and we were unable to recover it. 00:36:02.432 [2024-11-02 14:51:54.297664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.432 [2024-11-02 14:51:54.297689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.432 qpair failed and we were unable to recover it. 
00:36:02.432 [2024-11-02 14:51:54.297867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.432 [2024-11-02 14:51:54.297893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.432 qpair failed and we were unable to recover it. 00:36:02.432 [2024-11-02 14:51:54.298019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.432 [2024-11-02 14:51:54.298045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.432 qpair failed and we were unable to recover it. 00:36:02.432 [2024-11-02 14:51:54.298223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.432 [2024-11-02 14:51:54.298247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.432 qpair failed and we were unable to recover it. 00:36:02.432 [2024-11-02 14:51:54.298406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.432 [2024-11-02 14:51:54.298432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.432 qpair failed and we were unable to recover it. 00:36:02.432 [2024-11-02 14:51:54.298585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.432 [2024-11-02 14:51:54.298610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.432 qpair failed and we were unable to recover it. 00:36:02.432 [2024-11-02 14:51:54.298737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.432 [2024-11-02 14:51:54.298762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.432 qpair failed and we were unable to recover it. 00:36:02.432 [2024-11-02 14:51:54.298900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.432 [2024-11-02 14:51:54.298926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.432 qpair failed and we were unable to recover it. 00:36:02.432 [2024-11-02 14:51:54.299077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.432 [2024-11-02 14:51:54.299102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.432 qpair failed and we were unable to recover it. 00:36:02.432 [2024-11-02 14:51:54.299250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.432 [2024-11-02 14:51:54.299281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.432 qpair failed and we were unable to recover it. 00:36:02.432 [2024-11-02 14:51:54.299404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.432 [2024-11-02 14:51:54.299430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.432 qpair failed and we were unable to recover it. 
00:36:02.432 [2024-11-02 14:51:54.299560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.432 [2024-11-02 14:51:54.299585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.432 qpair failed and we were unable to recover it. 00:36:02.432 [2024-11-02 14:51:54.299727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.432 [2024-11-02 14:51:54.299753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.432 qpair failed and we were unable to recover it. 00:36:02.432 [2024-11-02 14:51:54.299873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.432 [2024-11-02 14:51:54.299899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.432 qpair failed and we were unable to recover it. 00:36:02.432 [2024-11-02 14:51:54.300007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.432 [2024-11-02 14:51:54.300036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.432 qpair failed and we were unable to recover it. 00:36:02.432 [2024-11-02 14:51:54.300152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.432 [2024-11-02 14:51:54.300178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.432 qpair failed and we were unable to recover it. 00:36:02.432 [2024-11-02 14:51:54.300325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.432 [2024-11-02 14:51:54.300352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.432 qpair failed and we were unable to recover it. 00:36:02.432 [2024-11-02 14:51:54.300502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.432 [2024-11-02 14:51:54.300528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.432 qpair failed and we were unable to recover it. 00:36:02.432 [2024-11-02 14:51:54.300675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.432 [2024-11-02 14:51:54.300701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.432 qpair failed and we were unable to recover it. 00:36:02.432 [2024-11-02 14:51:54.300858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.432 [2024-11-02 14:51:54.300884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.432 qpair failed and we were unable to recover it. 00:36:02.432 [2024-11-02 14:51:54.301057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.432 [2024-11-02 14:51:54.301081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.432 qpair failed and we were unable to recover it. 
00:36:02.432 [2024-11-02 14:51:54.301226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.432 [2024-11-02 14:51:54.301252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.432 qpair failed and we were unable to recover it. 00:36:02.432 [2024-11-02 14:51:54.301381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.432 [2024-11-02 14:51:54.301406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.432 qpair failed and we were unable to recover it. 00:36:02.432 [2024-11-02 14:51:54.301556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.432 [2024-11-02 14:51:54.301581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.432 qpair failed and we were unable to recover it. 00:36:02.432 [2024-11-02 14:51:54.301720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.432 [2024-11-02 14:51:54.301745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.432 qpair failed and we were unable to recover it. 00:36:02.432 [2024-11-02 14:51:54.301890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.432 [2024-11-02 14:51:54.301916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.432 qpair failed and we were unable to recover it. 00:36:02.432 [2024-11-02 14:51:54.302040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.432 [2024-11-02 14:51:54.302065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.432 qpair failed and we were unable to recover it. 00:36:02.432 [2024-11-02 14:51:54.302215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.433 [2024-11-02 14:51:54.302241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.433 qpair failed and we were unable to recover it. 00:36:02.433 [2024-11-02 14:51:54.302369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.433 [2024-11-02 14:51:54.302394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.433 qpair failed and we were unable to recover it. 00:36:02.433 [2024-11-02 14:51:54.302522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.433 [2024-11-02 14:51:54.302549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.433 qpair failed and we were unable to recover it. 00:36:02.433 [2024-11-02 14:51:54.302699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.433 [2024-11-02 14:51:54.302724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.433 qpair failed and we were unable to recover it. 
00:36:02.433 [2024-11-02 14:51:54.302876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.433 [2024-11-02 14:51:54.302902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.433 qpair failed and we were unable to recover it. 00:36:02.433 [2024-11-02 14:51:54.303052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.433 [2024-11-02 14:51:54.303078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.433 qpair failed and we were unable to recover it. 00:36:02.433 [2024-11-02 14:51:54.303223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.433 [2024-11-02 14:51:54.303247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.433 qpair failed and we were unable to recover it. 00:36:02.433 [2024-11-02 14:51:54.303410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.433 [2024-11-02 14:51:54.303434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.433 qpair failed and we were unable to recover it. 00:36:02.433 [2024-11-02 14:51:54.303549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.433 [2024-11-02 14:51:54.303575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.433 qpair failed and we were unable to recover it. 00:36:02.433 [2024-11-02 14:51:54.303695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.433 [2024-11-02 14:51:54.303721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.433 qpair failed and we were unable to recover it. 00:36:02.433 [2024-11-02 14:51:54.303854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.433 [2024-11-02 14:51:54.303879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.433 qpair failed and we were unable to recover it. 00:36:02.433 [2024-11-02 14:51:54.304030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.433 [2024-11-02 14:51:54.304055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.433 qpair failed and we were unable to recover it. 00:36:02.433 [2024-11-02 14:51:54.304181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.433 [2024-11-02 14:51:54.304206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.433 qpair failed and we were unable to recover it. 00:36:02.433 [2024-11-02 14:51:54.304390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.433 [2024-11-02 14:51:54.304416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.433 qpair failed and we were unable to recover it. 
00:36:02.433 [2024-11-02 14:51:54.304566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.433 [2024-11-02 14:51:54.304592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.433 qpair failed and we were unable to recover it. 00:36:02.433 [2024-11-02 14:51:54.304720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.433 [2024-11-02 14:51:54.304745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.433 qpair failed and we were unable to recover it. 00:36:02.433 [2024-11-02 14:51:54.304902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.433 [2024-11-02 14:51:54.304928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.433 qpair failed and we were unable to recover it. 00:36:02.433 [2024-11-02 14:51:54.305103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.433 [2024-11-02 14:51:54.305128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.433 qpair failed and we were unable to recover it. 00:36:02.433 [2024-11-02 14:51:54.305252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.433 [2024-11-02 14:51:54.305286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.433 qpair failed and we were unable to recover it. 00:36:02.433 [2024-11-02 14:51:54.305437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.433 [2024-11-02 14:51:54.305462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.433 qpair failed and we were unable to recover it. 00:36:02.433 [2024-11-02 14:51:54.305589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.433 [2024-11-02 14:51:54.305614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.433 qpair failed and we were unable to recover it. 00:36:02.433 [2024-11-02 14:51:54.305739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.433 [2024-11-02 14:51:54.305765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.433 qpair failed and we were unable to recover it. 00:36:02.433 [2024-11-02 14:51:54.305908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.433 [2024-11-02 14:51:54.305933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.433 qpair failed and we were unable to recover it. 00:36:02.433 [2024-11-02 14:51:54.306053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.433 [2024-11-02 14:51:54.306078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.433 qpair failed and we were unable to recover it. 
00:36:02.433 [2024-11-02 14:51:54.306216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.433 [2024-11-02 14:51:54.306241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.433 qpair failed and we were unable to recover it. 00:36:02.433 [2024-11-02 14:51:54.306383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.433 [2024-11-02 14:51:54.306408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.433 qpair failed and we were unable to recover it. 00:36:02.433 [2024-11-02 14:51:54.306526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.433 [2024-11-02 14:51:54.306553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.433 qpair failed and we were unable to recover it. 00:36:02.433 [2024-11-02 14:51:54.306702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.433 [2024-11-02 14:51:54.306728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.433 qpair failed and we were unable to recover it. 00:36:02.433 [2024-11-02 14:51:54.306883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.433 [2024-11-02 14:51:54.306914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.433 qpair failed and we were unable to recover it. 00:36:02.433 [2024-11-02 14:51:54.307058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.433 [2024-11-02 14:51:54.307082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.433 qpair failed and we were unable to recover it. 00:36:02.433 [2024-11-02 14:51:54.307211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.433 [2024-11-02 14:51:54.307236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.433 qpair failed and we were unable to recover it. 00:36:02.433 [2024-11-02 14:51:54.307401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.433 [2024-11-02 14:51:54.307426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.433 qpair failed and we were unable to recover it. 00:36:02.433 [2024-11-02 14:51:54.307577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.433 [2024-11-02 14:51:54.307602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.433 qpair failed and we were unable to recover it. 00:36:02.433 [2024-11-02 14:51:54.307750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.433 [2024-11-02 14:51:54.307775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.433 qpair failed and we were unable to recover it. 
00:36:02.433 [2024-11-02 14:51:54.307897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.433 [2024-11-02 14:51:54.307923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.433 qpair failed and we were unable to recover it. 00:36:02.433 [2024-11-02 14:51:54.308072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.433 [2024-11-02 14:51:54.308097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.434 qpair failed and we were unable to recover it. 00:36:02.434 [2024-11-02 14:51:54.308279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.434 [2024-11-02 14:51:54.308305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.434 qpair failed and we were unable to recover it. 00:36:02.434 [2024-11-02 14:51:54.308450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.434 [2024-11-02 14:51:54.308477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.434 qpair failed and we were unable to recover it. 00:36:02.434 [2024-11-02 14:51:54.308644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.434 [2024-11-02 14:51:54.308669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.434 qpair failed and we were unable to recover it. 00:36:02.434 [2024-11-02 14:51:54.308842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.434 [2024-11-02 14:51:54.308867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.434 qpair failed and we were unable to recover it. 00:36:02.434 [2024-11-02 14:51:54.308985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.434 [2024-11-02 14:51:54.309011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.434 qpair failed and we were unable to recover it. 00:36:02.434 [2024-11-02 14:51:54.309186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.434 [2024-11-02 14:51:54.309211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.434 qpair failed and we were unable to recover it. 00:36:02.434 [2024-11-02 14:51:54.309397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.434 [2024-11-02 14:51:54.309424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.434 qpair failed and we were unable to recover it. 00:36:02.434 [2024-11-02 14:51:54.309544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.434 [2024-11-02 14:51:54.309570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.434 qpair failed and we were unable to recover it. 
00:36:02.434 [2024-11-02 14:51:54.309719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.434 [2024-11-02 14:51:54.309746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.434 qpair failed and we were unable to recover it. 00:36:02.434 [2024-11-02 14:51:54.309894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.434 [2024-11-02 14:51:54.309920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.434 qpair failed and we were unable to recover it. 00:36:02.434 [2024-11-02 14:51:54.310090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.434 [2024-11-02 14:51:54.310115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.434 qpair failed and we were unable to recover it. 00:36:02.434 [2024-11-02 14:51:54.310265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.434 [2024-11-02 14:51:54.310292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.434 qpair failed and we were unable to recover it. 00:36:02.434 [2024-11-02 14:51:54.310423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.434 [2024-11-02 14:51:54.310449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.434 qpair failed and we were unable to recover it. 00:36:02.434 [2024-11-02 14:51:54.310596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.434 [2024-11-02 14:51:54.310621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.434 qpair failed and we were unable to recover it. 00:36:02.434 [2024-11-02 14:51:54.310805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.434 [2024-11-02 14:51:54.310831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.434 qpair failed and we were unable to recover it. 00:36:02.434 [2024-11-02 14:51:54.310980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.434 [2024-11-02 14:51:54.311006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.434 qpair failed and we were unable to recover it. 00:36:02.434 [2024-11-02 14:51:54.311161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.434 [2024-11-02 14:51:54.311186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.434 qpair failed and we were unable to recover it. 00:36:02.434 [2024-11-02 14:51:54.311314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.434 [2024-11-02 14:51:54.311340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.434 qpair failed and we were unable to recover it. 
00:36:02.434 [2024-11-02 14:51:54.311492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.434 [2024-11-02 14:51:54.311518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.434 qpair failed and we were unable to recover it. 00:36:02.434 [2024-11-02 14:51:54.311681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.434 [2024-11-02 14:51:54.311710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.434 qpair failed and we were unable to recover it. 00:36:02.434 [2024-11-02 14:51:54.311867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.434 [2024-11-02 14:51:54.311891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.434 qpair failed and we were unable to recover it. 00:36:02.434 [2024-11-02 14:51:54.312047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.434 [2024-11-02 14:51:54.312073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.434 qpair failed and we were unable to recover it. 00:36:02.434 [2024-11-02 14:51:54.312191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.434 [2024-11-02 14:51:54.312215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.434 qpair failed and we were unable to recover it. 00:36:02.434 [2024-11-02 14:51:54.312364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.434 [2024-11-02 14:51:54.312390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.434 qpair failed and we were unable to recover it. 00:36:02.434 [2024-11-02 14:51:54.312569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.434 [2024-11-02 14:51:54.312595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.434 qpair failed and we were unable to recover it. 00:36:02.434 [2024-11-02 14:51:54.312720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.434 [2024-11-02 14:51:54.312744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.434 qpair failed and we were unable to recover it. 00:36:02.434 [2024-11-02 14:51:54.312862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.434 [2024-11-02 14:51:54.312888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.434 qpair failed and we were unable to recover it. 00:36:02.434 [2024-11-02 14:51:54.313022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.434 [2024-11-02 14:51:54.313047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.434 qpair failed and we were unable to recover it. 
00:36:02.434 [2024-11-02 14:51:54.313174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.434 [2024-11-02 14:51:54.313200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.434 qpair failed and we were unable to recover it. 00:36:02.434 [2024-11-02 14:51:54.313349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.434 [2024-11-02 14:51:54.313376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.434 qpair failed and we were unable to recover it. 00:36:02.434 [2024-11-02 14:51:54.313497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.434 [2024-11-02 14:51:54.313524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.434 qpair failed and we were unable to recover it. 00:36:02.434 [2024-11-02 14:51:54.313698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.434 [2024-11-02 14:51:54.313724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.434 qpair failed and we were unable to recover it. 00:36:02.434 [2024-11-02 14:51:54.313848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.434 [2024-11-02 14:51:54.313874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.434 qpair failed and we were unable to recover it. 00:36:02.434 [2024-11-02 14:51:54.314033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.434 [2024-11-02 14:51:54.314059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.434 qpair failed and we were unable to recover it. 00:36:02.434 [2024-11-02 14:51:54.314208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.434 [2024-11-02 14:51:54.314233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.434 qpair failed and we were unable to recover it. 00:36:02.434 [2024-11-02 14:51:54.314358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.434 [2024-11-02 14:51:54.314385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.434 qpair failed and we were unable to recover it. 00:36:02.434 [2024-11-02 14:51:54.314534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.434 [2024-11-02 14:51:54.314559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.434 qpair failed and we were unable to recover it. 00:36:02.434 [2024-11-02 14:51:54.314706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.435 [2024-11-02 14:51:54.314732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.435 qpair failed and we were unable to recover it. 
00:36:02.435 [2024-11-02 14:51:54.314853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.435 [2024-11-02 14:51:54.314879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.435 qpair failed and we were unable to recover it. 00:36:02.435 [2024-11-02 14:51:54.315027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.435 [2024-11-02 14:51:54.315052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.435 qpair failed and we were unable to recover it. 00:36:02.435 [2024-11-02 14:51:54.315206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.435 [2024-11-02 14:51:54.315231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.435 qpair failed and we were unable to recover it. 00:36:02.435 [2024-11-02 14:51:54.315392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.435 [2024-11-02 14:51:54.315418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.435 qpair failed and we were unable to recover it. 00:36:02.435 [2024-11-02 14:51:54.315534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.435 [2024-11-02 14:51:54.315560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.435 qpair failed and we were unable to recover it. 00:36:02.435 [2024-11-02 14:51:54.315708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.435 [2024-11-02 14:51:54.315733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.435 qpair failed and we were unable to recover it. 00:36:02.435 [2024-11-02 14:51:54.315859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.435 [2024-11-02 14:51:54.315885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.435 qpair failed and we were unable to recover it. 00:36:02.435 [2024-11-02 14:51:54.316060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.435 [2024-11-02 14:51:54.316085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.435 qpair failed and we were unable to recover it. 00:36:02.435 [2024-11-02 14:51:54.316234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.435 [2024-11-02 14:51:54.316272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.435 qpair failed and we were unable to recover it. 00:36:02.435 [2024-11-02 14:51:54.316444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.435 [2024-11-02 14:51:54.316471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.435 qpair failed and we were unable to recover it. 
00:36:02.435 [2024-11-02 14:51:54.316629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.435 [2024-11-02 14:51:54.316654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.435 qpair failed and we were unable to recover it. 00:36:02.435 [2024-11-02 14:51:54.316776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.435 [2024-11-02 14:51:54.316801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.435 qpair failed and we were unable to recover it. 00:36:02.435 [2024-11-02 14:51:54.316956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.435 [2024-11-02 14:51:54.316981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.435 qpair failed and we were unable to recover it. 00:36:02.435 [2024-11-02 14:51:54.317123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.435 [2024-11-02 14:51:54.317148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.435 qpair failed and we were unable to recover it. 00:36:02.435 [2024-11-02 14:51:54.317327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.435 [2024-11-02 14:51:54.317353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.435 qpair failed and we were unable to recover it. 00:36:02.435 [2024-11-02 14:51:54.317492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.435 [2024-11-02 14:51:54.317519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.435 qpair failed and we were unable to recover it. 00:36:02.435 [2024-11-02 14:51:54.317669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.435 [2024-11-02 14:51:54.317694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.435 qpair failed and we were unable to recover it. 00:36:02.435 [2024-11-02 14:51:54.317872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.435 [2024-11-02 14:51:54.317897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.435 qpair failed and we were unable to recover it. 00:36:02.435 [2024-11-02 14:51:54.318046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.435 [2024-11-02 14:51:54.318071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.435 qpair failed and we were unable to recover it. 00:36:02.435 [2024-11-02 14:51:54.318224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.435 [2024-11-02 14:51:54.318250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.435 qpair failed and we were unable to recover it. 
00:36:02.435 [2024-11-02 14:51:54.318407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.435 [2024-11-02 14:51:54.318432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.435 qpair failed and we were unable to recover it. 00:36:02.435 [2024-11-02 14:51:54.318578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.435 [2024-11-02 14:51:54.318604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.435 qpair failed and we were unable to recover it. 00:36:02.435 [2024-11-02 14:51:54.318756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.435 [2024-11-02 14:51:54.318786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.435 qpair failed and we were unable to recover it. 00:36:02.435 [2024-11-02 14:51:54.318966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.435 [2024-11-02 14:51:54.318991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.435 qpair failed and we were unable to recover it. 00:36:02.435 [2024-11-02 14:51:54.319117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.435 [2024-11-02 14:51:54.319142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.435 qpair failed and we were unable to recover it. 00:36:02.435 [2024-11-02 14:51:54.319287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.435 [2024-11-02 14:51:54.319313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.435 qpair failed and we were unable to recover it. 00:36:02.435 [2024-11-02 14:51:54.319462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.435 [2024-11-02 14:51:54.319488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.435 qpair failed and we were unable to recover it. 00:36:02.435 [2024-11-02 14:51:54.319613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.435 [2024-11-02 14:51:54.319640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.435 qpair failed and we were unable to recover it. 00:36:02.435 [2024-11-02 14:51:54.319792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.435 [2024-11-02 14:51:54.319818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.435 qpair failed and we were unable to recover it. 00:36:02.435 [2024-11-02 14:51:54.319968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.435 [2024-11-02 14:51:54.319994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.435 qpair failed and we were unable to recover it. 
00:36:02.435 [2024-11-02 14:51:54.320124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.435 [2024-11-02 14:51:54.320148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.435 qpair failed and we were unable to recover it. 00:36:02.435 [2024-11-02 14:51:54.320298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.435 [2024-11-02 14:51:54.320324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.435 qpair failed and we were unable to recover it. 00:36:02.435 [2024-11-02 14:51:54.320473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.435 [2024-11-02 14:51:54.320499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.435 qpair failed and we were unable to recover it. 00:36:02.435 [2024-11-02 14:51:54.320642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.435 [2024-11-02 14:51:54.320666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.435 qpair failed and we were unable to recover it. 00:36:02.435 [2024-11-02 14:51:54.320842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.435 [2024-11-02 14:51:54.320868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.435 qpair failed and we were unable to recover it. 00:36:02.435 [2024-11-02 14:51:54.320997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.435 [2024-11-02 14:51:54.321022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.435 qpair failed and we were unable to recover it. 00:36:02.435 [2024-11-02 14:51:54.321178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.435 [2024-11-02 14:51:54.321204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.435 qpair failed and we were unable to recover it. 00:36:02.435 [2024-11-02 14:51:54.321357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.435 [2024-11-02 14:51:54.321384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.436 qpair failed and we were unable to recover it. 00:36:02.436 [2024-11-02 14:51:54.321510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.436 [2024-11-02 14:51:54.321536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.436 qpair failed and we were unable to recover it. 00:36:02.436 [2024-11-02 14:51:54.321690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.436 [2024-11-02 14:51:54.321716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.436 qpair failed and we were unable to recover it. 
00:36:02.436 [2024-11-02 14:51:54.321837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.436 [2024-11-02 14:51:54.321863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.436 qpair failed and we were unable to recover it. 00:36:02.436 [2024-11-02 14:51:54.321982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.436 [2024-11-02 14:51:54.322007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.436 qpair failed and we were unable to recover it. 00:36:02.436 [2024-11-02 14:51:54.322183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.436 [2024-11-02 14:51:54.322209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.436 qpair failed and we were unable to recover it. 00:36:02.436 [2024-11-02 14:51:54.322354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.436 [2024-11-02 14:51:54.322380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.436 qpair failed and we were unable to recover it. 00:36:02.436 [2024-11-02 14:51:54.322498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.436 [2024-11-02 14:51:54.322524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.436 qpair failed and we were unable to recover it. 00:36:02.436 [2024-11-02 14:51:54.322676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.436 [2024-11-02 14:51:54.322700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.436 qpair failed and we were unable to recover it. 00:36:02.436 [2024-11-02 14:51:54.322854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.436 [2024-11-02 14:51:54.322880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.436 qpair failed and we were unable to recover it. 00:36:02.436 [2024-11-02 14:51:54.323025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.436 [2024-11-02 14:51:54.323049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.436 qpair failed and we were unable to recover it. 00:36:02.436 [2024-11-02 14:51:54.323176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.436 [2024-11-02 14:51:54.323201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.436 qpair failed and we were unable to recover it. 00:36:02.436 [2024-11-02 14:51:54.323375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.436 [2024-11-02 14:51:54.323405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.436 qpair failed and we were unable to recover it. 
00:36:02.436 [2024-11-02 14:51:54.323563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.436 [2024-11-02 14:51:54.323588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.436 qpair failed and we were unable to recover it. 00:36:02.436 [2024-11-02 14:51:54.323738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.436 [2024-11-02 14:51:54.323763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.436 qpair failed and we were unable to recover it. 00:36:02.436 [2024-11-02 14:51:54.323912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.436 [2024-11-02 14:51:54.323937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.436 qpair failed and we were unable to recover it. 00:36:02.436 [2024-11-02 14:51:54.324083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.436 [2024-11-02 14:51:54.324109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.436 qpair failed and we were unable to recover it. 00:36:02.436 [2024-11-02 14:51:54.324304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.436 [2024-11-02 14:51:54.324331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.436 qpair failed and we were unable to recover it. 00:36:02.436 [2024-11-02 14:51:54.324475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.436 [2024-11-02 14:51:54.324500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.436 qpair failed and we were unable to recover it. 00:36:02.436 [2024-11-02 14:51:54.324690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.436 [2024-11-02 14:51:54.324715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.436 qpair failed and we were unable to recover it. 00:36:02.436 [2024-11-02 14:51:54.324868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.436 [2024-11-02 14:51:54.324893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.436 qpair failed and we were unable to recover it. 00:36:02.436 [2024-11-02 14:51:54.325082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.436 [2024-11-02 14:51:54.325108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.436 qpair failed and we were unable to recover it. 00:36:02.436 [2024-11-02 14:51:54.325284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.436 [2024-11-02 14:51:54.325312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.436 qpair failed and we were unable to recover it. 
00:36:02.436 [2024-11-02 14:51:54.325484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.436 [2024-11-02 14:51:54.325510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.436 qpair failed and we were unable to recover it. 00:36:02.436 [2024-11-02 14:51:54.325666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.436 [2024-11-02 14:51:54.325691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.436 qpair failed and we were unable to recover it. 00:36:02.436 [2024-11-02 14:51:54.325816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.436 [2024-11-02 14:51:54.325842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.436 qpair failed and we were unable to recover it. 00:36:02.436 [2024-11-02 14:51:54.325967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.436 [2024-11-02 14:51:54.325993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.436 qpair failed and we were unable to recover it. 00:36:02.436 [2024-11-02 14:51:54.326142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.436 [2024-11-02 14:51:54.326167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.436 qpair failed and we were unable to recover it. 00:36:02.436 [2024-11-02 14:51:54.326281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.436 [2024-11-02 14:51:54.326317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.436 qpair failed and we were unable to recover it. 00:36:02.436 [2024-11-02 14:51:54.326473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.436 [2024-11-02 14:51:54.326499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.436 qpair failed and we were unable to recover it. 00:36:02.436 [2024-11-02 14:51:54.326650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.436 [2024-11-02 14:51:54.326676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.436 qpair failed and we were unable to recover it. 00:36:02.436 [2024-11-02 14:51:54.326858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.436 [2024-11-02 14:51:54.326883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.436 qpair failed and we were unable to recover it. 00:36:02.436 [2024-11-02 14:51:54.327008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.436 [2024-11-02 14:51:54.327034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.436 qpair failed and we were unable to recover it. 
00:36:02.436 [2024-11-02 14:51:54.327159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.436 [2024-11-02 14:51:54.327185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.436 qpair failed and we were unable to recover it. 00:36:02.436 [2024-11-02 14:51:54.327346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.436 [2024-11-02 14:51:54.327372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.436 qpair failed and we were unable to recover it. 00:36:02.436 [2024-11-02 14:51:54.327544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.436 [2024-11-02 14:51:54.327570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.436 qpair failed and we were unable to recover it. 00:36:02.436 [2024-11-02 14:51:54.327755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.436 [2024-11-02 14:51:54.327780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.436 qpair failed and we were unable to recover it. 00:36:02.436 [2024-11-02 14:51:54.327928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.436 [2024-11-02 14:51:54.327954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.436 qpair failed and we were unable to recover it. 00:36:02.436 [2024-11-02 14:51:54.328126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.436 [2024-11-02 14:51:54.328152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.436 qpair failed and we were unable to recover it. 00:36:02.436 [2024-11-02 14:51:54.328302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.437 [2024-11-02 14:51:54.328327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.437 qpair failed and we were unable to recover it. 00:36:02.437 [2024-11-02 14:51:54.328477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.437 [2024-11-02 14:51:54.328503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.437 qpair failed and we were unable to recover it. 00:36:02.437 [2024-11-02 14:51:54.328656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.437 [2024-11-02 14:51:54.328683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.437 qpair failed and we were unable to recover it. 00:36:02.437 [2024-11-02 14:51:54.328825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.437 [2024-11-02 14:51:54.328850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.437 qpair failed and we were unable to recover it. 
00:36:02.437 [2024-11-02 14:51:54.329004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.437 [2024-11-02 14:51:54.329030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.437 qpair failed and we were unable to recover it. 00:36:02.437 [2024-11-02 14:51:54.329175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.437 [2024-11-02 14:51:54.329201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.437 qpair failed and we were unable to recover it. 00:36:02.437 [2024-11-02 14:51:54.329367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.437 [2024-11-02 14:51:54.329394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.437 qpair failed and we were unable to recover it. 00:36:02.437 [2024-11-02 14:51:54.329517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.437 [2024-11-02 14:51:54.329542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.437 qpair failed and we were unable to recover it. 00:36:02.437 [2024-11-02 14:51:54.329694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.437 [2024-11-02 14:51:54.329725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.437 qpair failed and we were unable to recover it. 00:36:02.437 [2024-11-02 14:51:54.329873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.437 [2024-11-02 14:51:54.329898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.437 qpair failed and we were unable to recover it. 00:36:02.437 [2024-11-02 14:51:54.330048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.437 [2024-11-02 14:51:54.330074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.437 qpair failed and we were unable to recover it. 00:36:02.437 [2024-11-02 14:51:54.330226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.437 [2024-11-02 14:51:54.330251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.437 qpair failed and we were unable to recover it. 00:36:02.437 [2024-11-02 14:51:54.330416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.437 [2024-11-02 14:51:54.330442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.437 qpair failed and we were unable to recover it. 00:36:02.437 [2024-11-02 14:51:54.330603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.437 [2024-11-02 14:51:54.330629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.437 qpair failed and we were unable to recover it. 
00:36:02.437 [2024-11-02 14:51:54.330790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.437 [2024-11-02 14:51:54.330832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.437 qpair failed and we were unable to recover it. 00:36:02.437 [2024-11-02 14:51:54.330991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.437 [2024-11-02 14:51:54.331017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.437 qpair failed and we were unable to recover it. 00:36:02.437 [2024-11-02 14:51:54.331142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.437 [2024-11-02 14:51:54.331168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.437 qpair failed and we were unable to recover it. 00:36:02.437 [2024-11-02 14:51:54.331329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.437 [2024-11-02 14:51:54.331357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.437 qpair failed and we were unable to recover it. 00:36:02.437 [2024-11-02 14:51:54.331483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.437 [2024-11-02 14:51:54.331509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.437 qpair failed and we were unable to recover it. 00:36:02.437 [2024-11-02 14:51:54.331634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.437 [2024-11-02 14:51:54.331659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.437 qpair failed and we were unable to recover it. 00:36:02.437 [2024-11-02 14:51:54.331851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.437 [2024-11-02 14:51:54.331876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.437 qpair failed and we were unable to recover it. 00:36:02.437 [2024-11-02 14:51:54.332001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.437 [2024-11-02 14:51:54.332027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.437 qpair failed and we were unable to recover it. 00:36:02.437 [2024-11-02 14:51:54.332150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.437 [2024-11-02 14:51:54.332176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.437 qpair failed and we were unable to recover it. 00:36:02.437 [2024-11-02 14:51:54.332337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.437 [2024-11-02 14:51:54.332364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.437 qpair failed and we were unable to recover it. 
00:36:02.437 [2024-11-02 14:51:54.332517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.437 [2024-11-02 14:51:54.332542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.437 qpair failed and we were unable to recover it. 00:36:02.437 [2024-11-02 14:51:54.332700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.437 [2024-11-02 14:51:54.332725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.437 qpair failed and we were unable to recover it. 00:36:02.437 [2024-11-02 14:51:54.332869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.437 [2024-11-02 14:51:54.332895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.437 qpair failed and we were unable to recover it. 00:36:02.437 [2024-11-02 14:51:54.333048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.437 [2024-11-02 14:51:54.333074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.437 qpair failed and we were unable to recover it. 00:36:02.437 [2024-11-02 14:51:54.333209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.437 [2024-11-02 14:51:54.333234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.437 qpair failed and we were unable to recover it. 00:36:02.437 [2024-11-02 14:51:54.333419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.437 [2024-11-02 14:51:54.333445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.437 qpair failed and we were unable to recover it. 00:36:02.437 [2024-11-02 14:51:54.333563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.437 [2024-11-02 14:51:54.333590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.437 qpair failed and we were unable to recover it. 00:36:02.437 [2024-11-02 14:51:54.333736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.437 [2024-11-02 14:51:54.333761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.437 qpair failed and we were unable to recover it. 00:36:02.437 [2024-11-02 14:51:54.333885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.437 [2024-11-02 14:51:54.333911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.437 qpair failed and we were unable to recover it. 00:36:02.437 [2024-11-02 14:51:54.334060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.437 [2024-11-02 14:51:54.334086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.437 qpair failed and we were unable to recover it. 
00:36:02.437 [2024-11-02 14:51:54.334233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.437 [2024-11-02 14:51:54.334277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.437 qpair failed and we were unable to recover it. 00:36:02.437 [2024-11-02 14:51:54.334450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.437 [2024-11-02 14:51:54.334476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.437 qpair failed and we were unable to recover it. 00:36:02.437 [2024-11-02 14:51:54.334613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.437 [2024-11-02 14:51:54.334640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.437 qpair failed and we were unable to recover it. 00:36:02.437 [2024-11-02 14:51:54.334789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.437 [2024-11-02 14:51:54.334814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.437 qpair failed and we were unable to recover it. 00:36:02.437 [2024-11-02 14:51:54.334934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.437 [2024-11-02 14:51:54.334959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.437 qpair failed and we were unable to recover it. 00:36:02.437 [2024-11-02 14:51:54.335110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.438 [2024-11-02 14:51:54.335135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.438 qpair failed and we were unable to recover it. 00:36:02.438 [2024-11-02 14:51:54.335281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.438 [2024-11-02 14:51:54.335307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.438 qpair failed and we were unable to recover it. 00:36:02.438 [2024-11-02 14:51:54.335435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.438 [2024-11-02 14:51:54.335465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.438 qpair failed and we were unable to recover it. 00:36:02.438 [2024-11-02 14:51:54.335603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.438 [2024-11-02 14:51:54.335638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.438 qpair failed and we were unable to recover it. 00:36:02.438 [2024-11-02 14:51:54.335799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.438 [2024-11-02 14:51:54.335825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.438 qpair failed and we were unable to recover it. 
00:36:02.438 [2024-11-02 14:51:54.335974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.438 [2024-11-02 14:51:54.336001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.438 qpair failed and we were unable to recover it. 00:36:02.438 [2024-11-02 14:51:54.336153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.438 [2024-11-02 14:51:54.336179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.438 qpair failed and we were unable to recover it. 00:36:02.438 [2024-11-02 14:51:54.336328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.438 [2024-11-02 14:51:54.336354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.438 qpair failed and we were unable to recover it. 00:36:02.438 [2024-11-02 14:51:54.336475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.438 [2024-11-02 14:51:54.336500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.438 qpair failed and we were unable to recover it. 00:36:02.438 [2024-11-02 14:51:54.336677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.438 [2024-11-02 14:51:54.336702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.438 qpair failed and we were unable to recover it. 00:36:02.438 [2024-11-02 14:51:54.336856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.438 [2024-11-02 14:51:54.336881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.438 qpair failed and we were unable to recover it. 00:36:02.438 [2024-11-02 14:51:54.337064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.438 [2024-11-02 14:51:54.337089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.438 qpair failed and we were unable to recover it. 00:36:02.438 [2024-11-02 14:51:54.337244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.438 [2024-11-02 14:51:54.337277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.438 qpair failed and we were unable to recover it. 00:36:02.438 [2024-11-02 14:51:54.337418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.438 [2024-11-02 14:51:54.337444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.438 qpair failed and we were unable to recover it. 00:36:02.438 [2024-11-02 14:51:54.337601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.438 [2024-11-02 14:51:54.337626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.438 qpair failed and we were unable to recover it. 
00:36:02.438 [2024-11-02 14:51:54.337810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.438 [2024-11-02 14:51:54.337836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.438 qpair failed and we were unable to recover it. 00:36:02.438 [2024-11-02 14:51:54.337967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.438 [2024-11-02 14:51:54.337993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.438 qpair failed and we were unable to recover it. 00:36:02.438 [2024-11-02 14:51:54.338123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.438 [2024-11-02 14:51:54.338148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.438 qpair failed and we were unable to recover it. 00:36:02.438 [2024-11-02 14:51:54.338300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.438 [2024-11-02 14:51:54.338327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.438 qpair failed and we were unable to recover it. 00:36:02.438 [2024-11-02 14:51:54.338487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.438 [2024-11-02 14:51:54.338513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.438 qpair failed and we were unable to recover it. 00:36:02.438 [2024-11-02 14:51:54.338650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.438 [2024-11-02 14:51:54.338676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.438 qpair failed and we were unable to recover it. 00:36:02.438 [2024-11-02 14:51:54.338827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.438 [2024-11-02 14:51:54.338852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.438 qpair failed and we were unable to recover it. 00:36:02.438 [2024-11-02 14:51:54.339020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.438 [2024-11-02 14:51:54.339045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.438 qpair failed and we were unable to recover it. 00:36:02.438 [2024-11-02 14:51:54.339198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.438 [2024-11-02 14:51:54.339224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.438 qpair failed and we were unable to recover it. 00:36:02.438 [2024-11-02 14:51:54.339375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.438 [2024-11-02 14:51:54.339401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.438 qpair failed and we were unable to recover it. 
00:36:02.438 [2024-11-02 14:51:54.339525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.438 [2024-11-02 14:51:54.339560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.438 qpair failed and we were unable to recover it. 00:36:02.438 [2024-11-02 14:51:54.339708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.438 [2024-11-02 14:51:54.339733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.438 qpair failed and we were unable to recover it. 00:36:02.438 [2024-11-02 14:51:54.339861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.438 [2024-11-02 14:51:54.339887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.438 qpair failed and we were unable to recover it. 00:36:02.438 [2024-11-02 14:51:54.340034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.438 [2024-11-02 14:51:54.340059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.438 qpair failed and we were unable to recover it. 00:36:02.438 [2024-11-02 14:51:54.340238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.438 [2024-11-02 14:51:54.340273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.438 qpair failed and we were unable to recover it. 00:36:02.438 [2024-11-02 14:51:54.340419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.438 [2024-11-02 14:51:54.340445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.438 qpair failed and we were unable to recover it. 00:36:02.438 [2024-11-02 14:51:54.340626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.438 [2024-11-02 14:51:54.340652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.438 qpair failed and we were unable to recover it. 00:36:02.438 [2024-11-02 14:51:54.340799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.438 [2024-11-02 14:51:54.340823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.438 qpair failed and we were unable to recover it. 00:36:02.438 [2024-11-02 14:51:54.340994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.438 [2024-11-02 14:51:54.341020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.438 qpair failed and we were unable to recover it. 00:36:02.438 [2024-11-02 14:51:54.341165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.439 [2024-11-02 14:51:54.341190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.439 qpair failed and we were unable to recover it. 
00:36:02.439 [2024-11-02 14:51:54.341309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.439 [2024-11-02 14:51:54.341335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.439 qpair failed and we were unable to recover it. 00:36:02.439 [2024-11-02 14:51:54.341487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.439 [2024-11-02 14:51:54.341512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.439 qpair failed and we were unable to recover it. 00:36:02.439 [2024-11-02 14:51:54.341625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.439 [2024-11-02 14:51:54.341649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.439 qpair failed and we were unable to recover it. 00:36:02.439 [2024-11-02 14:51:54.341792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.439 [2024-11-02 14:51:54.341817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.439 qpair failed and we were unable to recover it. 00:36:02.439 [2024-11-02 14:51:54.341965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.439 [2024-11-02 14:51:54.341990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.439 qpair failed and we were unable to recover it. 00:36:02.439 [2024-11-02 14:51:54.342140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.439 [2024-11-02 14:51:54.342165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.439 qpair failed and we were unable to recover it. 00:36:02.439 [2024-11-02 14:51:54.342284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.439 [2024-11-02 14:51:54.342310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.439 qpair failed and we were unable to recover it. 00:36:02.439 [2024-11-02 14:51:54.342450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.439 [2024-11-02 14:51:54.342475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.439 qpair failed and we were unable to recover it. 00:36:02.439 [2024-11-02 14:51:54.342651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.439 [2024-11-02 14:51:54.342680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.439 qpair failed and we were unable to recover it. 00:36:02.439 [2024-11-02 14:51:54.342856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.439 [2024-11-02 14:51:54.342883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.439 qpair failed and we were unable to recover it. 
00:36:02.439 [2024-11-02 14:51:54.343014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.439 [2024-11-02 14:51:54.343040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.439 qpair failed and we were unable to recover it. 00:36:02.439 [2024-11-02 14:51:54.343188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.439 [2024-11-02 14:51:54.343214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.439 qpair failed and we were unable to recover it. 00:36:02.439 [2024-11-02 14:51:54.343386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.439 [2024-11-02 14:51:54.343411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.439 qpair failed and we were unable to recover it. 00:36:02.439 [2024-11-02 14:51:54.343533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.439 [2024-11-02 14:51:54.343559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.439 qpair failed and we were unable to recover it. 00:36:02.439 [2024-11-02 14:51:54.343699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.439 [2024-11-02 14:51:54.343724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.439 qpair failed and we were unable to recover it. 00:36:02.439 [2024-11-02 14:51:54.343868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.439 [2024-11-02 14:51:54.343893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.439 qpair failed and we were unable to recover it. 00:36:02.439 [2024-11-02 14:51:54.344070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.439 [2024-11-02 14:51:54.344096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.439 qpair failed and we were unable to recover it. 00:36:02.439 [2024-11-02 14:51:54.344244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.439 [2024-11-02 14:51:54.344285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.439 qpair failed and we were unable to recover it. 00:36:02.439 [2024-11-02 14:51:54.344433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.439 [2024-11-02 14:51:54.344457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.439 qpair failed and we were unable to recover it. 00:36:02.439 [2024-11-02 14:51:54.344579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.439 [2024-11-02 14:51:54.344604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.439 qpair failed and we were unable to recover it. 
00:36:02.439 [2024-11-02 14:51:54.344755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.439 [2024-11-02 14:51:54.344780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.439 qpair failed and we were unable to recover it. 00:36:02.439 [2024-11-02 14:51:54.344931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.439 [2024-11-02 14:51:54.344955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.439 qpair failed and we were unable to recover it. 00:36:02.439 [2024-11-02 14:51:54.345082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.439 [2024-11-02 14:51:54.345107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.439 qpair failed and we were unable to recover it. 00:36:02.439 [2024-11-02 14:51:54.345231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.439 [2024-11-02 14:51:54.345265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.439 qpair failed and we were unable to recover it. 00:36:02.439 [2024-11-02 14:51:54.345401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.439 [2024-11-02 14:51:54.345426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.439 qpair failed and we were unable to recover it. 00:36:02.439 [2024-11-02 14:51:54.345542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.439 [2024-11-02 14:51:54.345567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.439 qpair failed and we were unable to recover it. 00:36:02.439 [2024-11-02 14:51:54.345685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.439 [2024-11-02 14:51:54.345710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.439 qpair failed and we were unable to recover it. 00:36:02.439 [2024-11-02 14:51:54.345882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.439 [2024-11-02 14:51:54.345907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.439 qpair failed and we were unable to recover it. 00:36:02.439 [2024-11-02 14:51:54.346080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.439 [2024-11-02 14:51:54.346106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.439 qpair failed and we were unable to recover it. 00:36:02.439 [2024-11-02 14:51:54.346231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.439 [2024-11-02 14:51:54.346263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.439 qpair failed and we were unable to recover it. 
00:36:02.439 [2024-11-02 14:51:54.346397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.439 [2024-11-02 14:51:54.346423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.439 qpair failed and we were unable to recover it. 00:36:02.439 [2024-11-02 14:51:54.346572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.439 [2024-11-02 14:51:54.346597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.439 qpair failed and we were unable to recover it. 00:36:02.439 [2024-11-02 14:51:54.346780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.439 [2024-11-02 14:51:54.346805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.439 qpair failed and we were unable to recover it. 00:36:02.439 [2024-11-02 14:51:54.346952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.439 [2024-11-02 14:51:54.346977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.439 qpair failed and we were unable to recover it. 00:36:02.439 [2024-11-02 14:51:54.347115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.439 [2024-11-02 14:51:54.347140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.439 qpair failed and we were unable to recover it. 00:36:02.439 [2024-11-02 14:51:54.347286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.439 [2024-11-02 14:51:54.347311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.439 qpair failed and we were unable to recover it. 00:36:02.439 [2024-11-02 14:51:54.347447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.439 [2024-11-02 14:51:54.347473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.439 qpair failed and we were unable to recover it. 00:36:02.439 [2024-11-02 14:51:54.347623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.439 [2024-11-02 14:51:54.347648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.439 qpair failed and we were unable to recover it. 00:36:02.440 [2024-11-02 14:51:54.347824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.440 [2024-11-02 14:51:54.347849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.440 qpair failed and we were unable to recover it. 00:36:02.440 [2024-11-02 14:51:54.347973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.440 [2024-11-02 14:51:54.347998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.440 qpair failed and we were unable to recover it. 
00:36:02.440 [2024-11-02 14:51:54.348125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.440 [2024-11-02 14:51:54.348151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.440 qpair failed and we were unable to recover it. 00:36:02.440 [2024-11-02 14:51:54.348299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.440 [2024-11-02 14:51:54.348328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.440 qpair failed and we were unable to recover it. 00:36:02.440 [2024-11-02 14:51:54.348475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.440 [2024-11-02 14:51:54.348500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.440 qpair failed and we were unable to recover it. 00:36:02.440 [2024-11-02 14:51:54.348634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.440 [2024-11-02 14:51:54.348659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.440 qpair failed and we were unable to recover it. 00:36:02.440 [2024-11-02 14:51:54.348847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.440 [2024-11-02 14:51:54.348872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.440 qpair failed and we were unable to recover it. 00:36:02.440 [2024-11-02 14:51:54.348998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.440 [2024-11-02 14:51:54.349023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.440 qpair failed and we were unable to recover it. 00:36:02.440 [2024-11-02 14:51:54.349143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.440 [2024-11-02 14:51:54.349168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.440 qpair failed and we were unable to recover it. 00:36:02.440 [2024-11-02 14:51:54.349292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.440 [2024-11-02 14:51:54.349317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.440 qpair failed and we were unable to recover it. 00:36:02.440 [2024-11-02 14:51:54.349447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.440 [2024-11-02 14:51:54.349473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.440 qpair failed and we were unable to recover it. 00:36:02.440 [2024-11-02 14:51:54.349608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.440 [2024-11-02 14:51:54.349633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.440 qpair failed and we were unable to recover it. 
00:36:02.440 [2024-11-02 14:51:54.349782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.440 [2024-11-02 14:51:54.349807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.440 qpair failed and we were unable to recover it. 00:36:02.440 [2024-11-02 14:51:54.349934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.440 [2024-11-02 14:51:54.349960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.440 qpair failed and we were unable to recover it. 00:36:02.440 [2024-11-02 14:51:54.350084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.440 [2024-11-02 14:51:54.350110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.440 qpair failed and we were unable to recover it. 00:36:02.440 [2024-11-02 14:51:54.350265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.440 [2024-11-02 14:51:54.350292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.440 qpair failed and we were unable to recover it. 00:36:02.440 [2024-11-02 14:51:54.350444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.440 [2024-11-02 14:51:54.350470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.440 qpair failed and we were unable to recover it. 00:36:02.440 [2024-11-02 14:51:54.350592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.440 [2024-11-02 14:51:54.350617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.440 qpair failed and we were unable to recover it. 00:36:02.440 [2024-11-02 14:51:54.350794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.440 [2024-11-02 14:51:54.350819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.440 qpair failed and we were unable to recover it. 00:36:02.440 [2024-11-02 14:51:54.350941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.440 [2024-11-02 14:51:54.350966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.440 qpair failed and we were unable to recover it. 00:36:02.440 [2024-11-02 14:51:54.351117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.440 [2024-11-02 14:51:54.351142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.440 qpair failed and we were unable to recover it. 00:36:02.440 [2024-11-02 14:51:54.351267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.440 [2024-11-02 14:51:54.351294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.440 qpair failed and we were unable to recover it. 
00:36:02.440 [2024-11-02 14:51:54.351424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.440 [2024-11-02 14:51:54.351449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.440 qpair failed and we were unable to recover it. 00:36:02.440 [2024-11-02 14:51:54.351606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.440 [2024-11-02 14:51:54.351631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.440 qpair failed and we were unable to recover it. 00:36:02.440 [2024-11-02 14:51:54.351781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.440 [2024-11-02 14:51:54.351807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.440 qpair failed and we were unable to recover it. 00:36:02.440 [2024-11-02 14:51:54.351988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.440 [2024-11-02 14:51:54.352013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.440 qpair failed and we were unable to recover it. 00:36:02.440 [2024-11-02 14:51:54.352191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.440 [2024-11-02 14:51:54.352216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.440 qpair failed and we were unable to recover it. 00:36:02.440 [2024-11-02 14:51:54.352393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.440 [2024-11-02 14:51:54.352419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.440 qpair failed and we were unable to recover it. 00:36:02.440 [2024-11-02 14:51:54.352555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.440 [2024-11-02 14:51:54.352581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.440 qpair failed and we were unable to recover it. 00:36:02.440 [2024-11-02 14:51:54.352735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.440 [2024-11-02 14:51:54.352760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.440 qpair failed and we were unable to recover it. 00:36:02.440 [2024-11-02 14:51:54.352938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.440 [2024-11-02 14:51:54.352962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.440 qpair failed and we were unable to recover it. 00:36:02.440 [2024-11-02 14:51:54.353144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.440 [2024-11-02 14:51:54.353170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.440 qpair failed and we were unable to recover it. 
00:36:02.440 [2024-11-02 14:51:54.353298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.440 [2024-11-02 14:51:54.353324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.440 qpair failed and we were unable to recover it. 00:36:02.440 [2024-11-02 14:51:54.353454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.440 [2024-11-02 14:51:54.353479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.440 qpair failed and we were unable to recover it. 00:36:02.440 [2024-11-02 14:51:54.353616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.440 [2024-11-02 14:51:54.353642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.440 qpair failed and we were unable to recover it. 00:36:02.440 [2024-11-02 14:51:54.353792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.440 [2024-11-02 14:51:54.353817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.440 qpair failed and we were unable to recover it. 00:36:02.440 [2024-11-02 14:51:54.353972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.440 [2024-11-02 14:51:54.353996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.440 qpair failed and we were unable to recover it. 00:36:02.440 [2024-11-02 14:51:54.354141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.440 [2024-11-02 14:51:54.354166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.440 qpair failed and we were unable to recover it. 00:36:02.440 [2024-11-02 14:51:54.354315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.440 [2024-11-02 14:51:54.354345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.440 qpair failed and we were unable to recover it. 00:36:02.440 [2024-11-02 14:51:54.354516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.440 [2024-11-02 14:51:54.354540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.440 qpair failed and we were unable to recover it. 00:36:02.440 [2024-11-02 14:51:54.354690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.440 [2024-11-02 14:51:54.354716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.441 qpair failed and we were unable to recover it. 00:36:02.441 [2024-11-02 14:51:54.354886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.441 [2024-11-02 14:51:54.354911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.441 qpair failed and we were unable to recover it. 
00:36:02.441 [2024-11-02 14:51:54.355060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.441 [2024-11-02 14:51:54.355085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.441 qpair failed and we were unable to recover it. 00:36:02.441 [2024-11-02 14:51:54.355267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.441 [2024-11-02 14:51:54.355293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.441 qpair failed and we were unable to recover it. 00:36:02.441 [2024-11-02 14:51:54.355417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.441 [2024-11-02 14:51:54.355442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.441 qpair failed and we were unable to recover it. 00:36:02.441 [2024-11-02 14:51:54.355589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.441 [2024-11-02 14:51:54.355614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.441 qpair failed and we were unable to recover it. 00:36:02.441 [2024-11-02 14:51:54.355742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.441 [2024-11-02 14:51:54.355766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.441 qpair failed and we were unable to recover it. 00:36:02.441 [2024-11-02 14:51:54.355925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.441 [2024-11-02 14:51:54.355950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.441 qpair failed and we were unable to recover it. 00:36:02.441 [2024-11-02 14:51:54.356075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.441 [2024-11-02 14:51:54.356099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.441 qpair failed and we were unable to recover it. 00:36:02.441 [2024-11-02 14:51:54.356219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.441 [2024-11-02 14:51:54.356243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.441 qpair failed and we were unable to recover it. 00:36:02.441 [2024-11-02 14:51:54.356432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.441 [2024-11-02 14:51:54.356458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.441 qpair failed and we were unable to recover it. 00:36:02.441 [2024-11-02 14:51:54.356610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.441 [2024-11-02 14:51:54.356635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.441 qpair failed and we were unable to recover it. 
00:36:02.441 [2024-11-02 14:51:54.356764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.441 [2024-11-02 14:51:54.356791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.441 qpair failed and we were unable to recover it. 00:36:02.441 [2024-11-02 14:51:54.356919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.441 [2024-11-02 14:51:54.356944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.441 qpair failed and we were unable to recover it. 00:36:02.441 [2024-11-02 14:51:54.357097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.441 [2024-11-02 14:51:54.357121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.441 qpair failed and we were unable to recover it. 00:36:02.441 [2024-11-02 14:51:54.357240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.441 [2024-11-02 14:51:54.357273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.441 qpair failed and we were unable to recover it. 00:36:02.441 [2024-11-02 14:51:54.357404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.441 [2024-11-02 14:51:54.357430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.441 qpair failed and we were unable to recover it. 00:36:02.441 [2024-11-02 14:51:54.357583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.441 [2024-11-02 14:51:54.357608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.441 qpair failed and we were unable to recover it. 00:36:02.441 [2024-11-02 14:51:54.357771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.441 [2024-11-02 14:51:54.357797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.441 qpair failed and we were unable to recover it. 00:36:02.441 [2024-11-02 14:51:54.357968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.441 [2024-11-02 14:51:54.357993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.441 qpair failed and we were unable to recover it. 00:36:02.441 [2024-11-02 14:51:54.358166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.441 [2024-11-02 14:51:54.358191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.441 qpair failed and we were unable to recover it. 00:36:02.441 [2024-11-02 14:51:54.358321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.441 [2024-11-02 14:51:54.358347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.441 qpair failed and we were unable to recover it. 
00:36:02.441 [2024-11-02 14:51:54.358473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.441 [2024-11-02 14:51:54.358498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.441 qpair failed and we were unable to recover it. 00:36:02.441 [2024-11-02 14:51:54.358672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.441 [2024-11-02 14:51:54.358696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.441 qpair failed and we were unable to recover it. 00:36:02.441 [2024-11-02 14:51:54.358844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.441 [2024-11-02 14:51:54.358869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.441 qpair failed and we were unable to recover it. 00:36:02.441 [2024-11-02 14:51:54.359018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.441 [2024-11-02 14:51:54.359043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.441 qpair failed and we were unable to recover it. 00:36:02.441 [2024-11-02 14:51:54.359199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.441 [2024-11-02 14:51:54.359225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.441 qpair failed and we were unable to recover it. 00:36:02.441 [2024-11-02 14:51:54.359381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.441 [2024-11-02 14:51:54.359406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.441 qpair failed and we were unable to recover it. 00:36:02.441 [2024-11-02 14:51:54.359550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.441 [2024-11-02 14:51:54.359575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.441 qpair failed and we were unable to recover it. 00:36:02.441 [2024-11-02 14:51:54.359695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.441 [2024-11-02 14:51:54.359721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.441 qpair failed and we were unable to recover it. 00:36:02.441 [2024-11-02 14:51:54.359874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.441 [2024-11-02 14:51:54.359899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.441 qpair failed and we were unable to recover it. 00:36:02.441 [2024-11-02 14:51:54.360047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.441 [2024-11-02 14:51:54.360072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.441 qpair failed and we were unable to recover it. 
00:36:02.441 [2024-11-02 14:51:54.360253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.441 [2024-11-02 14:51:54.360289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.441 qpair failed and we were unable to recover it. 00:36:02.441 [2024-11-02 14:51:54.360420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.441 [2024-11-02 14:51:54.360445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.441 qpair failed and we were unable to recover it. 00:36:02.441 [2024-11-02 14:51:54.360598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.441 [2024-11-02 14:51:54.360624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.441 qpair failed and we were unable to recover it. 00:36:02.441 [2024-11-02 14:51:54.360765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.441 [2024-11-02 14:51:54.360789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.441 qpair failed and we were unable to recover it. 00:36:02.441 [2024-11-02 14:51:54.360964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.441 [2024-11-02 14:51:54.360989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.441 qpair failed and we were unable to recover it. 00:36:02.441 [2024-11-02 14:51:54.361138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.441 [2024-11-02 14:51:54.361162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.441 qpair failed and we were unable to recover it. 00:36:02.441 [2024-11-02 14:51:54.361347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.441 [2024-11-02 14:51:54.361373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.441 qpair failed and we were unable to recover it. 00:36:02.441 [2024-11-02 14:51:54.361524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.441 [2024-11-02 14:51:54.361553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.441 qpair failed and we were unable to recover it. 00:36:02.441 [2024-11-02 14:51:54.361734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.441 [2024-11-02 14:51:54.361759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.441 qpair failed and we were unable to recover it. 00:36:02.441 [2024-11-02 14:51:54.361918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.441 [2024-11-02 14:51:54.361944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.441 qpair failed and we were unable to recover it. 
00:36:02.441 [2024-11-02 14:51:54.362093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.441 [2024-11-02 14:51:54.362119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.441 qpair failed and we were unable to recover it. 00:36:02.442 [2024-11-02 14:51:54.362275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.442 [2024-11-02 14:51:54.362306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.442 qpair failed and we were unable to recover it. 00:36:02.442 [2024-11-02 14:51:54.362443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.442 [2024-11-02 14:51:54.362468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.442 qpair failed and we were unable to recover it. 00:36:02.442 [2024-11-02 14:51:54.362625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.442 [2024-11-02 14:51:54.362650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.442 qpair failed and we were unable to recover it. 00:36:02.442 [2024-11-02 14:51:54.362812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.442 [2024-11-02 14:51:54.362837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.442 qpair failed and we were unable to recover it. 00:36:02.442 [2024-11-02 14:51:54.362981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.442 [2024-11-02 14:51:54.363006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.442 qpair failed and we were unable to recover it. 00:36:02.442 [2024-11-02 14:51:54.363157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.442 [2024-11-02 14:51:54.363183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.442 qpair failed and we were unable to recover it. 00:36:02.442 [2024-11-02 14:51:54.363324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.442 [2024-11-02 14:51:54.363350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.442 qpair failed and we were unable to recover it. 00:36:02.442 [2024-11-02 14:51:54.363476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.442 [2024-11-02 14:51:54.363502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.442 qpair failed and we were unable to recover it. 00:36:02.442 [2024-11-02 14:51:54.363629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.442 [2024-11-02 14:51:54.363654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.442 qpair failed and we were unable to recover it. 
00:36:02.442 [2024-11-02 14:51:54.363801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.442 [2024-11-02 14:51:54.363826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.442 qpair failed and we were unable to recover it. 00:36:02.442 [2024-11-02 14:51:54.363973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.442 [2024-11-02 14:51:54.363999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.442 qpair failed and we were unable to recover it. 00:36:02.442 [2024-11-02 14:51:54.364128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.442 [2024-11-02 14:51:54.364154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.442 qpair failed and we were unable to recover it. 00:36:02.442 [2024-11-02 14:51:54.364328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.442 [2024-11-02 14:51:54.364354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.442 qpair failed and we were unable to recover it. 00:36:02.442 [2024-11-02 14:51:54.364493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.442 [2024-11-02 14:51:54.364518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.442 qpair failed and we were unable to recover it. 00:36:02.442 [2024-11-02 14:51:54.364654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.442 [2024-11-02 14:51:54.364679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.442 qpair failed and we were unable to recover it. 00:36:02.442 [2024-11-02 14:51:54.364804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.442 [2024-11-02 14:51:54.364831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.442 qpair failed and we were unable to recover it. 00:36:02.442 [2024-11-02 14:51:54.364984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.442 [2024-11-02 14:51:54.365009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.442 qpair failed and we were unable to recover it. 00:36:02.442 [2024-11-02 14:51:54.365159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.442 [2024-11-02 14:51:54.365185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.442 qpair failed and we were unable to recover it. 00:36:02.442 [2024-11-02 14:51:54.365332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.442 [2024-11-02 14:51:54.365359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.442 qpair failed and we were unable to recover it. 
00:36:02.442 [2024-11-02 14:51:54.365489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.442 [2024-11-02 14:51:54.365515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.442 qpair failed and we were unable to recover it. 00:36:02.442 [2024-11-02 14:51:54.365671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.442 [2024-11-02 14:51:54.365696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.442 qpair failed and we were unable to recover it. 00:36:02.442 [2024-11-02 14:51:54.365812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.442 [2024-11-02 14:51:54.365838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.442 qpair failed and we were unable to recover it. 00:36:02.442 [2024-11-02 14:51:54.365988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.442 [2024-11-02 14:51:54.366014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.442 qpair failed and we were unable to recover it. 00:36:02.442 [2024-11-02 14:51:54.366155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.442 [2024-11-02 14:51:54.366184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.442 qpair failed and we were unable to recover it. 00:36:02.442 [2024-11-02 14:51:54.366333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.442 [2024-11-02 14:51:54.366359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.442 qpair failed and we were unable to recover it. 00:36:02.442 [2024-11-02 14:51:54.366512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.442 [2024-11-02 14:51:54.366538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.442 qpair failed and we were unable to recover it. 00:36:02.442 [2024-11-02 14:51:54.366691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.442 [2024-11-02 14:51:54.366716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.442 qpair failed and we were unable to recover it. 00:36:02.442 [2024-11-02 14:51:54.366864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.442 [2024-11-02 14:51:54.366890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.442 qpair failed and we were unable to recover it. 00:36:02.442 [2024-11-02 14:51:54.367029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.442 [2024-11-02 14:51:54.367055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.442 qpair failed and we were unable to recover it. 
00:36:02.442 [2024-11-02 14:51:54.367204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.442 [2024-11-02 14:51:54.367230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.442 qpair failed and we were unable to recover it. 00:36:02.442 [2024-11-02 14:51:54.367389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.442 [2024-11-02 14:51:54.367415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.442 qpair failed and we were unable to recover it. 00:36:02.442 [2024-11-02 14:51:54.367537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.442 [2024-11-02 14:51:54.367563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.442 qpair failed and we were unable to recover it. 00:36:02.442 [2024-11-02 14:51:54.367712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.442 [2024-11-02 14:51:54.367738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.442 qpair failed and we were unable to recover it. 00:36:02.442 [2024-11-02 14:51:54.367893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.442 [2024-11-02 14:51:54.367918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.442 qpair failed and we were unable to recover it. 00:36:02.442 [2024-11-02 14:51:54.368046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.442 [2024-11-02 14:51:54.368073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.442 qpair failed and we were unable to recover it. 00:36:02.442 [2024-11-02 14:51:54.368224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.442 [2024-11-02 14:51:54.368250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.442 qpair failed and we were unable to recover it. 00:36:02.442 [2024-11-02 14:51:54.368423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.442 [2024-11-02 14:51:54.368449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.442 qpair failed and we were unable to recover it. 00:36:02.443 [2024-11-02 14:51:54.368628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.443 [2024-11-02 14:51:54.368653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.443 qpair failed and we were unable to recover it. 00:36:02.443 [2024-11-02 14:51:54.368803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.443 [2024-11-02 14:51:54.368827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.443 qpair failed and we were unable to recover it. 
00:36:02.443 [2024-11-02 14:51:54.368949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.443 [2024-11-02 14:51:54.368975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.443 qpair failed and we were unable to recover it. 00:36:02.443 [2024-11-02 14:51:54.369090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.443 [2024-11-02 14:51:54.369117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.443 qpair failed and we were unable to recover it. 00:36:02.443 [2024-11-02 14:51:54.369245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.443 [2024-11-02 14:51:54.369278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.443 qpair failed and we were unable to recover it. 00:36:02.443 [2024-11-02 14:51:54.369446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.443 [2024-11-02 14:51:54.369471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.443 qpair failed and we were unable to recover it. 00:36:02.443 [2024-11-02 14:51:54.369619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.443 [2024-11-02 14:51:54.369645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.443 qpair failed and we were unable to recover it. 00:36:02.443 [2024-11-02 14:51:54.369789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.443 [2024-11-02 14:51:54.369814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.443 qpair failed and we were unable to recover it. 00:36:02.443 [2024-11-02 14:51:54.369935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.443 [2024-11-02 14:51:54.369960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.443 qpair failed and we were unable to recover it. 00:36:02.443 [2024-11-02 14:51:54.370108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.443 [2024-11-02 14:51:54.370133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.443 qpair failed and we were unable to recover it. 00:36:02.443 [2024-11-02 14:51:54.370264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.443 [2024-11-02 14:51:54.370290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.443 qpair failed and we were unable to recover it. 00:36:02.443 [2024-11-02 14:51:54.370406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.443 [2024-11-02 14:51:54.370430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.443 qpair failed and we were unable to recover it. 
00:36:02.443 [2024-11-02 14:51:54.370582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.443 [2024-11-02 14:51:54.370607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.443 qpair failed and we were unable to recover it. 00:36:02.443 [2024-11-02 14:51:54.370727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.443 [2024-11-02 14:51:54.370753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.443 qpair failed and we were unable to recover it. 00:36:02.443 [2024-11-02 14:51:54.370934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.443 [2024-11-02 14:51:54.370959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.443 qpair failed and we were unable to recover it. 00:36:02.443 [2024-11-02 14:51:54.371107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.443 [2024-11-02 14:51:54.371132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.443 qpair failed and we were unable to recover it. 00:36:02.443 [2024-11-02 14:51:54.371280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.443 [2024-11-02 14:51:54.371314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.443 qpair failed and we were unable to recover it. 00:36:02.443 [2024-11-02 14:51:54.371443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.443 [2024-11-02 14:51:54.371468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.443 qpair failed and we were unable to recover it. 00:36:02.443 [2024-11-02 14:51:54.371621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.443 [2024-11-02 14:51:54.371646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.443 qpair failed and we were unable to recover it. 00:36:02.443 [2024-11-02 14:51:54.371764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.443 [2024-11-02 14:51:54.371791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.443 qpair failed and we were unable to recover it. 00:36:02.443 [2024-11-02 14:51:54.371962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.443 [2024-11-02 14:51:54.371987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.443 qpair failed and we were unable to recover it. 00:36:02.443 [2024-11-02 14:51:54.372155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.443 [2024-11-02 14:51:54.372180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.443 qpair failed and we were unable to recover it. 
00:36:02.443 [2024-11-02 14:51:54.372329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.443 [2024-11-02 14:51:54.372355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.443 qpair failed and we were unable to recover it. 00:36:02.443 [2024-11-02 14:51:54.372480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.443 [2024-11-02 14:51:54.372506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.443 qpair failed and we were unable to recover it. 00:36:02.443 [2024-11-02 14:51:54.372653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.443 [2024-11-02 14:51:54.372678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.443 qpair failed and we were unable to recover it. 00:36:02.443 [2024-11-02 14:51:54.372824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.443 [2024-11-02 14:51:54.372850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.443 qpair failed and we were unable to recover it. 00:36:02.443 [2024-11-02 14:51:54.373005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.443 [2024-11-02 14:51:54.373030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.443 qpair failed and we were unable to recover it. 00:36:02.443 [2024-11-02 14:51:54.373158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.443 [2024-11-02 14:51:54.373190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.443 qpair failed and we were unable to recover it. 00:36:02.443 [2024-11-02 14:51:54.373310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.443 [2024-11-02 14:51:54.373336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.443 qpair failed and we were unable to recover it. 00:36:02.443 [2024-11-02 14:51:54.373484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.443 [2024-11-02 14:51:54.373509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.443 qpair failed and we were unable to recover it. 00:36:02.443 [2024-11-02 14:51:54.373667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.443 [2024-11-02 14:51:54.373693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.444 qpair failed and we were unable to recover it. 00:36:02.444 [2024-11-02 14:51:54.373854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.444 [2024-11-02 14:51:54.373879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.444 qpair failed and we were unable to recover it. 
00:36:02.444 [2024-11-02 14:51:54.374051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.444 [2024-11-02 14:51:54.374077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.444 qpair failed and we were unable to recover it. 00:36:02.444 [2024-11-02 14:51:54.374224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.444 [2024-11-02 14:51:54.374249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.444 qpair failed and we were unable to recover it. 00:36:02.444 [2024-11-02 14:51:54.374401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.444 [2024-11-02 14:51:54.374427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.444 qpair failed and we were unable to recover it. 00:36:02.444 [2024-11-02 14:51:54.374604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.444 [2024-11-02 14:51:54.374629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.444 qpair failed and we were unable to recover it. 00:36:02.444 [2024-11-02 14:51:54.374801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.444 [2024-11-02 14:51:54.374826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.444 qpair failed and we were unable to recover it. 00:36:02.444 [2024-11-02 14:51:54.374992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.444 [2024-11-02 14:51:54.375017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.444 qpair failed and we were unable to recover it. 00:36:02.444 [2024-11-02 14:51:54.375169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.444 [2024-11-02 14:51:54.375195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.444 qpair failed and we were unable to recover it. 00:36:02.444 [2024-11-02 14:51:54.375367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.444 [2024-11-02 14:51:54.375394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.444 qpair failed and we were unable to recover it. 00:36:02.444 [2024-11-02 14:51:54.375525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.444 [2024-11-02 14:51:54.375550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.444 qpair failed and we were unable to recover it. 00:36:02.444 [2024-11-02 14:51:54.375706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.444 [2024-11-02 14:51:54.375732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.444 qpair failed and we were unable to recover it. 
00:36:02.444 [2024-11-02 14:51:54.375885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.444 [2024-11-02 14:51:54.375912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.444 qpair failed and we were unable to recover it. 00:36:02.444 [2024-11-02 14:51:54.376056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.444 [2024-11-02 14:51:54.376081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.444 qpair failed and we were unable to recover it. 00:36:02.444 [2024-11-02 14:51:54.376242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.444 [2024-11-02 14:51:54.376275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.444 qpair failed and we were unable to recover it. 00:36:02.444 [2024-11-02 14:51:54.376428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.444 [2024-11-02 14:51:54.376454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.444 qpair failed and we were unable to recover it. 00:36:02.444 [2024-11-02 14:51:54.376640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.444 [2024-11-02 14:51:54.376665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.444 qpair failed and we were unable to recover it. 00:36:02.444 [2024-11-02 14:51:54.376792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.444 [2024-11-02 14:51:54.376817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.444 qpair failed and we were unable to recover it. 00:36:02.444 [2024-11-02 14:51:54.376962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.444 [2024-11-02 14:51:54.376988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.444 qpair failed and we were unable to recover it. 00:36:02.444 [2024-11-02 14:51:54.377119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.444 [2024-11-02 14:51:54.377143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.444 qpair failed and we were unable to recover it. 00:36:02.444 [2024-11-02 14:51:54.377287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.444 [2024-11-02 14:51:54.377319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.444 qpair failed and we were unable to recover it. 00:36:02.444 [2024-11-02 14:51:54.377490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.444 [2024-11-02 14:51:54.377525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.444 qpair failed and we were unable to recover it. 
00:36:02.444 [2024-11-02 14:51:54.377652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.444 [2024-11-02 14:51:54.377677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.444 qpair failed and we were unable to recover it. 00:36:02.444 [2024-11-02 14:51:54.377830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.444 [2024-11-02 14:51:54.377857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.444 qpair failed and we were unable to recover it. 00:36:02.444 [2024-11-02 14:51:54.378032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.444 [2024-11-02 14:51:54.378062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.444 qpair failed and we were unable to recover it. 00:36:02.444 [2024-11-02 14:51:54.378238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.444 [2024-11-02 14:51:54.378269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.444 qpair failed and we were unable to recover it. 00:36:02.444 [2024-11-02 14:51:54.378396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.444 [2024-11-02 14:51:54.378421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.444 qpair failed and we were unable to recover it. 00:36:02.444 [2024-11-02 14:51:54.378571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.444 [2024-11-02 14:51:54.378596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.444 qpair failed and we were unable to recover it. 00:36:02.444 [2024-11-02 14:51:54.378742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.444 [2024-11-02 14:51:54.378768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.444 qpair failed and we were unable to recover it. 00:36:02.444 [2024-11-02 14:51:54.378889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.444 [2024-11-02 14:51:54.378914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.444 qpair failed and we were unable to recover it. 00:36:02.444 [2024-11-02 14:51:54.379068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.444 [2024-11-02 14:51:54.379094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.444 qpair failed and we were unable to recover it. 00:36:02.444 [2024-11-02 14:51:54.379246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.444 [2024-11-02 14:51:54.379279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.444 qpair failed and we were unable to recover it. 
00:36:02.444 [2024-11-02 14:51:54.379437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.444 [2024-11-02 14:51:54.379462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.444 qpair failed and we were unable to recover it. 00:36:02.444 [2024-11-02 14:51:54.379637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.444 [2024-11-02 14:51:54.379662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.444 qpair failed and we were unable to recover it. 00:36:02.444 [2024-11-02 14:51:54.379813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.444 [2024-11-02 14:51:54.379838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.444 qpair failed and we were unable to recover it. 00:36:02.444 [2024-11-02 14:51:54.379961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.444 [2024-11-02 14:51:54.379986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.444 qpair failed and we were unable to recover it. 00:36:02.444 [2024-11-02 14:51:54.380166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.444 [2024-11-02 14:51:54.380191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.444 qpair failed and we were unable to recover it. 00:36:02.444 [2024-11-02 14:51:54.380346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.444 [2024-11-02 14:51:54.380374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.444 qpair failed and we were unable to recover it. 00:36:02.444 [2024-11-02 14:51:54.380552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.444 [2024-11-02 14:51:54.380577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.444 qpair failed and we were unable to recover it. 00:36:02.444 [2024-11-02 14:51:54.380722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.444 [2024-11-02 14:51:54.380747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.444 qpair failed and we were unable to recover it. 00:36:02.444 [2024-11-02 14:51:54.380882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.444 [2024-11-02 14:51:54.380908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.444 qpair failed and we were unable to recover it. 00:36:02.444 [2024-11-02 14:51:54.381054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.444 [2024-11-02 14:51:54.381080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.444 qpair failed and we were unable to recover it. 
00:36:02.444 [2024-11-02 14:51:54.381206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.444 [2024-11-02 14:51:54.381231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.444 qpair failed and we were unable to recover it. 00:36:02.444 [2024-11-02 14:51:54.381392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.444 [2024-11-02 14:51:54.381418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.444 qpair failed and we were unable to recover it. 00:36:02.444 [2024-11-02 14:51:54.381602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.444 [2024-11-02 14:51:54.381628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.444 qpair failed and we were unable to recover it. 00:36:02.444 [2024-11-02 14:51:54.381759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.444 [2024-11-02 14:51:54.381784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.445 qpair failed and we were unable to recover it. 00:36:02.445 [2024-11-02 14:51:54.381933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.445 [2024-11-02 14:51:54.381959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.445 qpair failed and we were unable to recover it. 00:36:02.445 [2024-11-02 14:51:54.382084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.445 [2024-11-02 14:51:54.382109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.445 qpair failed and we were unable to recover it. 00:36:02.445 [2024-11-02 14:51:54.382252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.445 [2024-11-02 14:51:54.382309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.445 qpair failed and we were unable to recover it. 00:36:02.445 [2024-11-02 14:51:54.382436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.445 [2024-11-02 14:51:54.382460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.445 qpair failed and we were unable to recover it. 00:36:02.445 [2024-11-02 14:51:54.382615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.445 [2024-11-02 14:51:54.382640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.445 qpair failed and we were unable to recover it. 00:36:02.445 [2024-11-02 14:51:54.382762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.445 [2024-11-02 14:51:54.382787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.445 qpair failed and we were unable to recover it. 
00:36:02.445 [2024-11-02 14:51:54.382927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.445 [2024-11-02 14:51:54.382952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.445 qpair failed and we were unable to recover it. 00:36:02.445 [2024-11-02 14:51:54.383107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.445 [2024-11-02 14:51:54.383133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.445 qpair failed and we were unable to recover it. 00:36:02.445 [2024-11-02 14:51:54.383281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.445 [2024-11-02 14:51:54.383317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.445 qpair failed and we were unable to recover it. 00:36:02.445 [2024-11-02 14:51:54.383436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.445 [2024-11-02 14:51:54.383461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.445 qpair failed and we were unable to recover it. 00:36:02.445 [2024-11-02 14:51:54.383638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.445 [2024-11-02 14:51:54.383664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.445 qpair failed and we were unable to recover it. 00:36:02.445 [2024-11-02 14:51:54.383783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.445 [2024-11-02 14:51:54.383808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.445 qpair failed and we were unable to recover it. 00:36:02.445 [2024-11-02 14:51:54.383954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.445 [2024-11-02 14:51:54.383980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.445 qpair failed and we were unable to recover it. 00:36:02.445 [2024-11-02 14:51:54.384105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.445 [2024-11-02 14:51:54.384131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.445 qpair failed and we were unable to recover it. 00:36:02.445 [2024-11-02 14:51:54.384283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.445 [2024-11-02 14:51:54.384309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.445 qpair failed and we were unable to recover it. 00:36:02.445 [2024-11-02 14:51:54.384455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.445 [2024-11-02 14:51:54.384481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.445 qpair failed and we were unable to recover it. 
00:36:02.445 [2024-11-02 14:51:54.384653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.445 [2024-11-02 14:51:54.384678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.445 qpair failed and we were unable to recover it. 00:36:02.445 [2024-11-02 14:51:54.384826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.445 [2024-11-02 14:51:54.384850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.445 qpair failed and we were unable to recover it. 00:36:02.445 [2024-11-02 14:51:54.384998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.445 [2024-11-02 14:51:54.385024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.445 qpair failed and we were unable to recover it. 00:36:02.445 [2024-11-02 14:51:54.385170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.445 [2024-11-02 14:51:54.385199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.445 qpair failed and we were unable to recover it. 00:36:02.445 [2024-11-02 14:51:54.385323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.445 [2024-11-02 14:51:54.385349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.445 qpair failed and we were unable to recover it. 00:36:02.445 [2024-11-02 14:51:54.385500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.445 [2024-11-02 14:51:54.385527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.445 qpair failed and we were unable to recover it. 00:36:02.445 [2024-11-02 14:51:54.385677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.445 [2024-11-02 14:51:54.385702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.445 qpair failed and we were unable to recover it. 00:36:02.445 [2024-11-02 14:51:54.385822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.445 [2024-11-02 14:51:54.385848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.445 qpair failed and we were unable to recover it. 00:36:02.445 [2024-11-02 14:51:54.385971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.445 [2024-11-02 14:51:54.385997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.445 qpair failed and we were unable to recover it. 00:36:02.445 [2024-11-02 14:51:54.386117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.445 [2024-11-02 14:51:54.386142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.445 qpair failed and we were unable to recover it. 
00:36:02.445 [2024-11-02 14:51:54.386264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.445 [2024-11-02 14:51:54.386290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.445 qpair failed and we were unable to recover it. 00:36:02.445 [2024-11-02 14:51:54.386455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.445 [2024-11-02 14:51:54.386480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.445 qpair failed and we were unable to recover it. 00:36:02.445 [2024-11-02 14:51:54.386639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.445 [2024-11-02 14:51:54.386665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.445 qpair failed and we were unable to recover it. 00:36:02.445 [2024-11-02 14:51:54.386816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.445 [2024-11-02 14:51:54.386841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.445 qpair failed and we were unable to recover it. 00:36:02.445 [2024-11-02 14:51:54.386988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.445 [2024-11-02 14:51:54.387013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.445 qpair failed and we were unable to recover it. 00:36:02.445 [2024-11-02 14:51:54.387157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.445 [2024-11-02 14:51:54.387182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.445 qpair failed and we were unable to recover it. 00:36:02.445 [2024-11-02 14:51:54.387343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.445 [2024-11-02 14:51:54.387368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.445 qpair failed and we were unable to recover it. 00:36:02.445 [2024-11-02 14:51:54.387496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.445 [2024-11-02 14:51:54.387521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.445 qpair failed and we were unable to recover it. 00:36:02.445 [2024-11-02 14:51:54.387648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.445 [2024-11-02 14:51:54.387674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.445 qpair failed and we were unable to recover it. 00:36:02.445 [2024-11-02 14:51:54.387820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.445 [2024-11-02 14:51:54.387846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.445 qpair failed and we were unable to recover it. 
00:36:02.445 [2024-11-02 14:51:54.387970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.445 [2024-11-02 14:51:54.387996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.445 qpair failed and we were unable to recover it. 00:36:02.445 [2024-11-02 14:51:54.388142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.445 [2024-11-02 14:51:54.388169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.445 qpair failed and we were unable to recover it. 00:36:02.445 [2024-11-02 14:51:54.388322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.445 [2024-11-02 14:51:54.388348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.445 qpair failed and we were unable to recover it. 00:36:02.445 [2024-11-02 14:51:54.388494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.445 [2024-11-02 14:51:54.388520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.445 qpair failed and we were unable to recover it. 00:36:02.445 [2024-11-02 14:51:54.388684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.445 [2024-11-02 14:51:54.388709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.445 qpair failed and we were unable to recover it. 00:36:02.445 [2024-11-02 14:51:54.388832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.445 [2024-11-02 14:51:54.388857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.445 qpair failed and we were unable to recover it. 00:36:02.445 [2024-11-02 14:51:54.389014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.445 [2024-11-02 14:51:54.389040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.445 qpair failed and we were unable to recover it. 00:36:02.445 [2024-11-02 14:51:54.389162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.445 [2024-11-02 14:51:54.389188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.445 qpair failed and we were unable to recover it. 00:36:02.445 [2024-11-02 14:51:54.389342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.445 [2024-11-02 14:51:54.389368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.445 qpair failed and we were unable to recover it. 00:36:02.446 [2024-11-02 14:51:54.389543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.446 [2024-11-02 14:51:54.389568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.446 qpair failed and we were unable to recover it. 
00:36:02.446 [2024-11-02 14:51:54.389719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.446 [2024-11-02 14:51:54.389749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.446 qpair failed and we were unable to recover it. 00:36:02.446 [2024-11-02 14:51:54.389868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.446 [2024-11-02 14:51:54.389893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.446 qpair failed and we were unable to recover it. 00:36:02.446 [2024-11-02 14:51:54.390063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.446 [2024-11-02 14:51:54.390088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.446 qpair failed and we were unable to recover it. 00:36:02.446 [2024-11-02 14:51:54.390232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.446 [2024-11-02 14:51:54.390264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.446 qpair failed and we were unable to recover it. 00:36:02.446 [2024-11-02 14:51:54.390449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.446 [2024-11-02 14:51:54.390474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.446 qpair failed and we were unable to recover it. 00:36:02.446 [2024-11-02 14:51:54.390597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.446 [2024-11-02 14:51:54.390623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.446 qpair failed and we were unable to recover it. 00:36:02.446 [2024-11-02 14:51:54.390765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.446 [2024-11-02 14:51:54.390790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.446 qpair failed and we were unable to recover it. 00:36:02.446 [2024-11-02 14:51:54.390907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.446 [2024-11-02 14:51:54.390933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.446 qpair failed and we were unable to recover it. 00:36:02.446 [2024-11-02 14:51:54.391089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.446 [2024-11-02 14:51:54.391115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.446 qpair failed and we were unable to recover it. 00:36:02.446 [2024-11-02 14:51:54.391291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.446 [2024-11-02 14:51:54.391318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.446 qpair failed and we were unable to recover it. 
00:36:02.446 [2024-11-02 14:51:54.391469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.446 [2024-11-02 14:51:54.391495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420
00:36:02.446 qpair failed and we were unable to recover it.
00:36:02.446 [2024-11-02 14:51:54.391613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.446 [2024-11-02 14:51:54.391638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420
00:36:02.446 qpair failed and we were unable to recover it.
00:36:02.446 [2024-11-02 14:51:54.391792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.446 [2024-11-02 14:51:54.391818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420
00:36:02.446 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() failed, errno = 111 from posix.c:1055; sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 from nvme_tcp.c:2399; "qpair failed and we were unable to recover it.") repeats identically for every reconnect attempt logged between 14:51:54.391929 and 14:51:54.427902 ...]
00:36:02.736 [2024-11-02 14:51:54.428024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.736 [2024-11-02 14:51:54.428050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420
00:36:02.736 qpair failed and we were unable to recover it.
00:36:02.736 [2024-11-02 14:51:54.428178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.736 [2024-11-02 14:51:54.428204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.736 qpair failed and we were unable to recover it. 00:36:02.736 [2024-11-02 14:51:54.428350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.736 [2024-11-02 14:51:54.428377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.736 qpair failed and we were unable to recover it. 00:36:02.736 [2024-11-02 14:51:54.428523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.736 [2024-11-02 14:51:54.428548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.736 qpair failed and we were unable to recover it. 00:36:02.736 [2024-11-02 14:51:54.428660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.736 [2024-11-02 14:51:54.428685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.736 qpair failed and we were unable to recover it. 00:36:02.736 [2024-11-02 14:51:54.428820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.736 [2024-11-02 14:51:54.428853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.736 qpair failed and we were unable to recover it. 00:36:02.736 [2024-11-02 14:51:54.429005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.736 [2024-11-02 14:51:54.429035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.736 qpair failed and we were unable to recover it. 00:36:02.736 [2024-11-02 14:51:54.429196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.736 [2024-11-02 14:51:54.429226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.736 qpair failed and we were unable to recover it. 00:36:02.736 [2024-11-02 14:51:54.429429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.736 [2024-11-02 14:51:54.429456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.736 qpair failed and we were unable to recover it. 00:36:02.736 [2024-11-02 14:51:54.429605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.736 [2024-11-02 14:51:54.429634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.736 qpair failed and we were unable to recover it. 00:36:02.736 [2024-11-02 14:51:54.429781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.736 [2024-11-02 14:51:54.429813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.736 qpair failed and we were unable to recover it. 
00:36:02.736 [2024-11-02 14:51:54.429983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.736 [2024-11-02 14:51:54.430009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.736 qpair failed and we were unable to recover it. 00:36:02.736 [2024-11-02 14:51:54.430134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.736 [2024-11-02 14:51:54.430162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.736 qpair failed and we were unable to recover it. 00:36:02.736 [2024-11-02 14:51:54.430307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.736 [2024-11-02 14:51:54.430333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.736 qpair failed and we were unable to recover it. 00:36:02.736 [2024-11-02 14:51:54.430502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.736 [2024-11-02 14:51:54.430534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.736 qpair failed and we were unable to recover it. 00:36:02.736 [2024-11-02 14:51:54.430699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.736 [2024-11-02 14:51:54.430729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.736 qpair failed and we were unable to recover it. 00:36:02.736 [2024-11-02 14:51:54.430896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.736 [2024-11-02 14:51:54.430930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.736 qpair failed and we were unable to recover it. 00:36:02.736 [2024-11-02 14:51:54.431104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.736 [2024-11-02 14:51:54.431131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.736 qpair failed and we were unable to recover it. 00:36:02.736 [2024-11-02 14:51:54.431279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.736 [2024-11-02 14:51:54.431314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.736 qpair failed and we were unable to recover it. 00:36:02.736 [2024-11-02 14:51:54.431499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.736 [2024-11-02 14:51:54.431527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.736 qpair failed and we were unable to recover it. 00:36:02.736 [2024-11-02 14:51:54.431693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.736 [2024-11-02 14:51:54.431720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.736 qpair failed and we were unable to recover it. 
00:36:02.736 [2024-11-02 14:51:54.431844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.736 [2024-11-02 14:51:54.431878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.736 qpair failed and we were unable to recover it. 00:36:02.736 [2024-11-02 14:51:54.432031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.736 [2024-11-02 14:51:54.432060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.736 qpair failed and we were unable to recover it. 00:36:02.736 [2024-11-02 14:51:54.432226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.736 [2024-11-02 14:51:54.432280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.736 qpair failed and we were unable to recover it. 00:36:02.736 [2024-11-02 14:51:54.432439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.736 [2024-11-02 14:51:54.432466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.736 qpair failed and we were unable to recover it. 00:36:02.736 [2024-11-02 14:51:54.432633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.736 [2024-11-02 14:51:54.432666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.737 qpair failed and we were unable to recover it. 00:36:02.737 [2024-11-02 14:51:54.432819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.737 [2024-11-02 14:51:54.432867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.737 qpair failed and we were unable to recover it. 00:36:02.737 [2024-11-02 14:51:54.433043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.737 [2024-11-02 14:51:54.433071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.737 qpair failed and we were unable to recover it. 00:36:02.737 [2024-11-02 14:51:54.433205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.737 [2024-11-02 14:51:54.433231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.737 qpair failed and we were unable to recover it. 00:36:02.737 [2024-11-02 14:51:54.433434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.737 [2024-11-02 14:51:54.433459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.737 qpair failed and we were unable to recover it. 00:36:02.737 [2024-11-02 14:51:54.433588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.737 [2024-11-02 14:51:54.433614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.737 qpair failed and we were unable to recover it. 
00:36:02.737 [2024-11-02 14:51:54.434705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.737 [2024-11-02 14:51:54.434747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420
00:36:02.737 qpair failed and we were unable to recover it.
00:36:02.739 [2024-11-02 14:51:54.450948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.739 [2024-11-02 14:51:54.450986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420
00:36:02.739 qpair failed and we were unable to recover it.
00:36:02.740 [2024-11-02 14:51:54.455099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.740 [2024-11-02 14:51:54.455123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.740 qpair failed and we were unable to recover it. 00:36:02.740 [2024-11-02 14:51:54.455280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.740 [2024-11-02 14:51:54.455310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.740 qpair failed and we were unable to recover it. 00:36:02.740 [2024-11-02 14:51:54.455441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.740 [2024-11-02 14:51:54.455467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.740 qpair failed and we were unable to recover it. 00:36:02.740 [2024-11-02 14:51:54.455625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.740 [2024-11-02 14:51:54.455651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.740 qpair failed and we were unable to recover it. 00:36:02.740 [2024-11-02 14:51:54.455825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.740 [2024-11-02 14:51:54.455851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.740 qpair failed and we were unable to recover it. 00:36:02.740 [2024-11-02 14:51:54.456026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.740 [2024-11-02 14:51:54.456051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.740 qpair failed and we were unable to recover it. 00:36:02.740 [2024-11-02 14:51:54.456203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.740 [2024-11-02 14:51:54.456229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.740 qpair failed and we were unable to recover it. 00:36:02.740 [2024-11-02 14:51:54.456408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.740 [2024-11-02 14:51:54.456447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.740 qpair failed and we were unable to recover it. 00:36:02.740 [2024-11-02 14:51:54.456582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.740 [2024-11-02 14:51:54.456608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.740 qpair failed and we were unable to recover it. 00:36:02.740 [2024-11-02 14:51:54.456758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.740 [2024-11-02 14:51:54.456784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.740 qpair failed and we were unable to recover it. 
00:36:02.740 [2024-11-02 14:51:54.456915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.740 [2024-11-02 14:51:54.456941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.740 qpair failed and we were unable to recover it. 00:36:02.740 [2024-11-02 14:51:54.457108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.740 [2024-11-02 14:51:54.457134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.740 qpair failed and we were unable to recover it. 00:36:02.740 [2024-11-02 14:51:54.457282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.740 [2024-11-02 14:51:54.457312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.740 qpair failed and we were unable to recover it. 00:36:02.740 [2024-11-02 14:51:54.457467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.740 [2024-11-02 14:51:54.457492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.740 qpair failed and we were unable to recover it. 00:36:02.740 [2024-11-02 14:51:54.457625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.740 [2024-11-02 14:51:54.457652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.740 qpair failed and we were unable to recover it. 00:36:02.740 [2024-11-02 14:51:54.457830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.740 [2024-11-02 14:51:54.457855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.740 qpair failed and we were unable to recover it. 00:36:02.740 [2024-11-02 14:51:54.458003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.740 [2024-11-02 14:51:54.458028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.740 qpair failed and we were unable to recover it. 00:36:02.740 [2024-11-02 14:51:54.458174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.740 [2024-11-02 14:51:54.458200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.740 qpair failed and we were unable to recover it. 00:36:02.740 [2024-11-02 14:51:54.458364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.740 [2024-11-02 14:51:54.458389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.740 qpair failed and we were unable to recover it. 00:36:02.741 [2024-11-02 14:51:54.458538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.741 [2024-11-02 14:51:54.458562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.741 qpair failed and we were unable to recover it. 
00:36:02.741 [2024-11-02 14:51:54.458719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.741 [2024-11-02 14:51:54.458745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.741 qpair failed and we were unable to recover it. 00:36:02.741 [2024-11-02 14:51:54.458874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.741 [2024-11-02 14:51:54.458899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.741 qpair failed and we were unable to recover it. 00:36:02.741 [2024-11-02 14:51:54.459055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.741 [2024-11-02 14:51:54.459086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.741 qpair failed and we were unable to recover it. 00:36:02.741 [2024-11-02 14:51:54.459266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.741 [2024-11-02 14:51:54.459295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.741 qpair failed and we were unable to recover it. 00:36:02.741 [2024-11-02 14:51:54.459424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.741 [2024-11-02 14:51:54.459450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.741 qpair failed and we were unable to recover it. 00:36:02.741 [2024-11-02 14:51:54.459573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.741 [2024-11-02 14:51:54.459598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.741 qpair failed and we were unable to recover it. 00:36:02.741 [2024-11-02 14:51:54.459755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.741 [2024-11-02 14:51:54.459781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.741 qpair failed and we were unable to recover it. 00:36:02.741 [2024-11-02 14:51:54.459967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.741 [2024-11-02 14:51:54.459993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.741 qpair failed and we were unable to recover it. 00:36:02.741 [2024-11-02 14:51:54.460168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.741 [2024-11-02 14:51:54.460193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.741 qpair failed and we were unable to recover it. 00:36:02.741 [2024-11-02 14:51:54.460347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.741 [2024-11-02 14:51:54.460373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.741 qpair failed and we were unable to recover it. 
00:36:02.741 [2024-11-02 14:51:54.460502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.741 [2024-11-02 14:51:54.460528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.741 qpair failed and we were unable to recover it. 00:36:02.741 [2024-11-02 14:51:54.460682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.741 [2024-11-02 14:51:54.460709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.741 qpair failed and we were unable to recover it. 00:36:02.741 [2024-11-02 14:51:54.460830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.741 [2024-11-02 14:51:54.460855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.741 qpair failed and we were unable to recover it. 00:36:02.741 [2024-11-02 14:51:54.461010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.741 [2024-11-02 14:51:54.461036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.741 qpair failed and we were unable to recover it. 00:36:02.741 [2024-11-02 14:51:54.461213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.741 [2024-11-02 14:51:54.461239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.741 qpair failed and we were unable to recover it. 00:36:02.741 [2024-11-02 14:51:54.461411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.741 [2024-11-02 14:51:54.461438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.741 qpair failed and we were unable to recover it. 00:36:02.741 [2024-11-02 14:51:54.461588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.741 [2024-11-02 14:51:54.461614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.741 qpair failed and we were unable to recover it. 00:36:02.741 [2024-11-02 14:51:54.461739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.741 [2024-11-02 14:51:54.461765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.741 qpair failed and we were unable to recover it. 00:36:02.741 [2024-11-02 14:51:54.461893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.741 [2024-11-02 14:51:54.461920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.741 qpair failed and we were unable to recover it. 00:36:02.741 [2024-11-02 14:51:54.462071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.741 [2024-11-02 14:51:54.462096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.741 qpair failed and we were unable to recover it. 
00:36:02.741 [2024-11-02 14:51:54.462219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.741 [2024-11-02 14:51:54.462245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:02.741 qpair failed and we were unable to recover it. 00:36:02.741 [2024-11-02 14:51:54.462449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.741 [2024-11-02 14:51:54.462487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.741 qpair failed and we were unable to recover it. 00:36:02.741 [2024-11-02 14:51:54.462614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.741 [2024-11-02 14:51:54.462641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.741 qpair failed and we were unable to recover it. 00:36:02.741 [2024-11-02 14:51:54.462765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.741 [2024-11-02 14:51:54.462791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.741 qpair failed and we were unable to recover it. 00:36:02.741 [2024-11-02 14:51:54.462962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.741 [2024-11-02 14:51:54.462988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.741 qpair failed and we were unable to recover it. 00:36:02.741 [2024-11-02 14:51:54.463136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.741 [2024-11-02 14:51:54.463161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.741 qpair failed and we were unable to recover it. 00:36:02.741 [2024-11-02 14:51:54.463341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.741 [2024-11-02 14:51:54.463368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.741 qpair failed and we were unable to recover it. 00:36:02.741 [2024-11-02 14:51:54.463484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.741 [2024-11-02 14:51:54.463509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.741 qpair failed and we were unable to recover it. 00:36:02.741 [2024-11-02 14:51:54.463696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.741 [2024-11-02 14:51:54.463722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.741 qpair failed and we were unable to recover it. 00:36:02.741 [2024-11-02 14:51:54.463876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.741 [2024-11-02 14:51:54.463908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.741 qpair failed and we were unable to recover it. 
00:36:02.741 [2024-11-02 14:51:54.464064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.741 [2024-11-02 14:51:54.464089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.741 qpair failed and we were unable to recover it. 00:36:02.741 [2024-11-02 14:51:54.464240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.741 [2024-11-02 14:51:54.464271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.741 qpair failed and we were unable to recover it. 00:36:02.741 [2024-11-02 14:51:54.464418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.741 [2024-11-02 14:51:54.464443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.741 qpair failed and we were unable to recover it. 00:36:02.741 [2024-11-02 14:51:54.464593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.741 [2024-11-02 14:51:54.464618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.741 qpair failed and we were unable to recover it. 00:36:02.741 [2024-11-02 14:51:54.464754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.741 [2024-11-02 14:51:54.464781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.741 qpair failed and we were unable to recover it. 00:36:02.741 [2024-11-02 14:51:54.464959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.741 [2024-11-02 14:51:54.464985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.741 qpair failed and we were unable to recover it. 00:36:02.741 [2024-11-02 14:51:54.465107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.741 [2024-11-02 14:51:54.465131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.741 qpair failed and we were unable to recover it. 00:36:02.741 [2024-11-02 14:51:54.465260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.742 [2024-11-02 14:51:54.465286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.742 qpair failed and we were unable to recover it. 00:36:02.742 [2024-11-02 14:51:54.465406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.742 [2024-11-02 14:51:54.465431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.742 qpair failed and we were unable to recover it. 00:36:02.742 [2024-11-02 14:51:54.465580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.742 [2024-11-02 14:51:54.465606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.742 qpair failed and we were unable to recover it. 
00:36:02.742 [2024-11-02 14:51:54.465780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.742 [2024-11-02 14:51:54.465804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.742 qpair failed and we were unable to recover it. 00:36:02.742 [2024-11-02 14:51:54.465956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.742 [2024-11-02 14:51:54.465982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.742 qpair failed and we were unable to recover it. 00:36:02.742 [2024-11-02 14:51:54.466110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.742 [2024-11-02 14:51:54.466135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.742 qpair failed and we were unable to recover it. 00:36:02.742 [2024-11-02 14:51:54.466266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.742 [2024-11-02 14:51:54.466292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.742 qpair failed and we were unable to recover it. 00:36:02.742 [2024-11-02 14:51:54.466420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.742 [2024-11-02 14:51:54.466446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.742 qpair failed and we were unable to recover it. 00:36:02.742 [2024-11-02 14:51:54.466601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.742 [2024-11-02 14:51:54.466626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.742 qpair failed and we were unable to recover it. 00:36:02.742 [2024-11-02 14:51:54.466775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.742 [2024-11-02 14:51:54.466801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.742 qpair failed and we were unable to recover it. 00:36:02.742 [2024-11-02 14:51:54.466924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.742 [2024-11-02 14:51:54.466953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.742 qpair failed and we were unable to recover it. 00:36:02.742 [2024-11-02 14:51:54.467102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.742 [2024-11-02 14:51:54.467128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.742 qpair failed and we were unable to recover it. 00:36:02.742 [2024-11-02 14:51:54.467250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.742 [2024-11-02 14:51:54.467289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.742 qpair failed and we were unable to recover it. 
00:36:02.742 [2024-11-02 14:51:54.467466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.742 [2024-11-02 14:51:54.467492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.742 qpair failed and we were unable to recover it. 00:36:02.742 [2024-11-02 14:51:54.467650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.742 [2024-11-02 14:51:54.467675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.742 qpair failed and we were unable to recover it. 00:36:02.742 [2024-11-02 14:51:54.467796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.742 [2024-11-02 14:51:54.467820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.742 qpair failed and we were unable to recover it. 00:36:02.742 [2024-11-02 14:51:54.467966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.742 [2024-11-02 14:51:54.467991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.742 qpair failed and we were unable to recover it. 00:36:02.742 [2024-11-02 14:51:54.468174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.742 [2024-11-02 14:51:54.468199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.742 qpair failed and we were unable to recover it. 00:36:02.742 [2024-11-02 14:51:54.468388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.742 [2024-11-02 14:51:54.468413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.742 qpair failed and we were unable to recover it. 00:36:02.742 [2024-11-02 14:51:54.468545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.742 [2024-11-02 14:51:54.468570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.742 qpair failed and we were unable to recover it. 00:36:02.742 [2024-11-02 14:51:54.468719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.742 [2024-11-02 14:51:54.468746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.742 qpair failed and we were unable to recover it. 00:36:02.742 [2024-11-02 14:51:54.468894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.742 [2024-11-02 14:51:54.468919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.742 qpair failed and we were unable to recover it. 00:36:02.742 [2024-11-02 14:51:54.469039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.742 [2024-11-02 14:51:54.469066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.742 qpair failed and we were unable to recover it. 
00:36:02.742 [2024-11-02 14:51:54.469226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.742 [2024-11-02 14:51:54.469251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.742 qpair failed and we were unable to recover it. 00:36:02.742 [2024-11-02 14:51:54.469417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.742 [2024-11-02 14:51:54.469443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.742 qpair failed and we were unable to recover it. 00:36:02.742 [2024-11-02 14:51:54.469580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.742 [2024-11-02 14:51:54.469605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.742 qpair failed and we were unable to recover it. 00:36:02.742 [2024-11-02 14:51:54.469728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.742 [2024-11-02 14:51:54.469754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.742 qpair failed and we were unable to recover it. 00:36:02.742 [2024-11-02 14:51:54.469879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.742 [2024-11-02 14:51:54.469903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.742 qpair failed and we were unable to recover it. 00:36:02.742 [2024-11-02 14:51:54.470027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.742 [2024-11-02 14:51:54.470053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.742 qpair failed and we were unable to recover it. 00:36:02.742 [2024-11-02 14:51:54.470237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.742 [2024-11-02 14:51:54.470270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.742 qpair failed and we were unable to recover it. 00:36:02.742 [2024-11-02 14:51:54.470409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.742 [2024-11-02 14:51:54.470433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.742 qpair failed and we were unable to recover it. 00:36:02.742 [2024-11-02 14:51:54.470581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.742 [2024-11-02 14:51:54.470607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.742 qpair failed and we were unable to recover it. 00:36:02.742 [2024-11-02 14:51:54.470733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.742 [2024-11-02 14:51:54.470764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.742 qpair failed and we were unable to recover it. 
00:36:02.742 [2024-11-02 14:51:54.470941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.742 [2024-11-02 14:51:54.470965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.742 qpair failed and we were unable to recover it. 00:36:02.742 [2024-11-02 14:51:54.471149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.742 [2024-11-02 14:51:54.471176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.742 qpair failed and we were unable to recover it. 00:36:02.742 [2024-11-02 14:51:54.471311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.742 [2024-11-02 14:51:54.471338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.742 qpair failed and we were unable to recover it. 00:36:02.742 [2024-11-02 14:51:54.471459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.742 [2024-11-02 14:51:54.471485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.742 qpair failed and we were unable to recover it. 00:36:02.742 [2024-11-02 14:51:54.471646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.742 [2024-11-02 14:51:54.471671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.742 qpair failed and we were unable to recover it. 00:36:02.742 [2024-11-02 14:51:54.471795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.743 [2024-11-02 14:51:54.471820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.743 qpair failed and we were unable to recover it. 00:36:02.743 [2024-11-02 14:51:54.471942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.743 [2024-11-02 14:51:54.471968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.743 qpair failed and we were unable to recover it. 00:36:02.743 [2024-11-02 14:51:54.472092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.743 [2024-11-02 14:51:54.472118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.743 qpair failed and we were unable to recover it. 00:36:02.743 [2024-11-02 14:51:54.472296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.743 [2024-11-02 14:51:54.472322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.743 qpair failed and we were unable to recover it. 00:36:02.743 [2024-11-02 14:51:54.472446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.743 [2024-11-02 14:51:54.472471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.743 qpair failed and we were unable to recover it. 
00:36:02.743 [2024-11-02 14:51:54.472591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.743 [2024-11-02 14:51:54.472616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.743 qpair failed and we were unable to recover it. 00:36:02.743 [2024-11-02 14:51:54.472744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.743 [2024-11-02 14:51:54.472769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.743 qpair failed and we were unable to recover it. 00:36:02.743 [2024-11-02 14:51:54.472917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.743 [2024-11-02 14:51:54.472942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.743 qpair failed and we were unable to recover it. 00:36:02.743 [2024-11-02 14:51:54.473093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.743 [2024-11-02 14:51:54.473119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.743 qpair failed and we were unable to recover it. 00:36:02.743 [2024-11-02 14:51:54.473272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.743 [2024-11-02 14:51:54.473298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.743 qpair failed and we were unable to recover it. 00:36:02.743 [2024-11-02 14:51:54.473416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.743 [2024-11-02 14:51:54.473442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.743 qpair failed and we were unable to recover it. 00:36:02.743 [2024-11-02 14:51:54.473575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.743 [2024-11-02 14:51:54.473600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.743 qpair failed and we were unable to recover it. 00:36:02.743 [2024-11-02 14:51:54.473772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.743 [2024-11-02 14:51:54.473798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.743 qpair failed and we were unable to recover it. 00:36:02.743 [2024-11-02 14:51:54.473945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.743 [2024-11-02 14:51:54.473970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.743 qpair failed and we were unable to recover it. 00:36:02.743 [2024-11-02 14:51:54.474117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.743 [2024-11-02 14:51:54.474143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.743 qpair failed and we were unable to recover it. 
00:36:02.743 [2024-11-02 14:51:54.474294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.743 [2024-11-02 14:51:54.474321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.743 qpair failed and we were unable to recover it. 00:36:02.743 [2024-11-02 14:51:54.474440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.743 [2024-11-02 14:51:54.474465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.743 qpair failed and we were unable to recover it. 00:36:02.743 [2024-11-02 14:51:54.474584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.743 [2024-11-02 14:51:54.474608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.743 qpair failed and we were unable to recover it. 00:36:02.743 [2024-11-02 14:51:54.474780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.743 [2024-11-02 14:51:54.474806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.743 qpair failed and we were unable to recover it. 00:36:02.743 [2024-11-02 14:51:54.474952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.743 [2024-11-02 14:51:54.474977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.743 qpair failed and we were unable to recover it. 00:36:02.743 [2024-11-02 14:51:54.475122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.743 [2024-11-02 14:51:54.475159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.743 qpair failed and we were unable to recover it. 00:36:02.743 [2024-11-02 14:51:54.475316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.743 [2024-11-02 14:51:54.475343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.743 qpair failed and we were unable to recover it. 00:36:02.743 [2024-11-02 14:51:54.475472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.743 [2024-11-02 14:51:54.475497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.743 qpair failed and we were unable to recover it. 00:36:02.743 [2024-11-02 14:51:54.475654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.743 [2024-11-02 14:51:54.475679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.743 qpair failed and we were unable to recover it. 00:36:02.743 [2024-11-02 14:51:54.475826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.743 [2024-11-02 14:51:54.475851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.743 qpair failed and we were unable to recover it. 
00:36:02.743 [2024-11-02 14:51:54.475999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.743 [2024-11-02 14:51:54.476023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.743 qpair failed and we were unable to recover it. 00:36:02.743 [2024-11-02 14:51:54.476195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.743 [2024-11-02 14:51:54.476220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.743 qpair failed and we were unable to recover it. 00:36:02.743 [2024-11-02 14:51:54.476354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.743 [2024-11-02 14:51:54.476380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.743 qpair failed and we were unable to recover it. 00:36:02.743 [2024-11-02 14:51:54.476503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.743 [2024-11-02 14:51:54.476529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.743 qpair failed and we were unable to recover it. 00:36:02.743 [2024-11-02 14:51:54.476650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.743 [2024-11-02 14:51:54.476676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.743 qpair failed and we were unable to recover it. 00:36:02.743 [2024-11-02 14:51:54.476827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.743 [2024-11-02 14:51:54.476851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.743 qpair failed and we were unable to recover it. 00:36:02.743 [2024-11-02 14:51:54.476993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.743 [2024-11-02 14:51:54.477019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.743 qpair failed and we were unable to recover it. 00:36:02.743 [2024-11-02 14:51:54.477193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.743 [2024-11-02 14:51:54.477218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.743 qpair failed and we were unable to recover it. 00:36:02.743 [2024-11-02 14:51:54.477375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.743 [2024-11-02 14:51:54.477402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.743 qpair failed and we were unable to recover it. 00:36:02.743 [2024-11-02 14:51:54.477568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.743 [2024-11-02 14:51:54.477598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.743 qpair failed and we were unable to recover it. 
00:36:02.743 [2024-11-02 14:51:54.477776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.743 [2024-11-02 14:51:54.477803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.743 qpair failed and we were unable to recover it. 00:36:02.743 [2024-11-02 14:51:54.477922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.743 [2024-11-02 14:51:54.477951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.743 qpair failed and we were unable to recover it. 00:36:02.743 [2024-11-02 14:51:54.478128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.743 [2024-11-02 14:51:54.478153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.743 qpair failed and we were unable to recover it. 00:36:02.743 [2024-11-02 14:51:54.478299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.743 [2024-11-02 14:51:54.478325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.743 qpair failed and we were unable to recover it. 00:36:02.744 [2024-11-02 14:51:54.478478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.744 [2024-11-02 14:51:54.478504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.744 qpair failed and we were unable to recover it. 00:36:02.744 [2024-11-02 14:51:54.478681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.744 [2024-11-02 14:51:54.478706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.744 qpair failed and we were unable to recover it. 00:36:02.744 [2024-11-02 14:51:54.478834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.744 [2024-11-02 14:51:54.478859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.744 qpair failed and we were unable to recover it. 00:36:02.744 [2024-11-02 14:51:54.479011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.744 [2024-11-02 14:51:54.479037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.744 qpair failed and we were unable to recover it. 00:36:02.744 [2024-11-02 14:51:54.479186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.744 [2024-11-02 14:51:54.479213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.744 qpair failed and we were unable to recover it. 00:36:02.744 [2024-11-02 14:51:54.479343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.744 [2024-11-02 14:51:54.479368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.744 qpair failed and we were unable to recover it. 
00:36:02.744 [2024-11-02 14:51:54.479517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.744 [2024-11-02 14:51:54.479545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.744 qpair failed and we were unable to recover it. 00:36:02.744 [2024-11-02 14:51:54.479719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.744 [2024-11-02 14:51:54.479745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.744 qpair failed and we were unable to recover it. 00:36:02.744 [2024-11-02 14:51:54.479866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.744 [2024-11-02 14:51:54.479893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.744 qpair failed and we were unable to recover it. 00:36:02.744 [2024-11-02 14:51:54.480048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.744 [2024-11-02 14:51:54.480075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.744 qpair failed and we were unable to recover it. 00:36:02.744 [2024-11-02 14:51:54.480223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.744 [2024-11-02 14:51:54.480249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.744 qpair failed and we were unable to recover it. 00:36:02.744 [2024-11-02 14:51:54.480419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.744 [2024-11-02 14:51:54.480445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.744 qpair failed and we were unable to recover it. 00:36:02.744 [2024-11-02 14:51:54.480601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.744 [2024-11-02 14:51:54.480627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.744 qpair failed and we were unable to recover it. 00:36:02.744 [2024-11-02 14:51:54.480802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.744 [2024-11-02 14:51:54.480828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.744 qpair failed and we were unable to recover it. 00:36:02.744 [2024-11-02 14:51:54.480945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.744 [2024-11-02 14:51:54.480971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.744 qpair failed and we were unable to recover it. 00:36:02.744 [2024-11-02 14:51:54.481094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.744 [2024-11-02 14:51:54.481120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.744 qpair failed and we were unable to recover it. 
00:36:02.744 [2024-11-02 14:51:54.481282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.744 [2024-11-02 14:51:54.481308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.744 qpair failed and we were unable to recover it. 00:36:02.744 [2024-11-02 14:51:54.481434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.744 [2024-11-02 14:51:54.481461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.744 qpair failed and we were unable to recover it. 00:36:02.744 [2024-11-02 14:51:54.481598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.744 [2024-11-02 14:51:54.481624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.744 qpair failed and we were unable to recover it. 00:36:02.744 [2024-11-02 14:51:54.481776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.744 [2024-11-02 14:51:54.481802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.744 qpair failed and we were unable to recover it. 00:36:02.744 [2024-11-02 14:51:54.481941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.744 [2024-11-02 14:51:54.481967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.744 qpair failed and we were unable to recover it. 00:36:02.744 [2024-11-02 14:51:54.482116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.744 [2024-11-02 14:51:54.482141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.744 qpair failed and we were unable to recover it. 00:36:02.744 [2024-11-02 14:51:54.482275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.744 [2024-11-02 14:51:54.482312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.744 qpair failed and we were unable to recover it. 00:36:02.744 [2024-11-02 14:51:54.482431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.744 [2024-11-02 14:51:54.482457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.744 qpair failed and we were unable to recover it. 00:36:02.744 [2024-11-02 14:51:54.482634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.744 [2024-11-02 14:51:54.482660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.744 qpair failed and we were unable to recover it. 00:36:02.744 [2024-11-02 14:51:54.482809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.744 [2024-11-02 14:51:54.482835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.744 qpair failed and we were unable to recover it. 
00:36:02.750 [2024-11-02 14:51:54.517404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.750 [2024-11-02 14:51:54.517431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.750 qpair failed and we were unable to recover it. 00:36:02.750 [2024-11-02 14:51:54.517581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.750 [2024-11-02 14:51:54.517606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.750 qpair failed and we were unable to recover it. 00:36:02.750 [2024-11-02 14:51:54.517757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.750 [2024-11-02 14:51:54.517783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.750 qpair failed and we were unable to recover it. 00:36:02.750 [2024-11-02 14:51:54.517932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.750 [2024-11-02 14:51:54.517957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.750 qpair failed and we were unable to recover it. 00:36:02.750 [2024-11-02 14:51:54.518086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.750 [2024-11-02 14:51:54.518110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.750 qpair failed and we were unable to recover it. 00:36:02.750 [2024-11-02 14:51:54.518268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.750 [2024-11-02 14:51:54.518299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.750 qpair failed and we were unable to recover it. 00:36:02.750 [2024-11-02 14:51:54.518476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.750 [2024-11-02 14:51:54.518502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.750 qpair failed and we were unable to recover it. 00:36:02.750 [2024-11-02 14:51:54.518618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.750 [2024-11-02 14:51:54.518642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.750 qpair failed and we were unable to recover it. 00:36:02.750 [2024-11-02 14:51:54.518817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.750 [2024-11-02 14:51:54.518843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.750 qpair failed and we were unable to recover it. 00:36:02.750 [2024-11-02 14:51:54.518997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.750 [2024-11-02 14:51:54.519028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.750 qpair failed and we were unable to recover it. 
00:36:02.750 [2024-11-02 14:51:54.519175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.750 [2024-11-02 14:51:54.519200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.750 qpair failed and we were unable to recover it. 00:36:02.750 [2024-11-02 14:51:54.519375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.750 [2024-11-02 14:51:54.519400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.750 qpair failed and we were unable to recover it. 00:36:02.750 [2024-11-02 14:51:54.519525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.750 [2024-11-02 14:51:54.519551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.750 qpair failed and we were unable to recover it. 00:36:02.750 [2024-11-02 14:51:54.519684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.750 [2024-11-02 14:51:54.519709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.750 qpair failed and we were unable to recover it. 00:36:02.750 [2024-11-02 14:51:54.519880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.750 [2024-11-02 14:51:54.519905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.750 qpair failed and we were unable to recover it. 00:36:02.750 [2024-11-02 14:51:54.520059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.750 [2024-11-02 14:51:54.520084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.750 qpair failed and we were unable to recover it. 00:36:02.750 [2024-11-02 14:51:54.520232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.750 [2024-11-02 14:51:54.520265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.750 qpair failed and we were unable to recover it. 00:36:02.750 [2024-11-02 14:51:54.520428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.750 [2024-11-02 14:51:54.520452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.750 qpair failed and we were unable to recover it. 00:36:02.750 [2024-11-02 14:51:54.520623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.750 [2024-11-02 14:51:54.520648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.750 qpair failed and we were unable to recover it. 00:36:02.750 [2024-11-02 14:51:54.520794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.750 [2024-11-02 14:51:54.520819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.750 qpair failed and we were unable to recover it. 
00:36:02.750 [2024-11-02 14:51:54.520938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.750 [2024-11-02 14:51:54.520963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.750 qpair failed and we were unable to recover it. 00:36:02.750 [2024-11-02 14:51:54.521111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.750 [2024-11-02 14:51:54.521137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.750 qpair failed and we were unable to recover it. 00:36:02.750 [2024-11-02 14:51:54.521315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.750 [2024-11-02 14:51:54.521341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.750 qpair failed and we were unable to recover it. 00:36:02.750 [2024-11-02 14:51:54.521504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.750 [2024-11-02 14:51:54.521529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.750 qpair failed and we were unable to recover it. 00:36:02.750 [2024-11-02 14:51:54.521678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.750 [2024-11-02 14:51:54.521702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.750 qpair failed and we were unable to recover it. 00:36:02.750 [2024-11-02 14:51:54.521849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.750 [2024-11-02 14:51:54.521874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.750 qpair failed and we were unable to recover it. 00:36:02.750 [2024-11-02 14:51:54.521996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.750 [2024-11-02 14:51:54.522021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.750 qpair failed and we were unable to recover it. 00:36:02.750 [2024-11-02 14:51:54.522169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.750 [2024-11-02 14:51:54.522194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.750 qpair failed and we were unable to recover it. 00:36:02.750 [2024-11-02 14:51:54.522353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.750 [2024-11-02 14:51:54.522379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.750 qpair failed and we were unable to recover it. 00:36:02.750 [2024-11-02 14:51:54.522506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.750 [2024-11-02 14:51:54.522530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.750 qpair failed and we were unable to recover it. 
00:36:02.750 [2024-11-02 14:51:54.522653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.750 [2024-11-02 14:51:54.522678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.750 qpair failed and we were unable to recover it. 00:36:02.750 [2024-11-02 14:51:54.522827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.750 [2024-11-02 14:51:54.522852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.750 qpair failed and we were unable to recover it. 00:36:02.750 [2024-11-02 14:51:54.522977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.750 [2024-11-02 14:51:54.523002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.750 qpair failed and we were unable to recover it. 00:36:02.750 [2024-11-02 14:51:54.523156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.751 [2024-11-02 14:51:54.523181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.751 qpair failed and we were unable to recover it. 00:36:02.751 [2024-11-02 14:51:54.523329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.751 [2024-11-02 14:51:54.523356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.751 qpair failed and we were unable to recover it. 00:36:02.751 [2024-11-02 14:51:54.523478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.751 [2024-11-02 14:51:54.523503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.751 qpair failed and we were unable to recover it. 00:36:02.751 [2024-11-02 14:51:54.523662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.751 [2024-11-02 14:51:54.523688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.751 qpair failed and we were unable to recover it. 00:36:02.751 [2024-11-02 14:51:54.523835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.751 [2024-11-02 14:51:54.523859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.751 qpair failed and we were unable to recover it. 00:36:02.751 [2024-11-02 14:51:54.523976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.751 [2024-11-02 14:51:54.524002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.751 qpair failed and we were unable to recover it. 00:36:02.751 [2024-11-02 14:51:54.524125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.751 [2024-11-02 14:51:54.524151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.751 qpair failed and we were unable to recover it. 
00:36:02.751 [2024-11-02 14:51:54.524327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.751 [2024-11-02 14:51:54.524353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.751 qpair failed and we were unable to recover it. 00:36:02.751 [2024-11-02 14:51:54.524497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.751 [2024-11-02 14:51:54.524522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.751 qpair failed and we were unable to recover it. 00:36:02.751 [2024-11-02 14:51:54.524647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.751 [2024-11-02 14:51:54.524672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.751 qpair failed and we were unable to recover it. 00:36:02.751 [2024-11-02 14:51:54.524795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.751 [2024-11-02 14:51:54.524820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.751 qpair failed and we were unable to recover it. 00:36:02.751 [2024-11-02 14:51:54.524977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.751 [2024-11-02 14:51:54.525002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.751 qpair failed and we were unable to recover it. 00:36:02.751 [2024-11-02 14:51:54.525155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.751 [2024-11-02 14:51:54.525184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.751 qpair failed and we were unable to recover it. 00:36:02.751 [2024-11-02 14:51:54.525331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.751 [2024-11-02 14:51:54.525357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.751 qpair failed and we were unable to recover it. 00:36:02.751 [2024-11-02 14:51:54.525479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.751 [2024-11-02 14:51:54.525504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.751 qpair failed and we were unable to recover it. 00:36:02.751 [2024-11-02 14:51:54.525649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.751 [2024-11-02 14:51:54.525675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.751 qpair failed and we were unable to recover it. 00:36:02.751 [2024-11-02 14:51:54.525851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.751 [2024-11-02 14:51:54.525880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.751 qpair failed and we were unable to recover it. 
00:36:02.751 [2024-11-02 14:51:54.526056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.751 [2024-11-02 14:51:54.526080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.751 qpair failed and we were unable to recover it. 00:36:02.751 [2024-11-02 14:51:54.526227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.751 [2024-11-02 14:51:54.526252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.751 qpair failed and we were unable to recover it. 00:36:02.751 [2024-11-02 14:51:54.526387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.751 [2024-11-02 14:51:54.526414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.751 qpair failed and we were unable to recover it. 00:36:02.751 [2024-11-02 14:51:54.526570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.751 [2024-11-02 14:51:54.526596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.751 qpair failed and we were unable to recover it. 00:36:02.751 [2024-11-02 14:51:54.526766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.751 [2024-11-02 14:51:54.526791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.751 qpair failed and we were unable to recover it. 00:36:02.751 [2024-11-02 14:51:54.526910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.751 [2024-11-02 14:51:54.526937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.751 qpair failed and we were unable to recover it. 00:36:02.751 [2024-11-02 14:51:54.527057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.751 [2024-11-02 14:51:54.527084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.751 qpair failed and we were unable to recover it. 00:36:02.751 [2024-11-02 14:51:54.527237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.751 [2024-11-02 14:51:54.527270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.751 qpair failed and we were unable to recover it. 00:36:02.751 [2024-11-02 14:51:54.527399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.751 [2024-11-02 14:51:54.527424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.751 qpair failed and we were unable to recover it. 00:36:02.751 [2024-11-02 14:51:54.527600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.751 [2024-11-02 14:51:54.527625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.751 qpair failed and we were unable to recover it. 
00:36:02.751 [2024-11-02 14:51:54.527774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.751 [2024-11-02 14:51:54.527798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.751 qpair failed and we were unable to recover it. 00:36:02.751 [2024-11-02 14:51:54.527952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.751 [2024-11-02 14:51:54.527977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.751 qpair failed and we were unable to recover it. 00:36:02.751 [2024-11-02 14:51:54.528154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.751 [2024-11-02 14:51:54.528178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.751 qpair failed and we were unable to recover it. 00:36:02.751 [2024-11-02 14:51:54.528347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.751 [2024-11-02 14:51:54.528373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.751 qpair failed and we were unable to recover it. 00:36:02.751 [2024-11-02 14:51:54.528512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.751 [2024-11-02 14:51:54.528538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.751 qpair failed and we were unable to recover it. 00:36:02.751 [2024-11-02 14:51:54.528664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.751 [2024-11-02 14:51:54.528689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.751 qpair failed and we were unable to recover it. 00:36:02.751 [2024-11-02 14:51:54.528811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.751 [2024-11-02 14:51:54.528836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.751 qpair failed and we were unable to recover it. 00:36:02.751 [2024-11-02 14:51:54.528983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.751 [2024-11-02 14:51:54.529007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.751 qpair failed and we were unable to recover it. 00:36:02.751 [2024-11-02 14:51:54.529180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.751 [2024-11-02 14:51:54.529205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.751 qpair failed and we were unable to recover it. 00:36:02.751 [2024-11-02 14:51:54.529330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.751 [2024-11-02 14:51:54.529357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.751 qpair failed and we were unable to recover it. 
00:36:02.751 [2024-11-02 14:51:54.529487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.751 [2024-11-02 14:51:54.529512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.751 qpair failed and we were unable to recover it. 00:36:02.752 [2024-11-02 14:51:54.529663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.752 [2024-11-02 14:51:54.529689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.752 qpair failed and we were unable to recover it. 00:36:02.752 [2024-11-02 14:51:54.529837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.752 [2024-11-02 14:51:54.529861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.752 qpair failed and we were unable to recover it. 00:36:02.752 [2024-11-02 14:51:54.529984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.752 [2024-11-02 14:51:54.530009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.752 qpair failed and we were unable to recover it. 00:36:02.752 [2024-11-02 14:51:54.530134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.752 [2024-11-02 14:51:54.530159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.752 qpair failed and we were unable to recover it. 00:36:02.752 [2024-11-02 14:51:54.530305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.752 [2024-11-02 14:51:54.530330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.752 qpair failed and we were unable to recover it. 00:36:02.752 [2024-11-02 14:51:54.530520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.752 [2024-11-02 14:51:54.530545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.752 qpair failed and we were unable to recover it. 00:36:02.752 [2024-11-02 14:51:54.530670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.752 [2024-11-02 14:51:54.530696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.752 qpair failed and we were unable to recover it. 00:36:02.752 [2024-11-02 14:51:54.530843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.752 [2024-11-02 14:51:54.530867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.752 qpair failed and we were unable to recover it. 00:36:02.752 [2024-11-02 14:51:54.531017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.752 [2024-11-02 14:51:54.531042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.752 qpair failed and we were unable to recover it. 
00:36:02.752 [2024-11-02 14:51:54.531200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.752 [2024-11-02 14:51:54.531226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.752 qpair failed and we were unable to recover it. 00:36:02.752 [2024-11-02 14:51:54.531360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.752 [2024-11-02 14:51:54.531385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.752 qpair failed and we were unable to recover it. 00:36:02.752 [2024-11-02 14:51:54.531503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.752 [2024-11-02 14:51:54.531528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.752 qpair failed and we were unable to recover it. 00:36:02.752 [2024-11-02 14:51:54.531680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.752 [2024-11-02 14:51:54.531706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.752 qpair failed and we were unable to recover it. 00:36:02.752 [2024-11-02 14:51:54.531829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.752 [2024-11-02 14:51:54.531854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.752 qpair failed and we were unable to recover it. 00:36:02.752 [2024-11-02 14:51:54.532008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.752 [2024-11-02 14:51:54.532034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.752 qpair failed and we were unable to recover it. 00:36:02.752 [2024-11-02 14:51:54.532184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.752 [2024-11-02 14:51:54.532209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.752 qpair failed and we were unable to recover it. 00:36:02.752 [2024-11-02 14:51:54.532322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.752 [2024-11-02 14:51:54.532347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.752 qpair failed and we were unable to recover it. 00:36:02.752 [2024-11-02 14:51:54.532483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.752 [2024-11-02 14:51:54.532510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.752 qpair failed and we were unable to recover it. 00:36:02.752 [2024-11-02 14:51:54.532667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.752 [2024-11-02 14:51:54.532698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.752 qpair failed and we were unable to recover it. 
00:36:02.752 [2024-11-02 14:51:54.532868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.752 [2024-11-02 14:51:54.532892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.752 qpair failed and we were unable to recover it. 00:36:02.752 [2024-11-02 14:51:54.533020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.752 [2024-11-02 14:51:54.533045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.752 qpair failed and we were unable to recover it. 00:36:02.752 [2024-11-02 14:51:54.533198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.752 [2024-11-02 14:51:54.533223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.752 qpair failed and we were unable to recover it. 00:36:02.752 [2024-11-02 14:51:54.533372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.752 [2024-11-02 14:51:54.533398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.752 qpair failed and we were unable to recover it. 00:36:02.752 [2024-11-02 14:51:54.533552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.752 [2024-11-02 14:51:54.533578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.752 qpair failed and we were unable to recover it. 00:36:02.752 [2024-11-02 14:51:54.533728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.752 [2024-11-02 14:51:54.533754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.752 qpair failed and we were unable to recover it. 00:36:02.752 [2024-11-02 14:51:54.533876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.752 [2024-11-02 14:51:54.533900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.752 qpair failed and we were unable to recover it. 00:36:02.752 [2024-11-02 14:51:54.534072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.752 [2024-11-02 14:51:54.534098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.752 qpair failed and we were unable to recover it. 00:36:02.752 [2024-11-02 14:51:54.534269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.752 [2024-11-02 14:51:54.534295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.752 qpair failed and we were unable to recover it. 00:36:02.752 [2024-11-02 14:51:54.534443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.752 [2024-11-02 14:51:54.534468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.752 qpair failed and we were unable to recover it. 
00:36:02.752 [2024-11-02 14:51:54.534596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.752 [2024-11-02 14:51:54.534622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.752 qpair failed and we were unable to recover it. 00:36:02.752 [2024-11-02 14:51:54.534740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.752 [2024-11-02 14:51:54.534765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.752 qpair failed and we were unable to recover it. 00:36:02.752 [2024-11-02 14:51:54.534916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.752 [2024-11-02 14:51:54.534941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.752 qpair failed and we were unable to recover it. 00:36:02.752 [2024-11-02 14:51:54.535097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.752 [2024-11-02 14:51:54.535123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.752 qpair failed and we were unable to recover it. 00:36:02.752 [2024-11-02 14:51:54.535246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.752 [2024-11-02 14:51:54.535279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.752 qpair failed and we were unable to recover it. 00:36:02.752 [2024-11-02 14:51:54.535433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.752 [2024-11-02 14:51:54.535459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.752 qpair failed and we were unable to recover it. 00:36:02.752 [2024-11-02 14:51:54.535581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.752 [2024-11-02 14:51:54.535607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.752 qpair failed and we were unable to recover it. 00:36:02.752 [2024-11-02 14:51:54.535772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.752 [2024-11-02 14:51:54.535797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.752 qpair failed and we were unable to recover it. 00:36:02.752 [2024-11-02 14:51:54.535921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.752 [2024-11-02 14:51:54.535946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.752 qpair failed and we were unable to recover it. 00:36:02.752 [2024-11-02 14:51:54.536126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.753 [2024-11-02 14:51:54.536152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.753 qpair failed and we were unable to recover it. 
00:36:02.753 [2024-11-02 14:51:54.536310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.753 [2024-11-02 14:51:54.536337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.753 qpair failed and we were unable to recover it. 00:36:02.753 [2024-11-02 14:51:54.536484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.753 [2024-11-02 14:51:54.536509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.753 qpair failed and we were unable to recover it. 00:36:02.753 [2024-11-02 14:51:54.536631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.753 [2024-11-02 14:51:54.536656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.753 qpair failed and we were unable to recover it. 00:36:02.753 [2024-11-02 14:51:54.536806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.753 [2024-11-02 14:51:54.536831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.753 qpair failed and we were unable to recover it. 00:36:02.753 [2024-11-02 14:51:54.536975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.753 [2024-11-02 14:51:54.537001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.753 qpair failed and we were unable to recover it. 00:36:02.753 [2024-11-02 14:51:54.537150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.753 [2024-11-02 14:51:54.537176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.753 qpair failed and we were unable to recover it. 00:36:02.753 [2024-11-02 14:51:54.537324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.753 [2024-11-02 14:51:54.537350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.753 qpair failed and we were unable to recover it. 00:36:02.753 [2024-11-02 14:51:54.537497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.753 [2024-11-02 14:51:54.537520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.753 qpair failed and we were unable to recover it. 00:36:02.753 [2024-11-02 14:51:54.537674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.753 [2024-11-02 14:51:54.537699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.753 qpair failed and we were unable to recover it. 00:36:02.753 [2024-11-02 14:51:54.537835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.753 [2024-11-02 14:51:54.537861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.753 qpair failed and we were unable to recover it. 
00:36:02.753 [2024-11-02 14:51:54.537974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.753 [2024-11-02 14:51:54.537999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.753 qpair failed and we were unable to recover it. 00:36:02.753 [2024-11-02 14:51:54.538125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.753 [2024-11-02 14:51:54.538150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.753 qpair failed and we were unable to recover it. 00:36:02.753 [2024-11-02 14:51:54.538329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.753 [2024-11-02 14:51:54.538355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.753 qpair failed and we were unable to recover it. 00:36:02.753 [2024-11-02 14:51:54.538479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.753 [2024-11-02 14:51:54.538504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.753 qpair failed and we were unable to recover it. 00:36:02.753 [2024-11-02 14:51:54.538652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.753 [2024-11-02 14:51:54.538678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.753 qpair failed and we were unable to recover it. 00:36:02.753 [2024-11-02 14:51:54.538800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.753 [2024-11-02 14:51:54.538825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.753 qpair failed and we were unable to recover it. 00:36:02.753 [2024-11-02 14:51:54.538948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.753 [2024-11-02 14:51:54.538974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.753 qpair failed and we were unable to recover it. 00:36:02.753 [2024-11-02 14:51:54.539127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.753 [2024-11-02 14:51:54.539152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.753 qpair failed and we were unable to recover it. 00:36:02.753 [2024-11-02 14:51:54.539308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.753 [2024-11-02 14:51:54.539334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.753 qpair failed and we were unable to recover it. 00:36:02.753 [2024-11-02 14:51:54.539486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.753 [2024-11-02 14:51:54.539515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.753 qpair failed and we were unable to recover it. 
00:36:02.753 [2024-11-02 14:51:54.539645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.753 [2024-11-02 14:51:54.539670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.753 qpair failed and we were unable to recover it. 00:36:02.753 [2024-11-02 14:51:54.539819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.753 [2024-11-02 14:51:54.539843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.753 qpair failed and we were unable to recover it. 00:36:02.753 [2024-11-02 14:51:54.539989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.753 [2024-11-02 14:51:54.540016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.753 qpair failed and we were unable to recover it. 00:36:02.753 [2024-11-02 14:51:54.540165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.753 [2024-11-02 14:51:54.540190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.753 qpair failed and we were unable to recover it. 00:36:02.753 [2024-11-02 14:51:54.540336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.753 [2024-11-02 14:51:54.540363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.753 qpair failed and we were unable to recover it. 00:36:02.753 [2024-11-02 14:51:54.540481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.753 [2024-11-02 14:51:54.540507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.753 qpair failed and we were unable to recover it. 00:36:02.753 [2024-11-02 14:51:54.540668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.753 [2024-11-02 14:51:54.540693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.753 qpair failed and we were unable to recover it. 00:36:02.753 [2024-11-02 14:51:54.540823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.753 [2024-11-02 14:51:54.540849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.753 qpair failed and we were unable to recover it. 00:36:02.753 [2024-11-02 14:51:54.540966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.753 [2024-11-02 14:51:54.540992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.753 qpair failed and we were unable to recover it. 00:36:02.753 [2024-11-02 14:51:54.541728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.753 [2024-11-02 14:51:54.541759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.753 qpair failed and we were unable to recover it. 
00:36:02.753 [2024-11-02 14:51:54.541922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.753 [2024-11-02 14:51:54.541950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.753 qpair failed and we were unable to recover it. 00:36:02.753 [2024-11-02 14:51:54.542105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.753 [2024-11-02 14:51:54.542132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.753 qpair failed and we were unable to recover it. 00:36:02.753 [2024-11-02 14:51:54.542311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.753 [2024-11-02 14:51:54.542338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.753 qpair failed and we were unable to recover it. 00:36:02.753 [2024-11-02 14:51:54.542485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.753 [2024-11-02 14:51:54.542511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.753 qpair failed and we were unable to recover it. 00:36:02.753 [2024-11-02 14:51:54.542655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.753 [2024-11-02 14:51:54.542681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.753 qpair failed and we were unable to recover it. 00:36:02.753 [2024-11-02 14:51:54.542804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.753 [2024-11-02 14:51:54.542829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.753 qpair failed and we were unable to recover it. 00:36:02.753 [2024-11-02 14:51:54.543004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.753 [2024-11-02 14:51:54.543029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.753 qpair failed and we were unable to recover it. 00:36:02.753 [2024-11-02 14:51:54.543185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.753 [2024-11-02 14:51:54.543212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.753 qpair failed and we were unable to recover it. 00:36:02.754 [2024-11-02 14:51:54.543347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.754 [2024-11-02 14:51:54.543374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.754 qpair failed and we were unable to recover it. 00:36:02.754 [2024-11-02 14:51:54.543530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.754 [2024-11-02 14:51:54.543555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.754 qpair failed and we were unable to recover it. 
00:36:02.754 [2024-11-02 14:51:54.543681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.754 [2024-11-02 14:51:54.543707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.754 qpair failed and we were unable to recover it. 00:36:02.754 [2024-11-02 14:51:54.543835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.754 [2024-11-02 14:51:54.543860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.754 qpair failed and we were unable to recover it. 00:36:02.754 [2024-11-02 14:51:54.544005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.754 [2024-11-02 14:51:54.544039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.754 qpair failed and we were unable to recover it. 00:36:02.754 [2024-11-02 14:51:54.544190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.754 [2024-11-02 14:51:54.544217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.754 qpair failed and we were unable to recover it. 00:36:02.754 [2024-11-02 14:51:54.544369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.754 [2024-11-02 14:51:54.544395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.754 qpair failed and we were unable to recover it. 00:36:02.754 [2024-11-02 14:51:54.544543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.754 [2024-11-02 14:51:54.544568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.754 qpair failed and we were unable to recover it. 00:36:02.754 [2024-11-02 14:51:54.544727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.754 [2024-11-02 14:51:54.544753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.754 qpair failed and we were unable to recover it. 00:36:02.754 [2024-11-02 14:51:54.544930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.754 [2024-11-02 14:51:54.544955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.754 qpair failed and we were unable to recover it. 00:36:02.754 [2024-11-02 14:51:54.545109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.754 [2024-11-02 14:51:54.545135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.754 qpair failed and we were unable to recover it. 00:36:02.754 [2024-11-02 14:51:54.545276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.754 [2024-11-02 14:51:54.545313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.754 qpair failed and we were unable to recover it. 
00:36:02.754 [2024-11-02 14:51:54.545468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.754 [2024-11-02 14:51:54.545495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.754 qpair failed and we were unable to recover it. 00:36:02.754 [2024-11-02 14:51:54.545676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.754 [2024-11-02 14:51:54.545702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.754 qpair failed and we were unable to recover it. 00:36:02.754 [2024-11-02 14:51:54.545860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.754 [2024-11-02 14:51:54.545885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.754 qpair failed and we were unable to recover it. 00:36:02.754 [2024-11-02 14:51:54.546011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.754 [2024-11-02 14:51:54.546036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.754 qpair failed and we were unable to recover it. 00:36:02.754 [2024-11-02 14:51:54.546160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.754 [2024-11-02 14:51:54.546185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.754 qpair failed and we were unable to recover it. 00:36:02.754 [2024-11-02 14:51:54.546334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.754 [2024-11-02 14:51:54.546361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.754 qpair failed and we were unable to recover it. 00:36:02.754 [2024-11-02 14:51:54.546506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.754 [2024-11-02 14:51:54.546532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.754 qpair failed and we were unable to recover it. 00:36:02.754 [2024-11-02 14:51:54.546646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.754 [2024-11-02 14:51:54.546671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.754 qpair failed and we were unable to recover it. 00:36:02.754 [2024-11-02 14:51:54.546828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.754 [2024-11-02 14:51:54.546853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.754 qpair failed and we were unable to recover it. 00:36:02.754 [2024-11-02 14:51:54.546972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.754 [2024-11-02 14:51:54.547003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.754 qpair failed and we were unable to recover it. 
00:36:02.754 [2024-11-02 14:51:54.547898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.754 [2024-11-02 14:51:54.547929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.754 qpair failed and we were unable to recover it. 00:36:02.754 [2024-11-02 14:51:54.548069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.754 [2024-11-02 14:51:54.548096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.754 qpair failed and we were unable to recover it. 00:36:02.754 [2024-11-02 14:51:54.548224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.754 [2024-11-02 14:51:54.548249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.754 qpair failed and we were unable to recover it. 00:36:02.754 [2024-11-02 14:51:54.548405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.754 [2024-11-02 14:51:54.548431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.754 qpair failed and we were unable to recover it. 00:36:02.754 [2024-11-02 14:51:54.548584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.754 [2024-11-02 14:51:54.548609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.754 qpair failed and we were unable to recover it. 00:36:02.754 [2024-11-02 14:51:54.548752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.754 [2024-11-02 14:51:54.548776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.754 qpair failed and we were unable to recover it. 00:36:02.754 [2024-11-02 14:51:54.548906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.754 [2024-11-02 14:51:54.548933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.754 qpair failed and we were unable to recover it. 00:36:02.754 [2024-11-02 14:51:54.549063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.754 [2024-11-02 14:51:54.549088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.754 qpair failed and we were unable to recover it. 00:36:02.754 [2024-11-02 14:51:54.549234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.754 [2024-11-02 14:51:54.549267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.754 qpair failed and we were unable to recover it. 00:36:02.754 [2024-11-02 14:51:54.549416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.754 [2024-11-02 14:51:54.549442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.754 qpair failed and we were unable to recover it. 
00:36:02.754 [2024-11-02 14:51:54.549597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.754 [2024-11-02 14:51:54.549622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.754 qpair failed and we were unable to recover it. 00:36:02.754 [2024-11-02 14:51:54.549786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.754 [2024-11-02 14:51:54.549823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.754 qpair failed and we were unable to recover it. 00:36:02.755 [2024-11-02 14:51:54.549979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.755 [2024-11-02 14:51:54.550007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.755 qpair failed and we were unable to recover it. 00:36:02.755 [2024-11-02 14:51:54.550140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.755 [2024-11-02 14:51:54.550165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.755 qpair failed and we were unable to recover it. 00:36:02.755 [2024-11-02 14:51:54.550313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.755 [2024-11-02 14:51:54.550340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.755 qpair failed and we were unable to recover it. 00:36:02.755 [2024-11-02 14:51:54.550465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.755 [2024-11-02 14:51:54.550491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.755 qpair failed and we were unable to recover it. 00:36:02.755 [2024-11-02 14:51:54.550611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.755 [2024-11-02 14:51:54.550637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.755 qpair failed and we were unable to recover it. 00:36:02.755 [2024-11-02 14:51:54.550807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.755 [2024-11-02 14:51:54.550832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.755 qpair failed and we were unable to recover it. 00:36:02.755 [2024-11-02 14:51:54.551002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.755 [2024-11-02 14:51:54.551027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.755 qpair failed and we were unable to recover it. 00:36:02.755 [2024-11-02 14:51:54.551179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.755 [2024-11-02 14:51:54.551204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.755 qpair failed and we were unable to recover it. 
00:36:02.755 [2024-11-02 14:51:54.551357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.755 [2024-11-02 14:51:54.551383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.755 qpair failed and we were unable to recover it. 00:36:02.755 [2024-11-02 14:51:54.551537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.755 [2024-11-02 14:51:54.551562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.755 qpair failed and we were unable to recover it. 00:36:02.755 [2024-11-02 14:51:54.551679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.755 [2024-11-02 14:51:54.551704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.755 qpair failed and we were unable to recover it. 00:36:02.755 [2024-11-02 14:51:54.551838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.755 [2024-11-02 14:51:54.551865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.755 qpair failed and we were unable to recover it. 00:36:02.755 [2024-11-02 14:51:54.552029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.755 [2024-11-02 14:51:54.552054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.755 qpair failed and we were unable to recover it. 00:36:02.755 [2024-11-02 14:51:54.552240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.755 [2024-11-02 14:51:54.552272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.755 qpair failed and we were unable to recover it. 00:36:02.755 [2024-11-02 14:51:54.552396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.755 [2024-11-02 14:51:54.552426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.755 qpair failed and we were unable to recover it. 00:36:02.755 [2024-11-02 14:51:54.552585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.755 [2024-11-02 14:51:54.552610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.755 qpair failed and we were unable to recover it. 00:36:02.755 [2024-11-02 14:51:54.552793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.755 [2024-11-02 14:51:54.552819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.755 qpair failed and we were unable to recover it. 00:36:02.755 [2024-11-02 14:51:54.552952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.755 [2024-11-02 14:51:54.552977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.755 qpair failed and we were unable to recover it. 
00:36:02.755 [2024-11-02 14:51:54.553127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.755 [2024-11-02 14:51:54.553152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.755 qpair failed and we were unable to recover it. 00:36:02.755 [2024-11-02 14:51:54.553310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.755 [2024-11-02 14:51:54.553337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.755 qpair failed and we were unable to recover it. 00:36:02.755 [2024-11-02 14:51:54.553467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.755 [2024-11-02 14:51:54.553493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.755 qpair failed and we were unable to recover it. 00:36:02.755 [2024-11-02 14:51:54.553673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.755 [2024-11-02 14:51:54.553699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.755 qpair failed and we were unable to recover it. 00:36:02.755 [2024-11-02 14:51:54.553847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.755 [2024-11-02 14:51:54.553872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.755 qpair failed and we were unable to recover it. 00:36:02.755 [2024-11-02 14:51:54.554005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.755 [2024-11-02 14:51:54.554031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.755 qpair failed and we were unable to recover it. 00:36:02.755 [2024-11-02 14:51:54.554174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.755 [2024-11-02 14:51:54.554200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.755 qpair failed and we were unable to recover it. 00:36:02.755 [2024-11-02 14:51:54.554347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.755 [2024-11-02 14:51:54.554372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.755 qpair failed and we were unable to recover it. 00:36:02.755 [2024-11-02 14:51:54.554503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.755 [2024-11-02 14:51:54.554530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.755 qpair failed and we were unable to recover it. 00:36:02.755 [2024-11-02 14:51:54.554679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.755 [2024-11-02 14:51:54.554705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.755 qpair failed and we were unable to recover it. 
00:36:02.755 [2024-11-02 14:51:54.554886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.755 [2024-11-02 14:51:54.554911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.755 qpair failed and we were unable to recover it. 00:36:02.755 [2024-11-02 14:51:54.555068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.755 [2024-11-02 14:51:54.555093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.755 qpair failed and we were unable to recover it. 00:36:02.755 [2024-11-02 14:51:54.555211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.755 [2024-11-02 14:51:54.555237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.755 qpair failed and we were unable to recover it. 00:36:02.755 [2024-11-02 14:51:54.555431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.755 [2024-11-02 14:51:54.555457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.755 qpair failed and we were unable to recover it. 00:36:02.755 [2024-11-02 14:51:54.555610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.755 [2024-11-02 14:51:54.555636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.755 qpair failed and we were unable to recover it. 00:36:02.755 [2024-11-02 14:51:54.555787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.755 [2024-11-02 14:51:54.555813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.755 qpair failed and we were unable to recover it. 00:36:02.755 [2024-11-02 14:51:54.555935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.755 [2024-11-02 14:51:54.555960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.755 qpair failed and we were unable to recover it. 00:36:02.755 [2024-11-02 14:51:54.556109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.755 [2024-11-02 14:51:54.556135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.755 qpair failed and we were unable to recover it. 00:36:02.755 [2024-11-02 14:51:54.556284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.755 [2024-11-02 14:51:54.556310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.755 qpair failed and we were unable to recover it. 00:36:02.755 [2024-11-02 14:51:54.556480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.755 [2024-11-02 14:51:54.556505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.755 qpair failed and we were unable to recover it. 
00:36:02.755 [2024-11-02 14:51:54.556651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.756 [2024-11-02 14:51:54.556675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.756 qpair failed and we were unable to recover it. 00:36:02.756 [2024-11-02 14:51:54.556803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.756 [2024-11-02 14:51:54.556829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.756 qpair failed and we were unable to recover it. 00:36:02.756 [2024-11-02 14:51:54.556951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.756 [2024-11-02 14:51:54.556977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.756 qpair failed and we were unable to recover it. 00:36:02.756 [2024-11-02 14:51:54.557129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.756 [2024-11-02 14:51:54.557155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.756 qpair failed and we were unable to recover it. 00:36:02.756 [2024-11-02 14:51:54.557305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.756 [2024-11-02 14:51:54.557332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.756 qpair failed and we were unable to recover it. 00:36:02.756 [2024-11-02 14:51:54.557447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.756 [2024-11-02 14:51:54.557472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.756 qpair failed and we were unable to recover it. 00:36:02.756 [2024-11-02 14:51:54.557650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.756 [2024-11-02 14:51:54.557676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.756 qpair failed and we were unable to recover it. 00:36:02.756 [2024-11-02 14:51:54.557825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.756 [2024-11-02 14:51:54.557850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.756 qpair failed and we were unable to recover it. 00:36:02.756 [2024-11-02 14:51:54.557974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.756 [2024-11-02 14:51:54.558001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.756 qpair failed and we were unable to recover it. 00:36:02.756 [2024-11-02 14:51:54.558133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.756 [2024-11-02 14:51:54.558160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.756 qpair failed and we were unable to recover it. 
00:36:02.756 [2024-11-02 14:51:54.558295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.756 [2024-11-02 14:51:54.558321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.756 qpair failed and we were unable to recover it. 00:36:02.756 [2024-11-02 14:51:54.558445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.756 [2024-11-02 14:51:54.558471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.756 qpair failed and we were unable to recover it. 00:36:02.756 [2024-11-02 14:51:54.558627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.756 [2024-11-02 14:51:54.558653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.756 qpair failed and we were unable to recover it. 00:36:02.756 [2024-11-02 14:51:54.558772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.756 [2024-11-02 14:51:54.558803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.756 qpair failed and we were unable to recover it. 00:36:02.756 [2024-11-02 14:51:54.558979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.756 [2024-11-02 14:51:54.559005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.756 qpair failed and we were unable to recover it. 00:36:02.756 [2024-11-02 14:51:54.559129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.756 [2024-11-02 14:51:54.559154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.756 qpair failed and we were unable to recover it. 00:36:02.756 [2024-11-02 14:51:54.559307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.756 [2024-11-02 14:51:54.559338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.756 qpair failed and we were unable to recover it. 00:36:02.756 [2024-11-02 14:51:54.559492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.756 [2024-11-02 14:51:54.559519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.756 qpair failed and we were unable to recover it. 00:36:02.756 [2024-11-02 14:51:54.559667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.756 [2024-11-02 14:51:54.559692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.756 qpair failed and we were unable to recover it. 00:36:02.756 [2024-11-02 14:51:54.559844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.756 [2024-11-02 14:51:54.559869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.756 qpair failed and we were unable to recover it. 
00:36:02.756 [2024-11-02 14:51:54.560017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.756 [2024-11-02 14:51:54.560042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.756 qpair failed and we were unable to recover it. 00:36:02.756 [2024-11-02 14:51:54.560165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.756 [2024-11-02 14:51:54.560190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.756 qpair failed and we were unable to recover it. 00:36:02.756 [2024-11-02 14:51:54.560325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.756 [2024-11-02 14:51:54.560351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.756 qpair failed and we were unable to recover it. 00:36:02.756 [2024-11-02 14:51:54.560531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.756 [2024-11-02 14:51:54.560556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.756 qpair failed and we were unable to recover it. 00:36:02.756 [2024-11-02 14:51:54.560702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.756 [2024-11-02 14:51:54.560727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.756 qpair failed and we were unable to recover it. 00:36:02.756 [2024-11-02 14:51:54.560863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.756 [2024-11-02 14:51:54.560889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.756 qpair failed and we were unable to recover it. 00:36:02.756 [2024-11-02 14:51:54.561006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.756 [2024-11-02 14:51:54.561032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.756 qpair failed and we were unable to recover it. 00:36:02.756 [2024-11-02 14:51:54.561155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.756 [2024-11-02 14:51:54.561180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.756 qpair failed and we were unable to recover it. 00:36:02.756 [2024-11-02 14:51:54.561321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.756 [2024-11-02 14:51:54.561348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.756 qpair failed and we were unable to recover it. 00:36:02.756 [2024-11-02 14:51:54.561518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.756 [2024-11-02 14:51:54.561550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.756 qpair failed and we were unable to recover it. 
00:36:02.756 [2024-11-02 14:51:54.561705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.756 [2024-11-02 14:51:54.561730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.756 qpair failed and we were unable to recover it. 00:36:02.756 [2024-11-02 14:51:54.561878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.756 [2024-11-02 14:51:54.561904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.756 qpair failed and we were unable to recover it. 00:36:02.756 [2024-11-02 14:51:54.562036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.756 [2024-11-02 14:51:54.562060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.756 qpair failed and we were unable to recover it. 00:36:02.756 [2024-11-02 14:51:54.562208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.756 [2024-11-02 14:51:54.562236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.756 qpair failed and we were unable to recover it. 00:36:02.756 [2024-11-02 14:51:54.562420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.756 [2024-11-02 14:51:54.562446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.756 qpair failed and we were unable to recover it. 00:36:02.756 [2024-11-02 14:51:54.562573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.756 [2024-11-02 14:51:54.562598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.756 qpair failed and we were unable to recover it. 00:36:02.756 [2024-11-02 14:51:54.562725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.756 [2024-11-02 14:51:54.562751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.756 qpair failed and we were unable to recover it. 00:36:02.756 [2024-11-02 14:51:54.562904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.756 [2024-11-02 14:51:54.562929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.756 qpair failed and we were unable to recover it. 00:36:02.756 [2024-11-02 14:51:54.563079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.757 [2024-11-02 14:51:54.563105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.757 qpair failed and we were unable to recover it. 00:36:02.757 [2024-11-02 14:51:54.563225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.757 [2024-11-02 14:51:54.563251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.757 qpair failed and we were unable to recover it. 
00:36:02.757 [2024-11-02 14:51:54.563408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.757 [2024-11-02 14:51:54.563434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.757 qpair failed and we were unable to recover it. 00:36:02.757 [2024-11-02 14:51:54.563610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.757 [2024-11-02 14:51:54.563635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.757 qpair failed and we were unable to recover it. 00:36:02.757 [2024-11-02 14:51:54.563783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.757 [2024-11-02 14:51:54.563808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.757 qpair failed and we were unable to recover it. 00:36:02.757 [2024-11-02 14:51:54.563930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.757 [2024-11-02 14:51:54.563955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.757 qpair failed and we were unable to recover it. 00:36:02.757 [2024-11-02 14:51:54.564081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.757 [2024-11-02 14:51:54.564106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.757 qpair failed and we were unable to recover it. 00:36:02.757 [2024-11-02 14:51:54.564267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.757 [2024-11-02 14:51:54.564294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.757 qpair failed and we were unable to recover it. 00:36:02.757 [2024-11-02 14:51:54.564440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.757 [2024-11-02 14:51:54.564465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.757 qpair failed and we were unable to recover it. 00:36:02.757 [2024-11-02 14:51:54.564617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.757 [2024-11-02 14:51:54.564642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.757 qpair failed and we were unable to recover it. 00:36:02.757 [2024-11-02 14:51:54.564816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.757 [2024-11-02 14:51:54.564841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.757 qpair failed and we were unable to recover it. 00:36:02.757 [2024-11-02 14:51:54.565004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.757 [2024-11-02 14:51:54.565030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.757 qpair failed and we were unable to recover it. 
00:36:02.757 [2024-11-02 14:51:54.565189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.757 [2024-11-02 14:51:54.565214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.757 qpair failed and we were unable to recover it. 00:36:02.757 [2024-11-02 14:51:54.565353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.757 [2024-11-02 14:51:54.565381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.757 qpair failed and we were unable to recover it. 00:36:02.757 [2024-11-02 14:51:54.565509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.757 [2024-11-02 14:51:54.565542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.757 qpair failed and we were unable to recover it. 00:36:02.757 [2024-11-02 14:51:54.565676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.757 [2024-11-02 14:51:54.565701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.757 qpair failed and we were unable to recover it. 00:36:02.757 [2024-11-02 14:51:54.565877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.757 [2024-11-02 14:51:54.565902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.757 qpair failed and we were unable to recover it. 00:36:02.757 [2024-11-02 14:51:54.566019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.757 [2024-11-02 14:51:54.566046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.757 qpair failed and we were unable to recover it. 00:36:02.757 [2024-11-02 14:51:54.566168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.757 [2024-11-02 14:51:54.566198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.757 qpair failed and we were unable to recover it. 00:36:02.757 [2024-11-02 14:51:54.566345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.757 [2024-11-02 14:51:54.566372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.757 qpair failed and we were unable to recover it. 00:36:02.757 [2024-11-02 14:51:54.566547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.757 [2024-11-02 14:51:54.566572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.757 qpair failed and we were unable to recover it. 00:36:02.757 [2024-11-02 14:51:54.566724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.757 [2024-11-02 14:51:54.566749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.757 qpair failed and we were unable to recover it. 
00:36:02.757 [2024-11-02 14:51:54.566874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.757 [2024-11-02 14:51:54.566899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.757 qpair failed and we were unable to recover it. 00:36:02.757 [2024-11-02 14:51:54.567044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.757 [2024-11-02 14:51:54.567069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.757 qpair failed and we were unable to recover it. 00:36:02.757 [2024-11-02 14:51:54.567215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.757 [2024-11-02 14:51:54.567241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.757 qpair failed and we were unable to recover it. 00:36:02.757 [2024-11-02 14:51:54.567375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.757 [2024-11-02 14:51:54.567400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.757 qpair failed and we were unable to recover it. 00:36:02.757 [2024-11-02 14:51:54.567519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.757 [2024-11-02 14:51:54.567545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.757 qpair failed and we were unable to recover it. 00:36:02.757 [2024-11-02 14:51:54.567672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.757 [2024-11-02 14:51:54.567698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.757 qpair failed and we were unable to recover it. 00:36:02.757 [2024-11-02 14:51:54.567850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.757 [2024-11-02 14:51:54.567874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.757 qpair failed and we were unable to recover it. 00:36:02.757 [2024-11-02 14:51:54.567998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.757 [2024-11-02 14:51:54.568024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.757 qpair failed and we were unable to recover it. 00:36:02.757 [2024-11-02 14:51:54.568164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.757 [2024-11-02 14:51:54.568188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.757 qpair failed and we were unable to recover it. 00:36:02.757 [2024-11-02 14:51:54.568321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.757 [2024-11-02 14:51:54.568347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.757 qpair failed and we were unable to recover it. 
00:36:02.757 [2024-11-02 14:51:54.568473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.757 [2024-11-02 14:51:54.568499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.757 qpair failed and we were unable to recover it. 00:36:02.757 [2024-11-02 14:51:54.568647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.757 [2024-11-02 14:51:54.568672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.757 qpair failed and we were unable to recover it. 00:36:02.757 [2024-11-02 14:51:54.568833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.757 [2024-11-02 14:51:54.568860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.757 qpair failed and we were unable to recover it. 00:36:02.757 [2024-11-02 14:51:54.569007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.757 [2024-11-02 14:51:54.569039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.757 qpair failed and we were unable to recover it. 00:36:02.757 [2024-11-02 14:51:54.569186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.757 [2024-11-02 14:51:54.569211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.757 qpair failed and we were unable to recover it. 00:36:02.757 [2024-11-02 14:51:54.569337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.757 [2024-11-02 14:51:54.569363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.757 qpair failed and we were unable to recover it. 00:36:02.757 [2024-11-02 14:51:54.569536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.757 [2024-11-02 14:51:54.569562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.757 qpair failed and we were unable to recover it. 00:36:02.758 [2024-11-02 14:51:54.569713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.758 [2024-11-02 14:51:54.569740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.758 qpair failed and we were unable to recover it. 00:36:02.758 [2024-11-02 14:51:54.569889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.758 [2024-11-02 14:51:54.569915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.758 qpair failed and we were unable to recover it. 00:36:02.758 [2024-11-02 14:51:54.570086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.758 [2024-11-02 14:51:54.570112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.758 qpair failed and we were unable to recover it. 
00:36:02.758 [2024-11-02 14:51:54.570240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.758 [2024-11-02 14:51:54.570274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420
00:36:02.758 qpair failed and we were unable to recover it.
00:36:02.758 [... the same three-message sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every reconnect attempt from 14:51:54.570459 through 14:51:54.606153 ...]
00:36:02.763 [2024-11-02 14:51:54.606304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.763 [2024-11-02 14:51:54.606330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.763 qpair failed and we were unable to recover it. 00:36:02.763 [2024-11-02 14:51:54.606461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.763 [2024-11-02 14:51:54.606487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.763 qpair failed and we were unable to recover it. 00:36:02.763 [2024-11-02 14:51:54.606662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.763 [2024-11-02 14:51:54.606686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.763 qpair failed and we were unable to recover it. 00:36:02.763 [2024-11-02 14:51:54.606847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.763 [2024-11-02 14:51:54.606875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.763 qpair failed and we were unable to recover it. 00:36:02.763 [2024-11-02 14:51:54.607049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.763 [2024-11-02 14:51:54.607074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.763 qpair failed and we were unable to recover it. 00:36:02.763 [2024-11-02 14:51:54.607198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.763 [2024-11-02 14:51:54.607223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.763 qpair failed and we were unable to recover it. 00:36:02.763 [2024-11-02 14:51:54.607357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.763 [2024-11-02 14:51:54.607387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.763 qpair failed and we were unable to recover it. 00:36:02.763 [2024-11-02 14:51:54.607543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.763 [2024-11-02 14:51:54.607569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.763 qpair failed and we were unable to recover it. 00:36:02.763 [2024-11-02 14:51:54.607745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.764 [2024-11-02 14:51:54.607771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.764 qpair failed and we were unable to recover it. 00:36:02.764 [2024-11-02 14:51:54.607921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.764 [2024-11-02 14:51:54.607946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.764 qpair failed and we were unable to recover it. 
00:36:02.764 [2024-11-02 14:51:54.608074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.764 [2024-11-02 14:51:54.608100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.764 qpair failed and we were unable to recover it. 00:36:02.764 [2024-11-02 14:51:54.608229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.764 [2024-11-02 14:51:54.608264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.764 qpair failed and we were unable to recover it. 00:36:02.764 [2024-11-02 14:51:54.608435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.764 [2024-11-02 14:51:54.608460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.764 qpair failed and we were unable to recover it. 00:36:02.764 [2024-11-02 14:51:54.608638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.764 [2024-11-02 14:51:54.608663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.764 qpair failed and we were unable to recover it. 00:36:02.764 [2024-11-02 14:51:54.608811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.764 [2024-11-02 14:51:54.608837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.764 qpair failed and we were unable to recover it. 00:36:02.764 [2024-11-02 14:51:54.609022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.764 [2024-11-02 14:51:54.609048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.764 qpair failed and we were unable to recover it. 00:36:02.764 [2024-11-02 14:51:54.609196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.764 [2024-11-02 14:51:54.609221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.764 qpair failed and we were unable to recover it. 00:36:02.764 [2024-11-02 14:51:54.609395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.764 [2024-11-02 14:51:54.609423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.764 qpair failed and we were unable to recover it. 00:36:02.764 [2024-11-02 14:51:54.609551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.764 [2024-11-02 14:51:54.609577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.764 qpair failed and we were unable to recover it. 00:36:02.764 [2024-11-02 14:51:54.609753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.764 [2024-11-02 14:51:54.609778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.764 qpair failed and we were unable to recover it. 
00:36:02.764 [2024-11-02 14:51:54.609897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.764 [2024-11-02 14:51:54.609923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.764 qpair failed and we were unable to recover it. 00:36:02.764 [2024-11-02 14:51:54.610079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.764 [2024-11-02 14:51:54.610105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.764 qpair failed and we were unable to recover it. 00:36:02.764 [2024-11-02 14:51:54.610228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.764 [2024-11-02 14:51:54.610254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.764 qpair failed and we were unable to recover it. 00:36:02.764 [2024-11-02 14:51:54.610391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.764 [2024-11-02 14:51:54.610415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.764 qpair failed and we were unable to recover it. 00:36:02.764 [2024-11-02 14:51:54.610563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.764 [2024-11-02 14:51:54.610589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.764 qpair failed and we were unable to recover it. 00:36:02.764 [2024-11-02 14:51:54.610717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.764 [2024-11-02 14:51:54.610743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.764 qpair failed and we were unable to recover it. 00:36:02.764 [2024-11-02 14:51:54.610865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.764 [2024-11-02 14:51:54.610890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.764 qpair failed and we were unable to recover it. 00:36:02.764 [2024-11-02 14:51:54.611042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.764 [2024-11-02 14:51:54.611068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.764 qpair failed and we were unable to recover it. 00:36:02.764 [2024-11-02 14:51:54.611223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.764 [2024-11-02 14:51:54.611250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.764 qpair failed and we were unable to recover it. 00:36:02.764 [2024-11-02 14:51:54.611410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.764 [2024-11-02 14:51:54.611435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.764 qpair failed and we were unable to recover it. 
00:36:02.764 [2024-11-02 14:51:54.611559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.764 [2024-11-02 14:51:54.611583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.764 qpair failed and we were unable to recover it. 00:36:02.764 [2024-11-02 14:51:54.611756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.764 [2024-11-02 14:51:54.611783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.764 qpair failed and we were unable to recover it. 00:36:02.764 [2024-11-02 14:51:54.611902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.764 [2024-11-02 14:51:54.611927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.764 qpair failed and we were unable to recover it. 00:36:02.764 [2024-11-02 14:51:54.612058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.764 [2024-11-02 14:51:54.612084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.764 qpair failed and we were unable to recover it. 00:36:02.764 [2024-11-02 14:51:54.612236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.764 [2024-11-02 14:51:54.612269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.764 qpair failed and we were unable to recover it. 00:36:02.764 [2024-11-02 14:51:54.612419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.764 [2024-11-02 14:51:54.612445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.764 qpair failed and we were unable to recover it. 00:36:02.764 [2024-11-02 14:51:54.612569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.764 [2024-11-02 14:51:54.612596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.764 qpair failed and we were unable to recover it. 00:36:02.764 [2024-11-02 14:51:54.612751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.764 [2024-11-02 14:51:54.612776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.764 qpair failed and we were unable to recover it. 00:36:02.764 [2024-11-02 14:51:54.612918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.764 [2024-11-02 14:51:54.612942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.764 qpair failed and we were unable to recover it. 00:36:02.764 [2024-11-02 14:51:54.613089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.764 [2024-11-02 14:51:54.613116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.764 qpair failed and we were unable to recover it. 
00:36:02.764 [2024-11-02 14:51:54.613235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.764 [2024-11-02 14:51:54.613268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.764 qpair failed and we were unable to recover it. 00:36:02.764 [2024-11-02 14:51:54.613423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.764 [2024-11-02 14:51:54.613448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.764 qpair failed and we were unable to recover it. 00:36:02.764 [2024-11-02 14:51:54.613570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.764 [2024-11-02 14:51:54.613596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.764 qpair failed and we were unable to recover it. 00:36:02.764 [2024-11-02 14:51:54.613748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.764 [2024-11-02 14:51:54.613774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.764 qpair failed and we were unable to recover it. 00:36:02.764 [2024-11-02 14:51:54.613949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.764 [2024-11-02 14:51:54.613974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.764 qpair failed and we were unable to recover it. 00:36:02.764 [2024-11-02 14:51:54.614093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.764 [2024-11-02 14:51:54.614120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.764 qpair failed and we were unable to recover it. 00:36:02.764 [2024-11-02 14:51:54.614242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.765 [2024-11-02 14:51:54.614280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.765 qpair failed and we were unable to recover it. 00:36:02.765 [2024-11-02 14:51:54.614430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.765 [2024-11-02 14:51:54.614456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.765 qpair failed and we were unable to recover it. 00:36:02.765 [2024-11-02 14:51:54.614607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.765 [2024-11-02 14:51:54.614632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.765 qpair failed and we were unable to recover it. 00:36:02.765 [2024-11-02 14:51:54.614758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.765 [2024-11-02 14:51:54.614783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.765 qpair failed and we were unable to recover it. 
00:36:02.765 [2024-11-02 14:51:54.614935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.765 [2024-11-02 14:51:54.614960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.765 qpair failed and we were unable to recover it. 00:36:02.765 [2024-11-02 14:51:54.615115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.765 [2024-11-02 14:51:54.615141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.765 qpair failed and we were unable to recover it. 00:36:02.765 [2024-11-02 14:51:54.615286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.765 [2024-11-02 14:51:54.615315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.765 qpair failed and we were unable to recover it. 00:36:02.765 [2024-11-02 14:51:54.615471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.765 [2024-11-02 14:51:54.615496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.765 qpair failed and we were unable to recover it. 00:36:02.765 [2024-11-02 14:51:54.615617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.765 [2024-11-02 14:51:54.615643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.765 qpair failed and we were unable to recover it. 00:36:02.765 [2024-11-02 14:51:54.615797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.765 [2024-11-02 14:51:54.615822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.765 qpair failed and we were unable to recover it. 00:36:02.765 [2024-11-02 14:51:54.615943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.765 [2024-11-02 14:51:54.615968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.765 qpair failed and we were unable to recover it. 00:36:02.765 [2024-11-02 14:51:54.616090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.765 [2024-11-02 14:51:54.616116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.765 qpair failed and we were unable to recover it. 00:36:02.765 [2024-11-02 14:51:54.616280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.765 [2024-11-02 14:51:54.616305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.765 qpair failed and we were unable to recover it. 00:36:02.765 [2024-11-02 14:51:54.616452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.765 [2024-11-02 14:51:54.616478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.765 qpair failed and we were unable to recover it. 
00:36:02.765 [2024-11-02 14:51:54.616635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.765 [2024-11-02 14:51:54.616660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.765 qpair failed and we were unable to recover it. 00:36:02.765 [2024-11-02 14:51:54.616808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.765 [2024-11-02 14:51:54.616833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.765 qpair failed and we were unable to recover it. 00:36:02.765 [2024-11-02 14:51:54.616977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.765 [2024-11-02 14:51:54.617002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.765 qpair failed and we were unable to recover it. 00:36:02.765 [2024-11-02 14:51:54.617159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.765 [2024-11-02 14:51:54.617185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.765 qpair failed and we were unable to recover it. 00:36:02.765 [2024-11-02 14:51:54.617362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.765 [2024-11-02 14:51:54.617388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.765 qpair failed and we were unable to recover it. 00:36:02.765 [2024-11-02 14:51:54.617509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.765 [2024-11-02 14:51:54.617536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.765 qpair failed and we were unable to recover it. 00:36:02.765 [2024-11-02 14:51:54.617706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.765 [2024-11-02 14:51:54.617732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.765 qpair failed and we were unable to recover it. 00:36:02.765 [2024-11-02 14:51:54.617883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.765 [2024-11-02 14:51:54.617908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.765 qpair failed and we were unable to recover it. 00:36:02.765 [2024-11-02 14:51:54.618035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.765 [2024-11-02 14:51:54.618060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.765 qpair failed and we were unable to recover it. 00:36:02.765 [2024-11-02 14:51:54.618210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.765 [2024-11-02 14:51:54.618234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.765 qpair failed and we were unable to recover it. 
00:36:02.765 [2024-11-02 14:51:54.618415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.765 [2024-11-02 14:51:54.618442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.765 qpair failed and we were unable to recover it. 00:36:02.765 [2024-11-02 14:51:54.618597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.765 [2024-11-02 14:51:54.618622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.765 qpair failed and we were unable to recover it. 00:36:02.765 [2024-11-02 14:51:54.618769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.765 [2024-11-02 14:51:54.618794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.765 qpair failed and we were unable to recover it. 00:36:02.765 [2024-11-02 14:51:54.618953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.765 [2024-11-02 14:51:54.618978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.765 qpair failed and we were unable to recover it. 00:36:02.765 [2024-11-02 14:51:54.619130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.765 [2024-11-02 14:51:54.619154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.765 qpair failed and we were unable to recover it. 00:36:02.765 [2024-11-02 14:51:54.619284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.765 [2024-11-02 14:51:54.619311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.765 qpair failed and we were unable to recover it. 00:36:02.765 [2024-11-02 14:51:54.619460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.765 [2024-11-02 14:51:54.619487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.765 qpair failed and we were unable to recover it. 00:36:02.765 [2024-11-02 14:51:54.619638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.765 [2024-11-02 14:51:54.619664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.765 qpair failed and we were unable to recover it. 00:36:02.765 [2024-11-02 14:51:54.619847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.765 [2024-11-02 14:51:54.619872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.765 qpair failed and we were unable to recover it. 00:36:02.765 [2024-11-02 14:51:54.620051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.765 [2024-11-02 14:51:54.620078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.765 qpair failed and we were unable to recover it. 
00:36:02.765 [2024-11-02 14:51:54.620264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.765 [2024-11-02 14:51:54.620291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.765 qpair failed and we were unable to recover it. 00:36:02.765 [2024-11-02 14:51:54.620437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.765 [2024-11-02 14:51:54.620462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.765 qpair failed and we were unable to recover it. 00:36:02.765 [2024-11-02 14:51:54.620616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.765 [2024-11-02 14:51:54.620641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.765 qpair failed and we were unable to recover it. 00:36:02.765 [2024-11-02 14:51:54.620772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.765 [2024-11-02 14:51:54.620800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.765 qpair failed and we were unable to recover it. 00:36:02.765 [2024-11-02 14:51:54.620974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.765 [2024-11-02 14:51:54.620999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.766 qpair failed and we were unable to recover it. 00:36:02.766 [2024-11-02 14:51:54.621147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.766 [2024-11-02 14:51:54.621173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.766 qpair failed and we were unable to recover it. 00:36:02.766 [2024-11-02 14:51:54.621300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.766 [2024-11-02 14:51:54.621332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.766 qpair failed and we were unable to recover it. 00:36:02.766 [2024-11-02 14:51:54.621485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.766 [2024-11-02 14:51:54.621512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.766 qpair failed and we were unable to recover it. 00:36:02.766 [2024-11-02 14:51:54.621658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.766 [2024-11-02 14:51:54.621685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.766 qpair failed and we were unable to recover it. 00:36:02.766 [2024-11-02 14:51:54.621835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.766 [2024-11-02 14:51:54.621861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.766 qpair failed and we were unable to recover it. 
00:36:02.766 [2024-11-02 14:51:54.622010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.766 [2024-11-02 14:51:54.622035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.766 qpair failed and we were unable to recover it. 00:36:02.766 [2024-11-02 14:51:54.622191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.766 [2024-11-02 14:51:54.622216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.766 qpair failed and we were unable to recover it. 00:36:02.766 [2024-11-02 14:51:54.622400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.766 [2024-11-02 14:51:54.622426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.766 qpair failed and we were unable to recover it. 00:36:02.766 [2024-11-02 14:51:54.622577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.766 [2024-11-02 14:51:54.622602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.766 qpair failed and we were unable to recover it. 00:36:02.766 [2024-11-02 14:51:54.622729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.766 [2024-11-02 14:51:54.622755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.766 qpair failed and we were unable to recover it. 00:36:02.766 [2024-11-02 14:51:54.622883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.766 [2024-11-02 14:51:54.622911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.766 qpair failed and we were unable to recover it. 00:36:02.766 [2024-11-02 14:51:54.623057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.766 [2024-11-02 14:51:54.623082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.766 qpair failed and we were unable to recover it. 00:36:02.766 [2024-11-02 14:51:54.623207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.766 [2024-11-02 14:51:54.623232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.766 qpair failed and we were unable to recover it. 00:36:02.766 [2024-11-02 14:51:54.623394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.766 [2024-11-02 14:51:54.623421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.766 qpair failed and we were unable to recover it. 00:36:02.766 [2024-11-02 14:51:54.623575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.766 [2024-11-02 14:51:54.623600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.766 qpair failed and we were unable to recover it. 
00:36:02.766 [2024-11-02 14:51:54.623733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.766 [2024-11-02 14:51:54.623759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.766 qpair failed and we were unable to recover it. 00:36:02.766 [2024-11-02 14:51:54.623906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.766 [2024-11-02 14:51:54.623931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.766 qpair failed and we were unable to recover it. 00:36:02.766 [2024-11-02 14:51:54.624075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.766 [2024-11-02 14:51:54.624100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.766 qpair failed and we were unable to recover it. 00:36:02.766 [2024-11-02 14:51:54.624253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.766 [2024-11-02 14:51:54.624286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.766 qpair failed and we were unable to recover it. 00:36:02.766 [2024-11-02 14:51:54.624458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.766 [2024-11-02 14:51:54.624483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.766 qpair failed and we were unable to recover it. 00:36:02.766 [2024-11-02 14:51:54.624608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.766 [2024-11-02 14:51:54.624633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.766 qpair failed and we were unable to recover it. 00:36:02.766 [2024-11-02 14:51:54.624751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.766 [2024-11-02 14:51:54.624776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.766 qpair failed and we were unable to recover it. 00:36:02.766 [2024-11-02 14:51:54.624916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.766 [2024-11-02 14:51:54.624942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.766 qpair failed and we were unable to recover it. 00:36:02.766 [2024-11-02 14:51:54.625118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.766 [2024-11-02 14:51:54.625144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.766 qpair failed and we were unable to recover it. 00:36:02.766 [2024-11-02 14:51:54.625280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.766 [2024-11-02 14:51:54.625306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.766 qpair failed and we were unable to recover it. 
00:36:02.766 [2024-11-02 14:51:54.625454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.766 [2024-11-02 14:51:54.625480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.766 qpair failed and we were unable to recover it. 00:36:02.766 [2024-11-02 14:51:54.625630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.766 [2024-11-02 14:51:54.625655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.766 qpair failed and we were unable to recover it. 00:36:02.766 [2024-11-02 14:51:54.625805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.766 [2024-11-02 14:51:54.625831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.766 qpair failed and we were unable to recover it. 00:36:02.766 [2024-11-02 14:51:54.625987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.766 [2024-11-02 14:51:54.626014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.766 qpair failed and we were unable to recover it. 00:36:02.766 [2024-11-02 14:51:54.626136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.766 [2024-11-02 14:51:54.626161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.766 qpair failed and we were unable to recover it. 00:36:02.766 [2024-11-02 14:51:54.626312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.766 [2024-11-02 14:51:54.626337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.766 qpair failed and we were unable to recover it. 00:36:02.766 [2024-11-02 14:51:54.626491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.766 [2024-11-02 14:51:54.626517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.766 qpair failed and we were unable to recover it. 00:36:02.766 [2024-11-02 14:51:54.626641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.766 [2024-11-02 14:51:54.626666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.766 qpair failed and we were unable to recover it. 00:36:02.766 [2024-11-02 14:51:54.626814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.766 [2024-11-02 14:51:54.626840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.766 qpair failed and we were unable to recover it. 00:36:02.766 [2024-11-02 14:51:54.626996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.766 [2024-11-02 14:51:54.627023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.766 qpair failed and we were unable to recover it. 
00:36:02.766 [2024-11-02 14:51:54.627174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.766 [2024-11-02 14:51:54.627198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.766 qpair failed and we were unable to recover it. 00:36:02.766 [2024-11-02 14:51:54.627353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.766 [2024-11-02 14:51:54.627379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.766 qpair failed and we were unable to recover it. 00:36:02.766 [2024-11-02 14:51:54.627557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.766 [2024-11-02 14:51:54.627583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.766 qpair failed and we were unable to recover it. 00:36:02.767 [2024-11-02 14:51:54.627731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.767 [2024-11-02 14:51:54.627756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.767 qpair failed and we were unable to recover it. 00:36:02.767 [2024-11-02 14:51:54.627881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.767 [2024-11-02 14:51:54.627906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.767 qpair failed and we were unable to recover it. 00:36:02.767 [2024-11-02 14:51:54.628031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.767 [2024-11-02 14:51:54.628058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.767 qpair failed and we were unable to recover it. 00:36:02.767 [2024-11-02 14:51:54.628211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.767 [2024-11-02 14:51:54.628240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.767 qpair failed and we were unable to recover it. 00:36:02.767 [2024-11-02 14:51:54.628374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.767 [2024-11-02 14:51:54.628401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.767 qpair failed and we were unable to recover it. 00:36:02.767 [2024-11-02 14:51:54.628526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.767 [2024-11-02 14:51:54.628552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.767 qpair failed and we were unable to recover it. 00:36:02.767 [2024-11-02 14:51:54.628723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.767 [2024-11-02 14:51:54.628748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.767 qpair failed and we were unable to recover it. 
00:36:02.767 [2024-11-02 14:51:54.628870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.767 [2024-11-02 14:51:54.628897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420
00:36:02.767 qpair failed and we were unable to recover it.
00:36:02.772 [2024-11-02 14:51:54.629047 through 14:51:54.665196] The same three-line error block (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously with only the timestamps advancing; the duplicate entries are omitted here.
00:36:02.772 [2024-11-02 14:51:54.665351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.772 [2024-11-02 14:51:54.665378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.772 qpair failed and we were unable to recover it. 00:36:02.772 [2024-11-02 14:51:54.665525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.772 [2024-11-02 14:51:54.665551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.772 qpair failed and we were unable to recover it. 00:36:02.772 [2024-11-02 14:51:54.665675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.772 [2024-11-02 14:51:54.665701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.772 qpair failed and we were unable to recover it. 00:36:02.772 [2024-11-02 14:51:54.665852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.772 [2024-11-02 14:51:54.665878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.772 qpair failed and we were unable to recover it. 00:36:02.773 [2024-11-02 14:51:54.666023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.773 [2024-11-02 14:51:54.666049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.773 qpair failed and we were unable to recover it. 00:36:02.773 [2024-11-02 14:51:54.666203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.773 [2024-11-02 14:51:54.666228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.773 qpair failed and we were unable to recover it. 00:36:02.773 [2024-11-02 14:51:54.666413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.773 [2024-11-02 14:51:54.666439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.773 qpair failed and we were unable to recover it. 00:36:02.773 [2024-11-02 14:51:54.666584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.773 [2024-11-02 14:51:54.666610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.773 qpair failed and we were unable to recover it. 00:36:02.773 [2024-11-02 14:51:54.666734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.773 [2024-11-02 14:51:54.666759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.773 qpair failed and we were unable to recover it. 00:36:02.773 [2024-11-02 14:51:54.666911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.773 [2024-11-02 14:51:54.666936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.773 qpair failed and we were unable to recover it. 
00:36:02.773 [2024-11-02 14:51:54.667111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.773 [2024-11-02 14:51:54.667135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.773 qpair failed and we were unable to recover it. 00:36:02.773 [2024-11-02 14:51:54.667312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.773 [2024-11-02 14:51:54.667337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.773 qpair failed and we were unable to recover it. 00:36:02.773 [2024-11-02 14:51:54.667504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.773 [2024-11-02 14:51:54.667530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.773 qpair failed and we were unable to recover it. 00:36:02.773 [2024-11-02 14:51:54.667651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.773 [2024-11-02 14:51:54.667681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.773 qpair failed and we were unable to recover it. 00:36:02.773 [2024-11-02 14:51:54.667829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.773 [2024-11-02 14:51:54.667855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.773 qpair failed and we were unable to recover it. 00:36:02.773 [2024-11-02 14:51:54.668004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.773 [2024-11-02 14:51:54.668030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.773 qpair failed and we were unable to recover it. 00:36:02.773 [2024-11-02 14:51:54.668156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.773 [2024-11-02 14:51:54.668182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.773 qpair failed and we were unable to recover it. 00:36:02.773 [2024-11-02 14:51:54.668317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.773 [2024-11-02 14:51:54.668344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.773 qpair failed and we were unable to recover it. 00:36:02.773 [2024-11-02 14:51:54.668489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.773 [2024-11-02 14:51:54.668515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.773 qpair failed and we were unable to recover it. 00:36:02.773 [2024-11-02 14:51:54.668698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.773 [2024-11-02 14:51:54.668724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.773 qpair failed and we were unable to recover it. 
00:36:02.773 [2024-11-02 14:51:54.668872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.773 [2024-11-02 14:51:54.668897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.773 qpair failed and we were unable to recover it. 00:36:02.773 [2024-11-02 14:51:54.669043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.773 [2024-11-02 14:51:54.669069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.773 qpair failed and we were unable to recover it. 00:36:02.773 [2024-11-02 14:51:54.669217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.773 [2024-11-02 14:51:54.669243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.773 qpair failed and we were unable to recover it. 00:36:02.773 [2024-11-02 14:51:54.669407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.773 [2024-11-02 14:51:54.669435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.773 qpair failed and we were unable to recover it. 00:36:02.773 [2024-11-02 14:51:54.669603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.773 [2024-11-02 14:51:54.669630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.773 qpair failed and we were unable to recover it. 00:36:02.773 [2024-11-02 14:51:54.669778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.773 [2024-11-02 14:51:54.669804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.773 qpair failed and we were unable to recover it. 00:36:02.773 [2024-11-02 14:51:54.669932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.773 [2024-11-02 14:51:54.669956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.773 qpair failed and we were unable to recover it. 00:36:02.773 [2024-11-02 14:51:54.670130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.773 [2024-11-02 14:51:54.670156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.773 qpair failed and we were unable to recover it. 00:36:02.773 [2024-11-02 14:51:54.670328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.773 [2024-11-02 14:51:54.670355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.773 qpair failed and we were unable to recover it. 00:36:02.773 [2024-11-02 14:51:54.670506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.773 [2024-11-02 14:51:54.670532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.773 qpair failed and we were unable to recover it. 
00:36:02.773 [2024-11-02 14:51:54.670677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.773 [2024-11-02 14:51:54.670703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.773 qpair failed and we were unable to recover it. 00:36:02.773 [2024-11-02 14:51:54.670817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.773 [2024-11-02 14:51:54.670844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.773 qpair failed and we were unable to recover it. 00:36:02.773 [2024-11-02 14:51:54.671018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.773 [2024-11-02 14:51:54.671043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.773 qpair failed and we were unable to recover it. 00:36:02.773 [2024-11-02 14:51:54.671188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.773 [2024-11-02 14:51:54.671216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.773 qpair failed and we were unable to recover it. 00:36:02.773 [2024-11-02 14:51:54.671348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.773 [2024-11-02 14:51:54.671374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.773 qpair failed and we were unable to recover it. 00:36:02.773 [2024-11-02 14:51:54.671520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.773 [2024-11-02 14:51:54.671546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.773 qpair failed and we were unable to recover it. 00:36:02.773 [2024-11-02 14:51:54.671668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.773 [2024-11-02 14:51:54.671697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.773 qpair failed and we were unable to recover it. 00:36:02.773 [2024-11-02 14:51:54.671842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.773 [2024-11-02 14:51:54.671867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.773 qpair failed and we were unable to recover it. 00:36:02.773 [2024-11-02 14:51:54.672012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.773 [2024-11-02 14:51:54.672037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.773 qpair failed and we were unable to recover it. 00:36:02.773 [2024-11-02 14:51:54.672160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.773 [2024-11-02 14:51:54.672185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.773 qpair failed and we were unable to recover it. 
00:36:02.773 [2024-11-02 14:51:54.672347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.773 [2024-11-02 14:51:54.672374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.773 qpair failed and we were unable to recover it. 00:36:02.773 [2024-11-02 14:51:54.672551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.773 [2024-11-02 14:51:54.672578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.773 qpair failed and we were unable to recover it. 00:36:02.773 [2024-11-02 14:51:54.672710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.774 [2024-11-02 14:51:54.672736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.774 qpair failed and we were unable to recover it. 00:36:02.774 [2024-11-02 14:51:54.672853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.774 [2024-11-02 14:51:54.672878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.774 qpair failed and we were unable to recover it. 00:36:02.774 [2024-11-02 14:51:54.673054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.774 [2024-11-02 14:51:54.673080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.774 qpair failed and we were unable to recover it. 00:36:02.774 [2024-11-02 14:51:54.673228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.774 [2024-11-02 14:51:54.673253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.774 qpair failed and we were unable to recover it. 00:36:02.774 [2024-11-02 14:51:54.673413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.774 [2024-11-02 14:51:54.673439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.774 qpair failed and we were unable to recover it. 00:36:02.774 [2024-11-02 14:51:54.673586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.774 [2024-11-02 14:51:54.673612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.774 qpair failed and we were unable to recover it. 00:36:02.774 [2024-11-02 14:51:54.673761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.774 [2024-11-02 14:51:54.673787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.774 qpair failed and we were unable to recover it. 00:36:02.774 [2024-11-02 14:51:54.673963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.774 [2024-11-02 14:51:54.673989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.774 qpair failed and we were unable to recover it. 
00:36:02.774 [2024-11-02 14:51:54.674140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.774 [2024-11-02 14:51:54.674167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.774 qpair failed and we were unable to recover it. 00:36:02.774 [2024-11-02 14:51:54.674311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.774 [2024-11-02 14:51:54.674338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.774 qpair failed and we were unable to recover it. 00:36:02.774 [2024-11-02 14:51:54.674460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.774 [2024-11-02 14:51:54.674486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.774 qpair failed and we were unable to recover it. 00:36:02.774 [2024-11-02 14:51:54.674605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.774 [2024-11-02 14:51:54.674636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.774 qpair failed and we were unable to recover it. 00:36:02.774 [2024-11-02 14:51:54.674791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.774 [2024-11-02 14:51:54.674816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.774 qpair failed and we were unable to recover it. 00:36:02.774 [2024-11-02 14:51:54.674939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.774 [2024-11-02 14:51:54.674966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.774 qpair failed and we were unable to recover it. 00:36:02.774 [2024-11-02 14:51:54.675109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.774 [2024-11-02 14:51:54.675135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.774 qpair failed and we were unable to recover it. 00:36:02.774 [2024-11-02 14:51:54.675267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.774 [2024-11-02 14:51:54.675294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.774 qpair failed and we were unable to recover it. 00:36:02.774 [2024-11-02 14:51:54.675469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.774 [2024-11-02 14:51:54.675495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.774 qpair failed and we were unable to recover it. 00:36:02.774 [2024-11-02 14:51:54.675662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.774 [2024-11-02 14:51:54.675687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.774 qpair failed and we were unable to recover it. 
00:36:02.774 [2024-11-02 14:51:54.675835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.774 [2024-11-02 14:51:54.675859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.774 qpair failed and we were unable to recover it. 00:36:02.774 [2024-11-02 14:51:54.675985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.774 [2024-11-02 14:51:54.676012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.774 qpair failed and we were unable to recover it. 00:36:02.774 [2024-11-02 14:51:54.676132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.774 [2024-11-02 14:51:54.676159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.774 qpair failed and we were unable to recover it. 00:36:02.774 [2024-11-02 14:51:54.676311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.774 [2024-11-02 14:51:54.676337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.774 qpair failed and we were unable to recover it. 00:36:02.774 [2024-11-02 14:51:54.676460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.774 [2024-11-02 14:51:54.676485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.774 qpair failed and we were unable to recover it. 00:36:02.774 [2024-11-02 14:51:54.676648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.774 [2024-11-02 14:51:54.676673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.774 qpair failed and we were unable to recover it. 00:36:02.774 [2024-11-02 14:51:54.676829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.774 [2024-11-02 14:51:54.676853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.774 qpair failed and we were unable to recover it. 00:36:02.774 [2024-11-02 14:51:54.677002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.774 [2024-11-02 14:51:54.677028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.774 qpair failed and we were unable to recover it. 00:36:02.774 [2024-11-02 14:51:54.677171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.774 [2024-11-02 14:51:54.677195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.774 qpair failed and we were unable to recover it. 00:36:02.774 [2024-11-02 14:51:54.677337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.774 [2024-11-02 14:51:54.677363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.774 qpair failed and we were unable to recover it. 
00:36:02.774 [2024-11-02 14:51:54.677490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.774 [2024-11-02 14:51:54.677516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.774 qpair failed and we were unable to recover it. 00:36:02.774 [2024-11-02 14:51:54.677676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.774 [2024-11-02 14:51:54.677701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.774 qpair failed and we were unable to recover it. 00:36:02.774 [2024-11-02 14:51:54.677879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.774 [2024-11-02 14:51:54.677906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.774 qpair failed and we were unable to recover it. 00:36:02.774 [2024-11-02 14:51:54.678057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.774 [2024-11-02 14:51:54.678083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.774 qpair failed and we were unable to recover it. 00:36:02.774 [2024-11-02 14:51:54.678269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.774 [2024-11-02 14:51:54.678296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.774 qpair failed and we were unable to recover it. 00:36:02.774 [2024-11-02 14:51:54.678445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.774 [2024-11-02 14:51:54.678470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.774 qpair failed and we were unable to recover it. 00:36:02.774 [2024-11-02 14:51:54.678597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.774 [2024-11-02 14:51:54.678623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.774 qpair failed and we were unable to recover it. 00:36:02.774 [2024-11-02 14:51:54.678772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.774 [2024-11-02 14:51:54.678798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.774 qpair failed and we were unable to recover it. 00:36:02.774 [2024-11-02 14:51:54.678943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.774 [2024-11-02 14:51:54.678968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.774 qpair failed and we were unable to recover it. 00:36:02.774 [2024-11-02 14:51:54.679088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.774 [2024-11-02 14:51:54.679114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.774 qpair failed and we were unable to recover it. 
00:36:02.774 [2024-11-02 14:51:54.679240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.774 [2024-11-02 14:51:54.679273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.775 qpair failed and we were unable to recover it. 00:36:02.775 [2024-11-02 14:51:54.679447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.775 [2024-11-02 14:51:54.679471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.775 qpair failed and we were unable to recover it. 00:36:02.775 [2024-11-02 14:51:54.679625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.775 [2024-11-02 14:51:54.679652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.775 qpair failed and we were unable to recover it. 00:36:02.775 [2024-11-02 14:51:54.679804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.775 [2024-11-02 14:51:54.679830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.775 qpair failed and we were unable to recover it. 00:36:02.775 [2024-11-02 14:51:54.679957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.775 [2024-11-02 14:51:54.679982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.775 qpair failed and we were unable to recover it. 00:36:02.775 [2024-11-02 14:51:54.680140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.775 [2024-11-02 14:51:54.680165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.775 qpair failed and we were unable to recover it. 00:36:02.775 [2024-11-02 14:51:54.680325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.775 [2024-11-02 14:51:54.680353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.775 qpair failed and we were unable to recover it. 00:36:02.775 [2024-11-02 14:51:54.680473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.775 [2024-11-02 14:51:54.680498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.775 qpair failed and we were unable to recover it. 00:36:02.775 [2024-11-02 14:51:54.680676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.775 [2024-11-02 14:51:54.680702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.775 qpair failed and we were unable to recover it. 00:36:02.775 [2024-11-02 14:51:54.680852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.775 [2024-11-02 14:51:54.680877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.775 qpair failed and we were unable to recover it. 
00:36:02.775 [2024-11-02 14:51:54.681033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.775 [2024-11-02 14:51:54.681058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.775 qpair failed and we were unable to recover it. 00:36:02.775 [2024-11-02 14:51:54.681231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.775 [2024-11-02 14:51:54.681262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.775 qpair failed and we were unable to recover it. 00:36:02.775 [2024-11-02 14:51:54.681384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.775 [2024-11-02 14:51:54.681411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.775 qpair failed and we were unable to recover it. 00:36:02.775 [2024-11-02 14:51:54.681586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.775 [2024-11-02 14:51:54.681615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.775 qpair failed and we were unable to recover it. 00:36:02.775 [2024-11-02 14:51:54.681767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.775 [2024-11-02 14:51:54.681793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.775 qpair failed and we were unable to recover it. 00:36:02.775 [2024-11-02 14:51:54.681940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.775 [2024-11-02 14:51:54.681965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.775 qpair failed and we were unable to recover it. 00:36:02.775 [2024-11-02 14:51:54.682126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.775 [2024-11-02 14:51:54.682152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.775 qpair failed and we were unable to recover it. 00:36:02.775 [2024-11-02 14:51:54.682301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.775 [2024-11-02 14:51:54.682327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.775 qpair failed and we were unable to recover it. 00:36:02.775 [2024-11-02 14:51:54.682473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.775 [2024-11-02 14:51:54.682497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.775 qpair failed and we were unable to recover it. 00:36:02.775 [2024-11-02 14:51:54.682621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.775 [2024-11-02 14:51:54.682647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.775 qpair failed and we were unable to recover it. 
00:36:02.775 [2024-11-02 14:51:54.682797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.775 [2024-11-02 14:51:54.682823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.775 qpair failed and we were unable to recover it. 00:36:02.775 [2024-11-02 14:51:54.682975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.775 [2024-11-02 14:51:54.683002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.775 qpair failed and we were unable to recover it. 00:36:02.775 [2024-11-02 14:51:54.683149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.775 [2024-11-02 14:51:54.683174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.775 qpair failed and we were unable to recover it. 00:36:02.775 [2024-11-02 14:51:54.683328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.775 [2024-11-02 14:51:54.683354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.775 qpair failed and we were unable to recover it. 00:36:02.775 [2024-11-02 14:51:54.683526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.775 [2024-11-02 14:51:54.683551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.775 qpair failed and we were unable to recover it. 00:36:02.775 [2024-11-02 14:51:54.683676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.775 [2024-11-02 14:51:54.683701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.775 qpair failed and we were unable to recover it. 00:36:02.775 [2024-11-02 14:51:54.683828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.775 [2024-11-02 14:51:54.683854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.775 qpair failed and we were unable to recover it. 00:36:02.775 [2024-11-02 14:51:54.684006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.775 [2024-11-02 14:51:54.684033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.775 qpair failed and we were unable to recover it. 00:36:02.775 [2024-11-02 14:51:54.684183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.775 [2024-11-02 14:51:54.684209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.775 qpair failed and we were unable to recover it. 00:36:02.775 [2024-11-02 14:51:54.684363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.775 [2024-11-02 14:51:54.684389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.775 qpair failed and we were unable to recover it. 
00:36:02.775 [2024-11-02 14:51:54.684538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.775 [2024-11-02 14:51:54.684565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.775 qpair failed and we were unable to recover it. 00:36:02.775 [2024-11-02 14:51:54.684715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.775 [2024-11-02 14:51:54.684741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.775 qpair failed and we were unable to recover it. 00:36:02.775 [2024-11-02 14:51:54.684900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.775 [2024-11-02 14:51:54.684924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.775 qpair failed and we were unable to recover it. 00:36:02.775 [2024-11-02 14:51:54.685074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.775 [2024-11-02 14:51:54.685099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.775 qpair failed and we were unable to recover it. 00:36:02.775 [2024-11-02 14:51:54.685249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.775 [2024-11-02 14:51:54.685296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.775 qpair failed and we were unable to recover it. 00:36:02.775 [2024-11-02 14:51:54.685443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.775 [2024-11-02 14:51:54.685470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.775 qpair failed and we were unable to recover it. 00:36:02.775 [2024-11-02 14:51:54.685618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.775 [2024-11-02 14:51:54.685644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.775 qpair failed and we were unable to recover it. 00:36:02.775 [2024-11-02 14:51:54.685793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.775 [2024-11-02 14:51:54.685818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.775 qpair failed and we were unable to recover it. 00:36:02.775 [2024-11-02 14:51:54.685971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.775 [2024-11-02 14:51:54.685996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.776 qpair failed and we were unable to recover it. 00:36:02.776 [2024-11-02 14:51:54.686148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.776 [2024-11-02 14:51:54.686174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.776 qpair failed and we were unable to recover it. 
00:36:02.776 [2024-11-02 14:51:54.686296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.776 [2024-11-02 14:51:54.686323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.776 qpair failed and we were unable to recover it. 00:36:02.776 [2024-11-02 14:51:54.686472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.776 [2024-11-02 14:51:54.686498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.776 qpair failed and we were unable to recover it. 00:36:02.776 [2024-11-02 14:51:54.686619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.776 [2024-11-02 14:51:54.686646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.776 qpair failed and we were unable to recover it. 00:36:02.776 [2024-11-02 14:51:54.686799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.776 [2024-11-02 14:51:54.686824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.776 qpair failed and we were unable to recover it. 00:36:02.776 [2024-11-02 14:51:54.686947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.776 [2024-11-02 14:51:54.686972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.776 qpair failed and we were unable to recover it. 00:36:02.776 [2024-11-02 14:51:54.687105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.776 [2024-11-02 14:51:54.687132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.776 qpair failed and we were unable to recover it. 00:36:02.776 [2024-11-02 14:51:54.687267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.776 [2024-11-02 14:51:54.687293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.776 qpair failed and we were unable to recover it. 00:36:02.776 [2024-11-02 14:51:54.687465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.776 [2024-11-02 14:51:54.687490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.776 qpair failed and we were unable to recover it. 00:36:02.776 [2024-11-02 14:51:54.687664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.776 [2024-11-02 14:51:54.687690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.776 qpair failed and we were unable to recover it. 00:36:02.776 [2024-11-02 14:51:54.687861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.776 [2024-11-02 14:51:54.687887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:02.776 qpair failed and we were unable to recover it. 
00:36:02.776 [2024-11-02 14:51:54.688074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.776 [2024-11-02 14:51:54.688100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420
00:36:02.776 qpair failed and we were unable to recover it.
[... the same three-line connect() failure (posix_sock_create errno = 111, then nvme_tcp_qpair_connect_sock against tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats for every reconnect attempt logged between 14:51:54.688273 and 14:51:54.700100 ...]
00:36:02.778 [2024-11-02 14:51:54.700286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.778 [2024-11-02 14:51:54.700327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420
00:36:02.778 qpair failed and we were unable to recover it.
[... the same connect() failure triplet (errno = 111, addr=10.0.0.2, port=4420), now against tqpair=0x648340, repeats for every reconnect attempt logged between 14:51:54.700508 and 14:51:54.724118; each attempt ends with "qpair failed and we were unable to recover it." ...]
00:36:02.781 [2024-11-02 14:51:54.724267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.781 [2024-11-02 14:51:54.724293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.781 qpair failed and we were unable to recover it. 00:36:02.781 [2024-11-02 14:51:54.724448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.781 [2024-11-02 14:51:54.724475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.781 qpair failed and we were unable to recover it. 00:36:02.781 [2024-11-02 14:51:54.724651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.781 [2024-11-02 14:51:54.724676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.781 qpair failed and we were unable to recover it. 00:36:02.781 [2024-11-02 14:51:54.724798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.782 [2024-11-02 14:51:54.724823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.782 qpair failed and we were unable to recover it. 00:36:02.782 [2024-11-02 14:51:54.724971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.782 [2024-11-02 14:51:54.724997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.782 qpair failed and we were unable to recover it. 00:36:02.782 [2024-11-02 14:51:54.725168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.782 [2024-11-02 14:51:54.725193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.782 qpair failed and we were unable to recover it. 00:36:02.782 [2024-11-02 14:51:54.725318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.782 [2024-11-02 14:51:54.725345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.782 qpair failed and we were unable to recover it. 00:36:02.782 [2024-11-02 14:51:54.725496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.782 [2024-11-02 14:51:54.725522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.782 qpair failed and we were unable to recover it. 00:36:02.782 [2024-11-02 14:51:54.725673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.782 [2024-11-02 14:51:54.725698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.782 qpair failed and we were unable to recover it. 00:36:02.782 [2024-11-02 14:51:54.725847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.782 [2024-11-02 14:51:54.725873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.782 qpair failed and we were unable to recover it. 
00:36:02.782 [2024-11-02 14:51:54.726025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.782 [2024-11-02 14:51:54.726052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.782 qpair failed and we were unable to recover it. 00:36:02.782 [2024-11-02 14:51:54.726174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.782 [2024-11-02 14:51:54.726200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.782 qpair failed and we were unable to recover it. 00:36:02.782 [2024-11-02 14:51:54.726351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.782 [2024-11-02 14:51:54.726377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.782 qpair failed and we were unable to recover it. 00:36:02.782 [2024-11-02 14:51:54.726496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.782 [2024-11-02 14:51:54.726521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.782 qpair failed and we were unable to recover it. 00:36:02.782 [2024-11-02 14:51:54.726666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.782 [2024-11-02 14:51:54.726691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.782 qpair failed and we were unable to recover it. 00:36:02.782 [2024-11-02 14:51:54.726807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.782 [2024-11-02 14:51:54.726833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.782 qpair failed and we were unable to recover it. 00:36:02.782 [2024-11-02 14:51:54.726982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.782 [2024-11-02 14:51:54.727008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.782 qpair failed and we were unable to recover it. 00:36:02.782 [2024-11-02 14:51:54.727154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.782 [2024-11-02 14:51:54.727179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.782 qpair failed and we were unable to recover it. 00:36:02.782 [2024-11-02 14:51:54.727383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.782 [2024-11-02 14:51:54.727410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.782 qpair failed and we were unable to recover it. 00:36:02.782 [2024-11-02 14:51:54.727561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.782 [2024-11-02 14:51:54.727587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.782 qpair failed and we were unable to recover it. 
00:36:02.782 [2024-11-02 14:51:54.727712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.782 [2024-11-02 14:51:54.727737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.782 qpair failed and we were unable to recover it. 00:36:02.782 [2024-11-02 14:51:54.727910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.782 [2024-11-02 14:51:54.727936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.782 qpair failed and we were unable to recover it. 00:36:02.782 [2024-11-02 14:51:54.728087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.782 [2024-11-02 14:51:54.728112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.782 qpair failed and we were unable to recover it. 00:36:02.782 [2024-11-02 14:51:54.728265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.782 [2024-11-02 14:51:54.728295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.782 qpair failed and we were unable to recover it. 00:36:02.782 [2024-11-02 14:51:54.728412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.782 [2024-11-02 14:51:54.728438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.782 qpair failed and we were unable to recover it. 00:36:02.782 [2024-11-02 14:51:54.728562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.782 [2024-11-02 14:51:54.728587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.782 qpair failed and we were unable to recover it. 00:36:02.782 [2024-11-02 14:51:54.728732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.782 [2024-11-02 14:51:54.728757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.782 qpair failed and we were unable to recover it. 00:36:02.782 [2024-11-02 14:51:54.728906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.782 [2024-11-02 14:51:54.728932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.782 qpair failed and we were unable to recover it. 00:36:02.782 [2024-11-02 14:51:54.729079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.782 [2024-11-02 14:51:54.729104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.782 qpair failed and we were unable to recover it. 00:36:02.782 [2024-11-02 14:51:54.729249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.782 [2024-11-02 14:51:54.729281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.782 qpair failed and we were unable to recover it. 
00:36:02.782 [2024-11-02 14:51:54.729434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.782 [2024-11-02 14:51:54.729460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.782 qpair failed and we were unable to recover it. 00:36:02.782 [2024-11-02 14:51:54.729589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.782 [2024-11-02 14:51:54.729614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.782 qpair failed and we were unable to recover it. 00:36:02.782 [2024-11-02 14:51:54.729732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.782 [2024-11-02 14:51:54.729756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.783 qpair failed and we were unable to recover it. 00:36:02.783 [2024-11-02 14:51:54.729899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.783 [2024-11-02 14:51:54.729924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.783 qpair failed and we were unable to recover it. 00:36:02.783 [2024-11-02 14:51:54.730046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.783 [2024-11-02 14:51:54.730072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.783 qpair failed and we were unable to recover it. 00:36:02.783 [2024-11-02 14:51:54.730219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.783 [2024-11-02 14:51:54.730246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.783 qpair failed and we were unable to recover it. 00:36:02.783 [2024-11-02 14:51:54.730405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.783 [2024-11-02 14:51:54.730430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.783 qpair failed and we were unable to recover it. 00:36:02.783 [2024-11-02 14:51:54.730583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.783 [2024-11-02 14:51:54.730608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.783 qpair failed and we were unable to recover it. 00:36:02.783 [2024-11-02 14:51:54.730759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.783 [2024-11-02 14:51:54.730784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.783 qpair failed and we were unable to recover it. 00:36:02.783 [2024-11-02 14:51:54.730900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.783 [2024-11-02 14:51:54.730925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.783 qpair failed and we were unable to recover it. 
00:36:02.783 [2024-11-02 14:51:54.731050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.783 [2024-11-02 14:51:54.731075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.783 qpair failed and we were unable to recover it. 00:36:02.783 [2024-11-02 14:51:54.731227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.783 [2024-11-02 14:51:54.731252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.783 qpair failed and we were unable to recover it. 00:36:02.783 [2024-11-02 14:51:54.731413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.783 [2024-11-02 14:51:54.731439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.783 qpair failed and we were unable to recover it. 00:36:02.783 [2024-11-02 14:51:54.731587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.783 [2024-11-02 14:51:54.731612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.783 qpair failed and we were unable to recover it. 00:36:02.783 [2024-11-02 14:51:54.731756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.783 [2024-11-02 14:51:54.731780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.783 qpair failed and we were unable to recover it. 00:36:02.783 [2024-11-02 14:51:54.731892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.783 [2024-11-02 14:51:54.731917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.783 qpair failed and we were unable to recover it. 00:36:02.783 [2024-11-02 14:51:54.732092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.783 [2024-11-02 14:51:54.732117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.783 qpair failed and we were unable to recover it. 00:36:02.783 [2024-11-02 14:51:54.732236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.783 [2024-11-02 14:51:54.732266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.783 qpair failed and we were unable to recover it. 00:36:02.783 [2024-11-02 14:51:54.732415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.783 [2024-11-02 14:51:54.732439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.783 qpair failed and we were unable to recover it. 00:36:02.783 [2024-11-02 14:51:54.732613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.783 [2024-11-02 14:51:54.732637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.783 qpair failed and we were unable to recover it. 
00:36:02.783 [2024-11-02 14:51:54.732754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.783 [2024-11-02 14:51:54.732779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.783 qpair failed and we were unable to recover it. 00:36:02.783 [2024-11-02 14:51:54.732930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.783 [2024-11-02 14:51:54.732956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.783 qpair failed and we were unable to recover it. 00:36:02.783 [2024-11-02 14:51:54.733128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.783 [2024-11-02 14:51:54.733153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.783 qpair failed and we were unable to recover it. 00:36:02.783 [2024-11-02 14:51:54.733306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.783 [2024-11-02 14:51:54.733332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.783 qpair failed and we were unable to recover it. 00:36:02.783 [2024-11-02 14:51:54.733466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.783 [2024-11-02 14:51:54.733492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.783 qpair failed and we were unable to recover it. 00:36:02.783 [2024-11-02 14:51:54.733617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.783 [2024-11-02 14:51:54.733641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.783 qpair failed and we were unable to recover it. 00:36:02.783 [2024-11-02 14:51:54.733790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.783 [2024-11-02 14:51:54.733815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.783 qpair failed and we were unable to recover it. 00:36:02.783 [2024-11-02 14:51:54.733968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.783 [2024-11-02 14:51:54.733993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.783 qpair failed and we were unable to recover it. 00:36:02.783 [2024-11-02 14:51:54.734161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.783 [2024-11-02 14:51:54.734186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.783 qpair failed and we were unable to recover it. 00:36:02.783 [2024-11-02 14:51:54.734359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.783 [2024-11-02 14:51:54.734385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.783 qpair failed and we were unable to recover it. 
00:36:02.783 [2024-11-02 14:51:54.734537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.783 [2024-11-02 14:51:54.734563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.783 qpair failed and we were unable to recover it. 00:36:02.783 [2024-11-02 14:51:54.734690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.783 [2024-11-02 14:51:54.734715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.783 qpair failed and we were unable to recover it. 00:36:02.783 [2024-11-02 14:51:54.734868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.783 [2024-11-02 14:51:54.734894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.783 qpair failed and we were unable to recover it. 00:36:02.783 [2024-11-02 14:51:54.735043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.783 [2024-11-02 14:51:54.735069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.783 qpair failed and we were unable to recover it. 00:36:02.783 [2024-11-02 14:51:54.735243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.783 [2024-11-02 14:51:54.735278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.783 qpair failed and we were unable to recover it. 00:36:02.783 [2024-11-02 14:51:54.735429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.783 [2024-11-02 14:51:54.735456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.783 qpair failed and we were unable to recover it. 00:36:02.783 [2024-11-02 14:51:54.735608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.783 [2024-11-02 14:51:54.735633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.783 qpair failed and we were unable to recover it. 00:36:02.783 [2024-11-02 14:51:54.735780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.783 [2024-11-02 14:51:54.735806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.783 qpair failed and we were unable to recover it. 00:36:02.783 [2024-11-02 14:51:54.735933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.783 [2024-11-02 14:51:54.735959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.783 qpair failed and we were unable to recover it. 00:36:02.783 [2024-11-02 14:51:54.736113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.783 [2024-11-02 14:51:54.736138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.783 qpair failed and we were unable to recover it. 
00:36:02.783 [2024-11-02 14:51:54.736252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.783 [2024-11-02 14:51:54.736282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.783 qpair failed and we were unable to recover it. 00:36:02.783 [2024-11-02 14:51:54.736407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.783 [2024-11-02 14:51:54.736432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.783 qpair failed and we were unable to recover it. 00:36:02.783 [2024-11-02 14:51:54.736603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.783 [2024-11-02 14:51:54.736629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.783 qpair failed and we were unable to recover it. 00:36:02.783 [2024-11-02 14:51:54.736775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.783 [2024-11-02 14:51:54.736801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.783 qpair failed and we were unable to recover it. 00:36:02.783 [2024-11-02 14:51:54.736953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.784 [2024-11-02 14:51:54.736978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.784 qpair failed and we were unable to recover it. 00:36:02.784 [2024-11-02 14:51:54.737129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.784 [2024-11-02 14:51:54.737155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.784 qpair failed and we were unable to recover it. 00:36:02.784 [2024-11-02 14:51:54.737301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.784 [2024-11-02 14:51:54.737327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.784 qpair failed and we were unable to recover it. 00:36:02.784 [2024-11-02 14:51:54.737448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.784 [2024-11-02 14:51:54.737474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.784 qpair failed and we were unable to recover it. 00:36:02.784 [2024-11-02 14:51:54.737633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.784 [2024-11-02 14:51:54.737659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.784 qpair failed and we were unable to recover it. 00:36:02.784 [2024-11-02 14:51:54.737834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.784 [2024-11-02 14:51:54.737860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.784 qpair failed and we were unable to recover it. 
00:36:02.784 [2024-11-02 14:51:54.738010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.784 [2024-11-02 14:51:54.738036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.784 qpair failed and we were unable to recover it. 00:36:02.784 [2024-11-02 14:51:54.738196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.784 [2024-11-02 14:51:54.738223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.784 qpair failed and we were unable to recover it. 00:36:02.784 [2024-11-02 14:51:54.738379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.784 [2024-11-02 14:51:54.738405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.784 qpair failed and we were unable to recover it. 00:36:02.784 [2024-11-02 14:51:54.738567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.784 [2024-11-02 14:51:54.738592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.784 qpair failed and we were unable to recover it. 00:36:02.784 [2024-11-02 14:51:54.738743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.784 [2024-11-02 14:51:54.738769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.784 qpair failed and we were unable to recover it. 00:36:02.784 [2024-11-02 14:51:54.738916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.784 [2024-11-02 14:51:54.738943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.784 qpair failed and we were unable to recover it. 00:36:02.784 [2024-11-02 14:51:54.739099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.784 [2024-11-02 14:51:54.739125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.784 qpair failed and we were unable to recover it. 00:36:02.784 [2024-11-02 14:51:54.739278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.784 [2024-11-02 14:51:54.739305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.784 qpair failed and we were unable to recover it. 00:36:02.784 [2024-11-02 14:51:54.739452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.784 [2024-11-02 14:51:54.739478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.784 qpair failed and we were unable to recover it. 00:36:02.784 [2024-11-02 14:51:54.739622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.784 [2024-11-02 14:51:54.739648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.784 qpair failed and we were unable to recover it. 
00:36:02.784 [2024-11-02 14:51:54.739802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.784 [2024-11-02 14:51:54.739827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.784 qpair failed and we were unable to recover it. 00:36:02.784 [2024-11-02 14:51:54.739971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.784 [2024-11-02 14:51:54.740003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.784 qpair failed and we were unable to recover it. 00:36:02.784 [2024-11-02 14:51:54.740180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.784 [2024-11-02 14:51:54.740206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.784 qpair failed and we were unable to recover it. 00:36:02.784 [2024-11-02 14:51:54.740330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.784 [2024-11-02 14:51:54.740355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.784 qpair failed and we were unable to recover it. 00:36:02.784 [2024-11-02 14:51:54.740475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.784 [2024-11-02 14:51:54.740501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.784 qpair failed and we were unable to recover it. 00:36:02.784 [2024-11-02 14:51:54.740649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.784 [2024-11-02 14:51:54.740675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.784 qpair failed and we were unable to recover it. 00:36:02.784 [2024-11-02 14:51:54.740826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.784 [2024-11-02 14:51:54.740852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.784 qpair failed and we were unable to recover it. 00:36:02.784 [2024-11-02 14:51:54.740974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.784 [2024-11-02 14:51:54.741001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.784 qpair failed and we were unable to recover it. 00:36:02.784 [2024-11-02 14:51:54.741176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.784 [2024-11-02 14:51:54.741203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.784 qpair failed and we were unable to recover it. 00:36:02.784 [2024-11-02 14:51:54.741346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.784 [2024-11-02 14:51:54.741372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.784 qpair failed and we were unable to recover it. 
00:36:02.784 [2024-11-02 14:51:54.741520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.784 [2024-11-02 14:51:54.741546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.784 qpair failed and we were unable to recover it. 00:36:02.784 [2024-11-02 14:51:54.741690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.784 [2024-11-02 14:51:54.741716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.784 qpair failed and we were unable to recover it. 00:36:02.784 [2024-11-02 14:51:54.741894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.784 [2024-11-02 14:51:54.741919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.784 qpair failed and we were unable to recover it. 00:36:02.784 [2024-11-02 14:51:54.742068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.784 [2024-11-02 14:51:54.742093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.784 qpair failed and we were unable to recover it. 00:36:02.784 [2024-11-02 14:51:54.742259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.784 [2024-11-02 14:51:54.742286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.784 qpair failed and we were unable to recover it. 00:36:02.784 [2024-11-02 14:51:54.742439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.784 [2024-11-02 14:51:54.742466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.784 qpair failed and we were unable to recover it. 00:36:02.784 [2024-11-02 14:51:54.742616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.784 [2024-11-02 14:51:54.742642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.784 qpair failed and we were unable to recover it. 00:36:02.784 [2024-11-02 14:51:54.742785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.784 [2024-11-02 14:51:54.742810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.784 qpair failed and we were unable to recover it. 00:36:02.784 [2024-11-02 14:51:54.742959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.784 [2024-11-02 14:51:54.742984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.784 qpair failed and we were unable to recover it. 00:36:02.784 [2024-11-02 14:51:54.743104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.784 [2024-11-02 14:51:54.743130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.784 qpair failed and we were unable to recover it. 
00:36:02.784 [2024-11-02 14:51:54.743294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.784 [2024-11-02 14:51:54.743320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.784 qpair failed and we were unable to recover it. 00:36:02.784 [2024-11-02 14:51:54.743489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.784 [2024-11-02 14:51:54.743514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.784 qpair failed and we were unable to recover it. 00:36:02.784 [2024-11-02 14:51:54.743658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.784 [2024-11-02 14:51:54.743684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.784 qpair failed and we were unable to recover it. 00:36:02.784 [2024-11-02 14:51:54.743809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.784 [2024-11-02 14:51:54.743834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.784 qpair failed and we were unable to recover it. 00:36:02.784 [2024-11-02 14:51:54.743983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.784 [2024-11-02 14:51:54.744009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.784 qpair failed and we were unable to recover it. 00:36:02.784 [2024-11-02 14:51:54.744164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.784 [2024-11-02 14:51:54.744191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.784 qpair failed and we were unable to recover it. 00:36:02.784 [2024-11-02 14:51:54.744342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.785 [2024-11-02 14:51:54.744368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.785 qpair failed and we were unable to recover it. 00:36:02.785 [2024-11-02 14:51:54.744483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.785 [2024-11-02 14:51:54.744508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.785 qpair failed and we were unable to recover it. 00:36:02.785 [2024-11-02 14:51:54.744663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.785 [2024-11-02 14:51:54.744689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.785 qpair failed and we were unable to recover it. 00:36:02.785 [2024-11-02 14:51:54.744845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.785 [2024-11-02 14:51:54.744871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.785 qpair failed and we were unable to recover it. 
00:36:02.785 [2024-11-02 14:51:54.745019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.785 [2024-11-02 14:51:54.745043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.785 qpair failed and we were unable to recover it. 00:36:02.785 [2024-11-02 14:51:54.745169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.785 [2024-11-02 14:51:54.745195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.785 qpair failed and we were unable to recover it. 00:36:02.785 [2024-11-02 14:51:54.745348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.785 [2024-11-02 14:51:54.745374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.785 qpair failed and we were unable to recover it. 00:36:02.785 [2024-11-02 14:51:54.745528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.785 [2024-11-02 14:51:54.745553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.785 qpair failed and we were unable to recover it. 00:36:02.785 [2024-11-02 14:51:54.745703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.785 [2024-11-02 14:51:54.745729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.785 qpair failed and we were unable to recover it. 00:36:02.785 [2024-11-02 14:51:54.745879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.785 [2024-11-02 14:51:54.745905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.785 qpair failed and we were unable to recover it. 00:36:02.785 [2024-11-02 14:51:54.746052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.785 [2024-11-02 14:51:54.746078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.785 qpair failed and we were unable to recover it. 00:36:02.785 [2024-11-02 14:51:54.746252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.785 [2024-11-02 14:51:54.746289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.785 qpair failed and we were unable to recover it. 00:36:02.785 [2024-11-02 14:51:54.746416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.785 [2024-11-02 14:51:54.746442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.785 qpair failed and we were unable to recover it. 00:36:02.785 [2024-11-02 14:51:54.746609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.785 [2024-11-02 14:51:54.746634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:02.785 qpair failed and we were unable to recover it. 
00:36:02.785 [2024-11-02 14:51:54.746824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.785 [2024-11-02 14:51:54.746849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420
00:36:02.785 qpair failed and we were unable to recover it.
00:36:03.089 (the three messages above repeat for every connection attempt logged between 14:51:54.746824 and 14:51:54.783786: connect() fails with errno = 111 each time, and the unrecoverable qpair is reported as tqpair=0x648340 and, for the attempts logged between 14:51:54.765913 and 14:51:54.777230, as tqpair=0x7f54c8000b90, always with addr=10.0.0.2, port=4420)
00:36:03.089 [2024-11-02 14:51:54.783954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.089 [2024-11-02 14:51:54.783979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.089 qpair failed and we were unable to recover it. 00:36:03.089 [2024-11-02 14:51:54.784104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.089 [2024-11-02 14:51:54.784130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.089 qpair failed and we were unable to recover it. 00:36:03.089 [2024-11-02 14:51:54.784282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.089 [2024-11-02 14:51:54.784320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.089 qpair failed and we were unable to recover it. 00:36:03.089 [2024-11-02 14:51:54.784472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.089 [2024-11-02 14:51:54.784497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.089 qpair failed and we were unable to recover it. 00:36:03.089 [2024-11-02 14:51:54.784642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.089 [2024-11-02 14:51:54.784667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.089 qpair failed and we were unable to recover it. 00:36:03.089 [2024-11-02 14:51:54.784828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.089 [2024-11-02 14:51:54.784853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.089 qpair failed and we were unable to recover it. 00:36:03.089 [2024-11-02 14:51:54.784978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.089 [2024-11-02 14:51:54.785004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.089 qpair failed and we were unable to recover it. 00:36:03.089 [2024-11-02 14:51:54.785193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.089 [2024-11-02 14:51:54.785220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.089 qpair failed and we were unable to recover it. 00:36:03.089 [2024-11-02 14:51:54.785374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.089 [2024-11-02 14:51:54.785400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.089 qpair failed and we were unable to recover it. 00:36:03.089 [2024-11-02 14:51:54.785536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.089 [2024-11-02 14:51:54.785561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.089 qpair failed and we were unable to recover it. 
00:36:03.089 [2024-11-02 14:51:54.785710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.089 [2024-11-02 14:51:54.785736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.089 qpair failed and we were unable to recover it. 00:36:03.089 [2024-11-02 14:51:54.785926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.089 [2024-11-02 14:51:54.785952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.089 qpair failed and we were unable to recover it. 00:36:03.089 [2024-11-02 14:51:54.786105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.089 [2024-11-02 14:51:54.786130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.089 qpair failed and we were unable to recover it. 00:36:03.089 [2024-11-02 14:51:54.786279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.089 [2024-11-02 14:51:54.786306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.089 qpair failed and we were unable to recover it. 00:36:03.089 [2024-11-02 14:51:54.786432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.089 [2024-11-02 14:51:54.786459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.089 qpair failed and we were unable to recover it. 00:36:03.089 [2024-11-02 14:51:54.786582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.089 [2024-11-02 14:51:54.786608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.089 qpair failed and we were unable to recover it. 00:36:03.089 [2024-11-02 14:51:54.786760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.089 [2024-11-02 14:51:54.786784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.089 qpair failed and we were unable to recover it. 00:36:03.089 [2024-11-02 14:51:54.786914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.089 [2024-11-02 14:51:54.786939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.089 qpair failed and we were unable to recover it. 00:36:03.089 [2024-11-02 14:51:54.787063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.089 [2024-11-02 14:51:54.787090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.089 qpair failed and we were unable to recover it. 00:36:03.089 [2024-11-02 14:51:54.787269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.089 [2024-11-02 14:51:54.787295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.089 qpair failed and we were unable to recover it. 
00:36:03.089 [2024-11-02 14:51:54.787422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.089 [2024-11-02 14:51:54.787448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.089 qpair failed and we were unable to recover it. 00:36:03.089 [2024-11-02 14:51:54.787593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.089 [2024-11-02 14:51:54.787620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.089 qpair failed and we were unable to recover it. 00:36:03.089 [2024-11-02 14:51:54.787764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.089 [2024-11-02 14:51:54.787797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.089 qpair failed and we were unable to recover it. 00:36:03.089 [2024-11-02 14:51:54.787925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.089 [2024-11-02 14:51:54.787951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.089 qpair failed and we were unable to recover it. 00:36:03.089 [2024-11-02 14:51:54.788124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.089 [2024-11-02 14:51:54.788150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.089 qpair failed and we were unable to recover it. 00:36:03.089 [2024-11-02 14:51:54.788273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.089 [2024-11-02 14:51:54.788300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.089 qpair failed and we were unable to recover it. 00:36:03.089 [2024-11-02 14:51:54.788456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.089 [2024-11-02 14:51:54.788482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.089 qpair failed and we were unable to recover it. 00:36:03.089 [2024-11-02 14:51:54.788608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.089 [2024-11-02 14:51:54.788634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.089 qpair failed and we were unable to recover it. 00:36:03.089 [2024-11-02 14:51:54.788794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.089 [2024-11-02 14:51:54.788819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.089 qpair failed and we were unable to recover it. 00:36:03.089 [2024-11-02 14:51:54.788967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.089 [2024-11-02 14:51:54.788993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.089 qpair failed and we were unable to recover it. 
00:36:03.089 [2024-11-02 14:51:54.789146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.089 [2024-11-02 14:51:54.789172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.089 qpair failed and we were unable to recover it. 00:36:03.089 [2024-11-02 14:51:54.789323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.089 [2024-11-02 14:51:54.789350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.089 qpair failed and we were unable to recover it. 00:36:03.089 [2024-11-02 14:51:54.789501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.089 [2024-11-02 14:51:54.789526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.089 qpair failed and we were unable to recover it. 00:36:03.089 [2024-11-02 14:51:54.789701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.089 [2024-11-02 14:51:54.789727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.089 qpair failed and we were unable to recover it. 00:36:03.089 [2024-11-02 14:51:54.789845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.089 [2024-11-02 14:51:54.789872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.089 qpair failed and we were unable to recover it. 00:36:03.089 [2024-11-02 14:51:54.790020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.089 [2024-11-02 14:51:54.790047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.089 qpair failed and we were unable to recover it. 00:36:03.090 [2024-11-02 14:51:54.790170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.090 [2024-11-02 14:51:54.790196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.090 qpair failed and we were unable to recover it. 00:36:03.090 [2024-11-02 14:51:54.790329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.090 [2024-11-02 14:51:54.790356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.090 qpair failed and we were unable to recover it. 00:36:03.090 [2024-11-02 14:51:54.790474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.090 [2024-11-02 14:51:54.790501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.090 qpair failed and we were unable to recover it. 00:36:03.090 [2024-11-02 14:51:54.790684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.090 [2024-11-02 14:51:54.790710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.090 qpair failed and we were unable to recover it. 
00:36:03.090 [2024-11-02 14:51:54.790862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.090 [2024-11-02 14:51:54.790888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.090 qpair failed and we were unable to recover it. 00:36:03.090 [2024-11-02 14:51:54.791048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.090 [2024-11-02 14:51:54.791074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.090 qpair failed and we were unable to recover it. 00:36:03.090 [2024-11-02 14:51:54.791201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.090 [2024-11-02 14:51:54.791227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.090 qpair failed and we were unable to recover it. 00:36:03.090 [2024-11-02 14:51:54.791415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.090 [2024-11-02 14:51:54.791441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.090 qpair failed and we were unable to recover it. 00:36:03.090 [2024-11-02 14:51:54.791591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.090 [2024-11-02 14:51:54.791616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.090 qpair failed and we were unable to recover it. 00:36:03.090 [2024-11-02 14:51:54.791760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.090 [2024-11-02 14:51:54.791786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.090 qpair failed and we were unable to recover it. 00:36:03.090 [2024-11-02 14:51:54.791939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.090 [2024-11-02 14:51:54.791965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.090 qpair failed and we were unable to recover it. 00:36:03.090 [2024-11-02 14:51:54.792111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.090 [2024-11-02 14:51:54.792136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.090 qpair failed and we were unable to recover it. 00:36:03.090 [2024-11-02 14:51:54.792249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.090 [2024-11-02 14:51:54.792283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.090 qpair failed and we were unable to recover it. 00:36:03.090 [2024-11-02 14:51:54.792428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.090 [2024-11-02 14:51:54.792458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.090 qpair failed and we were unable to recover it. 
00:36:03.090 [2024-11-02 14:51:54.792612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.090 [2024-11-02 14:51:54.792637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.090 qpair failed and we were unable to recover it. 00:36:03.090 [2024-11-02 14:51:54.792765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.090 [2024-11-02 14:51:54.792791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.090 qpair failed and we were unable to recover it. 00:36:03.090 [2024-11-02 14:51:54.792939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.090 [2024-11-02 14:51:54.792965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.090 qpair failed and we were unable to recover it. 00:36:03.090 [2024-11-02 14:51:54.793110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.090 [2024-11-02 14:51:54.793136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.090 qpair failed and we were unable to recover it. 00:36:03.090 [2024-11-02 14:51:54.793280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.090 [2024-11-02 14:51:54.793306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.090 qpair failed and we were unable to recover it. 00:36:03.090 [2024-11-02 14:51:54.793458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.090 [2024-11-02 14:51:54.793484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.090 qpair failed and we were unable to recover it. 00:36:03.090 [2024-11-02 14:51:54.793612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.090 [2024-11-02 14:51:54.793638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.090 qpair failed and we were unable to recover it. 00:36:03.090 [2024-11-02 14:51:54.793788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.090 [2024-11-02 14:51:54.793814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.090 qpair failed and we were unable to recover it. 00:36:03.091 [2024-11-02 14:51:54.793936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.091 [2024-11-02 14:51:54.793962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.091 qpair failed and we were unable to recover it. 00:36:03.091 [2024-11-02 14:51:54.794106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.091 [2024-11-02 14:51:54.794132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.091 qpair failed and we were unable to recover it. 
00:36:03.091 [2024-11-02 14:51:54.794282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.091 [2024-11-02 14:51:54.794309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.091 qpair failed and we were unable to recover it. 00:36:03.091 [2024-11-02 14:51:54.794437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.091 [2024-11-02 14:51:54.794462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.091 qpair failed and we were unable to recover it. 00:36:03.091 [2024-11-02 14:51:54.794585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.091 [2024-11-02 14:51:54.794610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.091 qpair failed and we were unable to recover it. 00:36:03.091 [2024-11-02 14:51:54.794749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.091 [2024-11-02 14:51:54.794775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.091 qpair failed and we were unable to recover it. 00:36:03.091 [2024-11-02 14:51:54.794895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.091 [2024-11-02 14:51:54.794921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.091 qpair failed and we were unable to recover it. 00:36:03.091 [2024-11-02 14:51:54.795053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.091 [2024-11-02 14:51:54.795080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.091 qpair failed and we were unable to recover it. 00:36:03.091 [2024-11-02 14:51:54.795224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.091 [2024-11-02 14:51:54.795250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.091 qpair failed and we were unable to recover it. 00:36:03.091 [2024-11-02 14:51:54.795411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.091 [2024-11-02 14:51:54.795437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.091 qpair failed and we were unable to recover it. 00:36:03.091 [2024-11-02 14:51:54.795559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.091 [2024-11-02 14:51:54.795585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.091 qpair failed and we were unable to recover it. 00:36:03.091 [2024-11-02 14:51:54.795727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.091 [2024-11-02 14:51:54.795753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.091 qpair failed and we were unable to recover it. 
00:36:03.091 [2024-11-02 14:51:54.795896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.091 [2024-11-02 14:51:54.795921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.091 qpair failed and we were unable to recover it. 00:36:03.091 [2024-11-02 14:51:54.796069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.091 [2024-11-02 14:51:54.796095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.091 qpair failed and we were unable to recover it. 00:36:03.091 [2024-11-02 14:51:54.796242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.091 [2024-11-02 14:51:54.796275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.091 qpair failed and we were unable to recover it. 00:36:03.091 [2024-11-02 14:51:54.796423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.091 [2024-11-02 14:51:54.796448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.091 qpair failed and we were unable to recover it. 00:36:03.091 [2024-11-02 14:51:54.796603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.091 [2024-11-02 14:51:54.796629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.091 qpair failed and we were unable to recover it. 00:36:03.091 [2024-11-02 14:51:54.796790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.091 [2024-11-02 14:51:54.796816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.091 qpair failed and we were unable to recover it. 00:36:03.091 [2024-11-02 14:51:54.796932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.091 [2024-11-02 14:51:54.796958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.091 qpair failed and we were unable to recover it. 00:36:03.091 [2024-11-02 14:51:54.797118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.091 [2024-11-02 14:51:54.797144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.091 qpair failed and we were unable to recover it. 00:36:03.091 [2024-11-02 14:51:54.797308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.091 [2024-11-02 14:51:54.797335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.091 qpair failed and we were unable to recover it. 00:36:03.091 [2024-11-02 14:51:54.797494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.091 [2024-11-02 14:51:54.797520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.091 qpair failed and we were unable to recover it. 
00:36:03.091 [2024-11-02 14:51:54.797668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.091 [2024-11-02 14:51:54.797694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.091 qpair failed and we were unable to recover it. 00:36:03.091 [2024-11-02 14:51:54.797811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.091 [2024-11-02 14:51:54.797838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.091 qpair failed and we were unable to recover it. 00:36:03.091 [2024-11-02 14:51:54.798011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.092 [2024-11-02 14:51:54.798038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.092 qpair failed and we were unable to recover it. 00:36:03.092 [2024-11-02 14:51:54.798183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.092 [2024-11-02 14:51:54.798209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.092 qpair failed and we were unable to recover it. 00:36:03.092 [2024-11-02 14:51:54.798363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.092 [2024-11-02 14:51:54.798389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.092 qpair failed and we were unable to recover it. 00:36:03.092 [2024-11-02 14:51:54.798544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.092 [2024-11-02 14:51:54.798569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.092 qpair failed and we were unable to recover it. 00:36:03.092 [2024-11-02 14:51:54.798692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.092 [2024-11-02 14:51:54.798719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.092 qpair failed and we were unable to recover it. 00:36:03.092 [2024-11-02 14:51:54.798911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.092 [2024-11-02 14:51:54.798938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.092 qpair failed and we were unable to recover it. 00:36:03.092 [2024-11-02 14:51:54.799062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.092 [2024-11-02 14:51:54.799088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.092 qpair failed and we were unable to recover it. 00:36:03.092 [2024-11-02 14:51:54.799263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.092 [2024-11-02 14:51:54.799290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.092 qpair failed and we were unable to recover it. 
00:36:03.092 [2024-11-02 14:51:54.799412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.092 [2024-11-02 14:51:54.799442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.092 qpair failed and we were unable to recover it. 00:36:03.092 [2024-11-02 14:51:54.799572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.092 [2024-11-02 14:51:54.799598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.092 qpair failed and we were unable to recover it. 00:36:03.092 [2024-11-02 14:51:54.799772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.092 [2024-11-02 14:51:54.799798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.092 qpair failed and we were unable to recover it. 00:36:03.092 [2024-11-02 14:51:54.799946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.092 [2024-11-02 14:51:54.799971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.092 qpair failed and we were unable to recover it. 00:36:03.092 [2024-11-02 14:51:54.800147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.092 [2024-11-02 14:51:54.800173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.092 qpair failed and we were unable to recover it. 00:36:03.092 [2024-11-02 14:51:54.800342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.092 [2024-11-02 14:51:54.800369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.092 qpair failed and we were unable to recover it. 00:36:03.092 [2024-11-02 14:51:54.800511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.092 [2024-11-02 14:51:54.800536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.092 qpair failed and we were unable to recover it. 00:36:03.092 [2024-11-02 14:51:54.800686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.092 [2024-11-02 14:51:54.800711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.092 qpair failed and we were unable to recover it. 00:36:03.092 [2024-11-02 14:51:54.800861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.092 [2024-11-02 14:51:54.800887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.092 qpair failed and we were unable to recover it. 00:36:03.092 [2024-11-02 14:51:54.801036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.092 [2024-11-02 14:51:54.801062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.092 qpair failed and we were unable to recover it. 
00:36:03.092 [2024-11-02 14:51:54.801216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.092 [2024-11-02 14:51:54.801242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.092 qpair failed and we were unable to recover it. 00:36:03.092 [2024-11-02 14:51:54.801397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.092 [2024-11-02 14:51:54.801423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.092 qpair failed and we were unable to recover it. 00:36:03.092 [2024-11-02 14:51:54.801550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.092 [2024-11-02 14:51:54.801576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.092 qpair failed and we were unable to recover it. 00:36:03.092 [2024-11-02 14:51:54.801755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.092 [2024-11-02 14:51:54.801781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.092 qpair failed and we were unable to recover it. 00:36:03.092 [2024-11-02 14:51:54.801940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.092 [2024-11-02 14:51:54.801966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.092 qpair failed and we were unable to recover it. 00:36:03.092 [2024-11-02 14:51:54.802076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.092 [2024-11-02 14:51:54.802102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.092 qpair failed and we were unable to recover it. 00:36:03.092 [2024-11-02 14:51:54.802244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.092 [2024-11-02 14:51:54.802284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.092 qpair failed and we were unable to recover it. 00:36:03.092 [2024-11-02 14:51:54.802464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.092 [2024-11-02 14:51:54.802489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.092 qpair failed and we were unable to recover it. 00:36:03.092 [2024-11-02 14:51:54.802641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.092 [2024-11-02 14:51:54.802665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.093 qpair failed and we were unable to recover it. 00:36:03.093 [2024-11-02 14:51:54.802814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.093 [2024-11-02 14:51:54.802845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.093 qpair failed and we were unable to recover it. 
00:36:03.093 [2024-11-02 14:51:54.802995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.093 [2024-11-02 14:51:54.803021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.093 qpair failed and we were unable to recover it. 00:36:03.093 [2024-11-02 14:51:54.803194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.093 [2024-11-02 14:51:54.803220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.093 qpair failed and we were unable to recover it. 00:36:03.093 [2024-11-02 14:51:54.803403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.093 [2024-11-02 14:51:54.803429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.093 qpair failed and we were unable to recover it. 00:36:03.093 [2024-11-02 14:51:54.803548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.093 [2024-11-02 14:51:54.803574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.093 qpair failed and we were unable to recover it. 00:36:03.093 [2024-11-02 14:51:54.803723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.093 [2024-11-02 14:51:54.803748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.093 qpair failed and we were unable to recover it. 00:36:03.093 [2024-11-02 14:51:54.803873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.093 [2024-11-02 14:51:54.803899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.093 qpair failed and we were unable to recover it. 00:36:03.093 [2024-11-02 14:51:54.804046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.093 [2024-11-02 14:51:54.804072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.093 qpair failed and we were unable to recover it. 00:36:03.093 [2024-11-02 14:51:54.804221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.093 [2024-11-02 14:51:54.804247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.093 qpair failed and we were unable to recover it. 00:36:03.093 [2024-11-02 14:51:54.804410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.093 [2024-11-02 14:51:54.804436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.093 qpair failed and we were unable to recover it. 00:36:03.093 [2024-11-02 14:51:54.804570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.093 [2024-11-02 14:51:54.804596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.093 qpair failed and we were unable to recover it. 
00:36:03.093 [2024-11-02 14:51:54.804746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.093 [2024-11-02 14:51:54.804772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.093 qpair failed and we were unable to recover it. 00:36:03.093 [2024-11-02 14:51:54.804903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.093 [2024-11-02 14:51:54.804928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.093 qpair failed and we were unable to recover it. 00:36:03.093 [2024-11-02 14:51:54.805072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.093 [2024-11-02 14:51:54.805098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.093 qpair failed and we were unable to recover it. 00:36:03.093 [2024-11-02 14:51:54.805252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.093 [2024-11-02 14:51:54.805285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.093 qpair failed and we were unable to recover it. 00:36:03.093 [2024-11-02 14:51:54.805465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.093 [2024-11-02 14:51:54.805491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.093 qpair failed and we were unable to recover it. 00:36:03.093 [2024-11-02 14:51:54.805667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.093 [2024-11-02 14:51:54.805692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.093 qpair failed and we were unable to recover it. 00:36:03.093 [2024-11-02 14:51:54.805842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.093 [2024-11-02 14:51:54.805867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.093 qpair failed and we were unable to recover it. 00:36:03.093 [2024-11-02 14:51:54.806029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.093 [2024-11-02 14:51:54.806056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.093 qpair failed and we were unable to recover it. 00:36:03.093 [2024-11-02 14:51:54.806173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.093 [2024-11-02 14:51:54.806200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.093 qpair failed and we were unable to recover it. 00:36:03.093 [2024-11-02 14:51:54.806350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.093 [2024-11-02 14:51:54.806377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.093 qpair failed and we were unable to recover it. 
00:36:03.093 [2024-11-02 14:51:54.806522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.093 [2024-11-02 14:51:54.806548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420
00:36:03.093 qpair failed and we were unable to recover it.
00:36:03.100 (the same three-line failure repeats back-to-back with fresh timestamps from 14:51:54.806522 through 14:51:54.843172: every connect() attempt to 10.0.0.2 port 4420 returns errno = 111 and the qpair cannot be recovered; only the first occurrence is shown above)
00:36:03.100 [2024-11-02 14:51:54.843330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.100 [2024-11-02 14:51:54.843357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.100 qpair failed and we were unable to recover it. 00:36:03.100 [2024-11-02 14:51:54.843509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.100 [2024-11-02 14:51:54.843536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.100 qpair failed and we were unable to recover it. 00:36:03.100 [2024-11-02 14:51:54.843656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.100 [2024-11-02 14:51:54.843681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.100 qpair failed and we were unable to recover it. 00:36:03.100 [2024-11-02 14:51:54.843830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.100 [2024-11-02 14:51:54.843856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.100 qpair failed and we were unable to recover it. 00:36:03.100 [2024-11-02 14:51:54.843983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.100 [2024-11-02 14:51:54.844009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.100 qpair failed and we were unable to recover it. 00:36:03.100 [2024-11-02 14:51:54.844186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.100 [2024-11-02 14:51:54.844214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.100 qpair failed and we were unable to recover it. 00:36:03.100 [2024-11-02 14:51:54.844390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.100 [2024-11-02 14:51:54.844418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.100 qpair failed and we were unable to recover it. 00:36:03.100 [2024-11-02 14:51:54.844566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.100 [2024-11-02 14:51:54.844592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.100 qpair failed and we were unable to recover it. 00:36:03.100 [2024-11-02 14:51:54.844747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.100 [2024-11-02 14:51:54.844773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.100 qpair failed and we were unable to recover it. 00:36:03.100 [2024-11-02 14:51:54.844930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.100 [2024-11-02 14:51:54.844957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.100 qpair failed and we were unable to recover it. 
00:36:03.100 [2024-11-02 14:51:54.845107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.100 [2024-11-02 14:51:54.845140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.100 qpair failed and we were unable to recover it. 00:36:03.100 [2024-11-02 14:51:54.845318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.100 [2024-11-02 14:51:54.845345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.100 qpair failed and we were unable to recover it. 00:36:03.100 [2024-11-02 14:51:54.845500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.100 [2024-11-02 14:51:54.845527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.100 qpair failed and we were unable to recover it. 00:36:03.100 [2024-11-02 14:51:54.845640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.100 [2024-11-02 14:51:54.845665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.100 qpair failed and we were unable to recover it. 00:36:03.100 [2024-11-02 14:51:54.845798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.100 [2024-11-02 14:51:54.845835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.100 qpair failed and we were unable to recover it. 00:36:03.100 [2024-11-02 14:51:54.846016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.100 [2024-11-02 14:51:54.846042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.100 qpair failed and we were unable to recover it. 00:36:03.100 [2024-11-02 14:51:54.846204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.100 [2024-11-02 14:51:54.846237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.100 qpair failed and we were unable to recover it. 00:36:03.100 [2024-11-02 14:51:54.846401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.100 [2024-11-02 14:51:54.846427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.100 qpair failed and we were unable to recover it. 00:36:03.100 [2024-11-02 14:51:54.846600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.100 [2024-11-02 14:51:54.846636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.100 qpair failed and we were unable to recover it. 00:36:03.100 [2024-11-02 14:51:54.846764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.100 [2024-11-02 14:51:54.846791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.100 qpair failed and we were unable to recover it. 
00:36:03.100 [2024-11-02 14:51:54.846951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.100 [2024-11-02 14:51:54.846977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.100 qpair failed and we were unable to recover it. 00:36:03.101 [2024-11-02 14:51:54.847134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-11-02 14:51:54.847176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.101 qpair failed and we were unable to recover it. 00:36:03.101 [2024-11-02 14:51:54.847347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-11-02 14:51:54.847388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.101 qpair failed and we were unable to recover it. 00:36:03.101 [2024-11-02 14:51:54.847553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-11-02 14:51:54.847582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.101 qpair failed and we were unable to recover it. 00:36:03.101 [2024-11-02 14:51:54.847743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-11-02 14:51:54.847771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.101 qpair failed and we were unable to recover it. 00:36:03.101 [2024-11-02 14:51:54.847925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-11-02 14:51:54.847954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.101 qpair failed and we were unable to recover it. 00:36:03.101 [2024-11-02 14:51:54.848106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-11-02 14:51:54.848137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.101 qpair failed and we were unable to recover it. 00:36:03.101 [2024-11-02 14:51:54.848273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-11-02 14:51:54.848303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.101 qpair failed and we were unable to recover it. 00:36:03.101 [2024-11-02 14:51:54.848471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-11-02 14:51:54.848496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.101 qpair failed and we were unable to recover it. 00:36:03.101 [2024-11-02 14:51:54.848677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-11-02 14:51:54.848706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.101 qpair failed and we were unable to recover it. 
00:36:03.101 [2024-11-02 14:51:54.848862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-11-02 14:51:54.848889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.101 qpair failed and we were unable to recover it. 00:36:03.101 [2024-11-02 14:51:54.849023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-11-02 14:51:54.849049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.101 qpair failed and we were unable to recover it. 00:36:03.101 [2024-11-02 14:51:54.849228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-11-02 14:51:54.849263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.101 qpair failed and we were unable to recover it. 00:36:03.101 [2024-11-02 14:51:54.849392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-11-02 14:51:54.849422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.101 qpair failed and we were unable to recover it. 00:36:03.101 [2024-11-02 14:51:54.849555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-11-02 14:51:54.849582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.101 qpair failed and we were unable to recover it. 00:36:03.101 [2024-11-02 14:51:54.849731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-11-02 14:51:54.849757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.101 qpair failed and we were unable to recover it. 00:36:03.101 [2024-11-02 14:51:54.849890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-11-02 14:51:54.849917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.101 qpair failed and we were unable to recover it. 00:36:03.101 [2024-11-02 14:51:54.850093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-11-02 14:51:54.850121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.101 qpair failed and we were unable to recover it. 00:36:03.101 [2024-11-02 14:51:54.850278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-11-02 14:51:54.850307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.101 qpair failed and we were unable to recover it. 00:36:03.101 [2024-11-02 14:51:54.850461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-11-02 14:51:54.850488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.101 qpair failed and we were unable to recover it. 
00:36:03.101 [2024-11-02 14:51:54.850669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-11-02 14:51:54.850698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.101 qpair failed and we were unable to recover it. 00:36:03.101 [2024-11-02 14:51:54.850821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-11-02 14:51:54.850850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.101 qpair failed and we were unable to recover it. 00:36:03.101 [2024-11-02 14:51:54.851002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-11-02 14:51:54.851029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.101 qpair failed and we were unable to recover it. 00:36:03.101 [2024-11-02 14:51:54.851154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-11-02 14:51:54.851181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.101 qpair failed and we were unable to recover it. 00:36:03.101 [2024-11-02 14:51:54.851366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-11-02 14:51:54.851394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.101 qpair failed and we were unable to recover it. 00:36:03.101 [2024-11-02 14:51:54.851518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-11-02 14:51:54.851544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.101 qpair failed and we were unable to recover it. 00:36:03.101 [2024-11-02 14:51:54.851719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-11-02 14:51:54.851746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.101 qpair failed and we were unable to recover it. 00:36:03.101 [2024-11-02 14:51:54.851874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-11-02 14:51:54.851900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.101 qpair failed and we were unable to recover it. 00:36:03.101 [2024-11-02 14:51:54.852041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-11-02 14:51:54.852068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.101 qpair failed and we were unable to recover it. 00:36:03.101 [2024-11-02 14:51:54.852225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-11-02 14:51:54.852262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.101 qpair failed and we were unable to recover it. 
00:36:03.101 [2024-11-02 14:51:54.852403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-11-02 14:51:54.852429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.101 qpair failed and we were unable to recover it. 00:36:03.101 [2024-11-02 14:51:54.852585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-11-02 14:51:54.852612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.101 qpair failed and we were unable to recover it. 00:36:03.101 [2024-11-02 14:51:54.852790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-11-02 14:51:54.852816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.101 qpair failed and we were unable to recover it. 00:36:03.101 [2024-11-02 14:51:54.852924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-11-02 14:51:54.852951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.101 qpair failed and we were unable to recover it. 00:36:03.101 [2024-11-02 14:51:54.853097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-11-02 14:51:54.853125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.101 qpair failed and we were unable to recover it. 00:36:03.101 [2024-11-02 14:51:54.853239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-11-02 14:51:54.853272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.101 qpair failed and we were unable to recover it. 00:36:03.101 [2024-11-02 14:51:54.853427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-11-02 14:51:54.853454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.101 qpair failed and we were unable to recover it. 00:36:03.101 [2024-11-02 14:51:54.853578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-11-02 14:51:54.853603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.101 qpair failed and we were unable to recover it. 00:36:03.101 [2024-11-02 14:51:54.853780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-11-02 14:51:54.853808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.101 qpair failed and we were unable to recover it. 00:36:03.101 [2024-11-02 14:51:54.853930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-11-02 14:51:54.853956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.102 qpair failed and we were unable to recover it. 
00:36:03.102 [2024-11-02 14:51:54.854144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-11-02 14:51:54.854176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.102 qpair failed and we were unable to recover it. 00:36:03.102 [2024-11-02 14:51:54.854296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-11-02 14:51:54.854323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.102 qpair failed and we were unable to recover it. 00:36:03.102 [2024-11-02 14:51:54.854471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-11-02 14:51:54.854496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.102 qpair failed and we were unable to recover it. 00:36:03.102 [2024-11-02 14:51:54.854628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-11-02 14:51:54.854655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.102 qpair failed and we were unable to recover it. 00:36:03.102 [2024-11-02 14:51:54.854828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-11-02 14:51:54.854859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.102 qpair failed and we were unable to recover it. 00:36:03.102 [2024-11-02 14:51:54.855022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-11-02 14:51:54.855050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.102 qpair failed and we were unable to recover it. 00:36:03.102 [2024-11-02 14:51:54.855203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-11-02 14:51:54.855229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.102 qpair failed and we were unable to recover it. 00:36:03.102 [2024-11-02 14:51:54.855363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-11-02 14:51:54.855389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.102 qpair failed and we were unable to recover it. 00:36:03.102 [2024-11-02 14:51:54.855517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-11-02 14:51:54.855544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.102 qpair failed and we were unable to recover it. 00:36:03.102 [2024-11-02 14:51:54.855711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-11-02 14:51:54.855737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.102 qpair failed and we were unable to recover it. 
00:36:03.102 [2024-11-02 14:51:54.855889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-11-02 14:51:54.855915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.102 qpair failed and we were unable to recover it. 00:36:03.102 [2024-11-02 14:51:54.856041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-11-02 14:51:54.856067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.102 qpair failed and we were unable to recover it. 00:36:03.102 [2024-11-02 14:51:54.856246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-11-02 14:51:54.856282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.102 qpair failed and we were unable to recover it. 00:36:03.102 [2024-11-02 14:51:54.856430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-11-02 14:51:54.856456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.102 qpair failed and we were unable to recover it. 00:36:03.102 [2024-11-02 14:51:54.856583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-11-02 14:51:54.856608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.102 qpair failed and we were unable to recover it. 00:36:03.102 [2024-11-02 14:51:54.856754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-11-02 14:51:54.856779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.102 qpair failed and we were unable to recover it. 00:36:03.102 [2024-11-02 14:51:54.856933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-11-02 14:51:54.856964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.102 qpair failed and we were unable to recover it. 00:36:03.102 [2024-11-02 14:51:54.857086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-11-02 14:51:54.857111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.102 qpair failed and we were unable to recover it. 00:36:03.102 [2024-11-02 14:51:54.857230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-11-02 14:51:54.857266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.102 qpair failed and we were unable to recover it. 00:36:03.102 [2024-11-02 14:51:54.857431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-11-02 14:51:54.857457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.102 qpair failed and we were unable to recover it. 
00:36:03.102 [2024-11-02 14:51:54.857623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-11-02 14:51:54.857648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.102 qpair failed and we were unable to recover it. 00:36:03.102 [2024-11-02 14:51:54.857766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-11-02 14:51:54.857791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.102 qpair failed and we were unable to recover it. 00:36:03.102 [2024-11-02 14:51:54.857936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-11-02 14:51:54.857961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.102 qpair failed and we were unable to recover it. 00:36:03.102 [2024-11-02 14:51:54.858092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-11-02 14:51:54.858118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.102 qpair failed and we were unable to recover it. 00:36:03.102 [2024-11-02 14:51:54.858302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-11-02 14:51:54.858328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.102 qpair failed and we were unable to recover it. 00:36:03.102 [2024-11-02 14:51:54.858441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-11-02 14:51:54.858467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.102 qpair failed and we were unable to recover it. 00:36:03.102 [2024-11-02 14:51:54.858620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-11-02 14:51:54.858646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.102 qpair failed and we were unable to recover it. 00:36:03.102 [2024-11-02 14:51:54.858775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-11-02 14:51:54.858800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.102 qpair failed and we were unable to recover it. 00:36:03.102 [2024-11-02 14:51:54.858951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-11-02 14:51:54.858978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.102 qpair failed and we were unable to recover it. 00:36:03.102 [2024-11-02 14:51:54.859140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-11-02 14:51:54.859165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.102 qpair failed and we were unable to recover it. 
00:36:03.102 [2024-11-02 14:51:54.859302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-11-02 14:51:54.859328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.102 qpair failed and we were unable to recover it. 00:36:03.102 [2024-11-02 14:51:54.859478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-11-02 14:51:54.859504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.102 qpair failed and we were unable to recover it. 00:36:03.102 [2024-11-02 14:51:54.859650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-11-02 14:51:54.859676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.102 qpair failed and we were unable to recover it. 00:36:03.103 [2024-11-02 14:51:54.859819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-11-02 14:51:54.859844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.103 qpair failed and we were unable to recover it. 00:36:03.103 [2024-11-02 14:51:54.860016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-11-02 14:51:54.860041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.103 qpair failed and we were unable to recover it. 00:36:03.103 [2024-11-02 14:51:54.860194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-11-02 14:51:54.860220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.103 qpair failed and we were unable to recover it. 00:36:03.103 [2024-11-02 14:51:54.860438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-11-02 14:51:54.860465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.103 qpair failed and we were unable to recover it. 00:36:03.103 [2024-11-02 14:51:54.860639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-11-02 14:51:54.860665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.103 qpair failed and we were unable to recover it. 00:36:03.103 [2024-11-02 14:51:54.860777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-11-02 14:51:54.860803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.103 qpair failed and we were unable to recover it. 00:36:03.103 [2024-11-02 14:51:54.860929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-11-02 14:51:54.860954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.103 qpair failed and we were unable to recover it. 
00:36:03.103 [2024-11-02 14:51:54.861077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-11-02 14:51:54.861103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.103 qpair failed and we were unable to recover it. 00:36:03.103 [2024-11-02 14:51:54.861251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-11-02 14:51:54.861304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.103 qpair failed and we were unable to recover it. 00:36:03.103 [2024-11-02 14:51:54.861467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-11-02 14:51:54.861492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.103 qpair failed and we were unable to recover it. 00:36:03.103 [2024-11-02 14:51:54.861623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-11-02 14:51:54.861649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.103 qpair failed and we were unable to recover it. 00:36:03.103 [2024-11-02 14:51:54.861831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-11-02 14:51:54.861857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.103 qpair failed and we were unable to recover it. 00:36:03.103 [2024-11-02 14:51:54.862007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-11-02 14:51:54.862032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.103 qpair failed and we were unable to recover it. 00:36:03.103 [2024-11-02 14:51:54.862147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-11-02 14:51:54.862173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.103 qpair failed and we were unable to recover it. 00:36:03.103 [2024-11-02 14:51:54.862303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-11-02 14:51:54.862330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.103 qpair failed and we were unable to recover it. 00:36:03.103 [2024-11-02 14:51:54.862454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-11-02 14:51:54.862479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.103 qpair failed and we were unable to recover it. 00:36:03.103 [2024-11-02 14:51:54.862619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-11-02 14:51:54.862659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.103 qpair failed and we were unable to recover it. 
00:36:03.103 [2024-11-02 14:51:54.862816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-11-02 14:51:54.862843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.103 qpair failed and we were unable to recover it. 00:36:03.103 [2024-11-02 14:51:54.862991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-11-02 14:51:54.863017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.103 qpair failed and we were unable to recover it. 00:36:03.103 [2024-11-02 14:51:54.863139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-11-02 14:51:54.863164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.103 qpair failed and we were unable to recover it. 00:36:03.103 [2024-11-02 14:51:54.863305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-11-02 14:51:54.863332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.103 qpair failed and we were unable to recover it. 00:36:03.103 [2024-11-02 14:51:54.863458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-11-02 14:51:54.863483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.103 qpair failed and we were unable to recover it. 00:36:03.103 [2024-11-02 14:51:54.863616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-11-02 14:51:54.863642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.103 qpair failed and we were unable to recover it. 00:36:03.103 [2024-11-02 14:51:54.863821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-11-02 14:51:54.863847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.103 qpair failed and we were unable to recover it. 00:36:03.103 [2024-11-02 14:51:54.863982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-11-02 14:51:54.864010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.103 qpair failed and we were unable to recover it. 00:36:03.103 [2024-11-02 14:51:54.864189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-11-02 14:51:54.864214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.103 qpair failed and we were unable to recover it. 00:36:03.103 [2024-11-02 14:51:54.864377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-11-02 14:51:54.864403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.103 qpair failed and we were unable to recover it. 
00:36:03.103 [2024-11-02 14:51:54.864559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-11-02 14:51:54.864586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.103 qpair failed and we were unable to recover it. 00:36:03.103 [2024-11-02 14:51:54.864734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-11-02 14:51:54.864759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.103 qpair failed and we were unable to recover it. 00:36:03.103 [2024-11-02 14:51:54.864929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-11-02 14:51:54.864954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.103 qpair failed and we were unable to recover it. 00:36:03.103 [2024-11-02 14:51:54.865132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-11-02 14:51:54.865158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.103 qpair failed and we were unable to recover it. 00:36:03.103 [2024-11-02 14:51:54.865308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-11-02 14:51:54.865334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.103 qpair failed and we were unable to recover it. 00:36:03.103 [2024-11-02 14:51:54.865499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-11-02 14:51:54.865525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.103 qpair failed and we were unable to recover it. 00:36:03.103 [2024-11-02 14:51:54.865668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-11-02 14:51:54.865694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.103 qpair failed and we were unable to recover it. 00:36:03.103 [2024-11-02 14:51:54.865836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-11-02 14:51:54.865862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.103 qpair failed and we were unable to recover it. 00:36:03.103 [2024-11-02 14:51:54.866043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-11-02 14:51:54.866068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.103 qpair failed and we were unable to recover it. 00:36:03.103 [2024-11-02 14:51:54.866217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-11-02 14:51:54.866242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.103 qpair failed and we were unable to recover it. 
00:36:03.103 [2024-11-02 14:51:54.866383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-11-02 14:51:54.866409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.103 qpair failed and we were unable to recover it. 00:36:03.103 [2024-11-02 14:51:54.866566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-11-02 14:51:54.866593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.104 qpair failed and we were unable to recover it. 00:36:03.104 [2024-11-02 14:51:54.866738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-11-02 14:51:54.866764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.104 qpair failed and we were unable to recover it. 00:36:03.104 [2024-11-02 14:51:54.866891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-11-02 14:51:54.866916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.104 qpair failed and we were unable to recover it. 00:36:03.104 [2024-11-02 14:51:54.867039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-11-02 14:51:54.867065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.104 qpair failed and we were unable to recover it. 00:36:03.104 [2024-11-02 14:51:54.867206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-11-02 14:51:54.867231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.104 qpair failed and we were unable to recover it. 00:36:03.104 [2024-11-02 14:51:54.867390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-11-02 14:51:54.867417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.104 qpair failed and we were unable to recover it. 00:36:03.104 [2024-11-02 14:51:54.867602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-11-02 14:51:54.867627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.104 qpair failed and we were unable to recover it. 00:36:03.104 [2024-11-02 14:51:54.867803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-11-02 14:51:54.867829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.104 qpair failed and we were unable to recover it. 00:36:03.104 [2024-11-02 14:51:54.867956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-11-02 14:51:54.867983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.104 qpair failed and we were unable to recover it. 
00:36:03.104 [2024-11-02 14:51:54.868117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-11-02 14:51:54.868143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.104 qpair failed and we were unable to recover it. 00:36:03.104 [2024-11-02 14:51:54.868300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-11-02 14:51:54.868327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.104 qpair failed and we were unable to recover it. 00:36:03.104 [2024-11-02 14:51:54.868481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-11-02 14:51:54.868507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.104 qpair failed and we were unable to recover it. 00:36:03.104 [2024-11-02 14:51:54.868652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-11-02 14:51:54.868678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.104 qpair failed and we were unable to recover it. 00:36:03.104 [2024-11-02 14:51:54.868831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-11-02 14:51:54.868863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.104 qpair failed and we were unable to recover it. 00:36:03.104 [2024-11-02 14:51:54.869017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-11-02 14:51:54.869042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.104 qpair failed and we were unable to recover it. 00:36:03.104 [2024-11-02 14:51:54.869186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-11-02 14:51:54.869212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.104 qpair failed and we were unable to recover it. 00:36:03.104 [2024-11-02 14:51:54.869364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-11-02 14:51:54.869390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.104 qpair failed and we were unable to recover it. 00:36:03.104 [2024-11-02 14:51:54.869539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-11-02 14:51:54.869565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.104 qpair failed and we were unable to recover it. 00:36:03.104 [2024-11-02 14:51:54.869704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-11-02 14:51:54.869729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.104 qpair failed and we were unable to recover it. 
00:36:03.104 [2024-11-02 14:51:54.869850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-11-02 14:51:54.869877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.104 qpair failed and we were unable to recover it. 00:36:03.104 [2024-11-02 14:51:54.870051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-11-02 14:51:54.870076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.104 qpair failed and we were unable to recover it. 00:36:03.104 [2024-11-02 14:51:54.870229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-11-02 14:51:54.870262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.104 qpair failed and we were unable to recover it. 00:36:03.104 [2024-11-02 14:51:54.870383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-11-02 14:51:54.870409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.104 qpair failed and we were unable to recover it. 00:36:03.104 [2024-11-02 14:51:54.870560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-11-02 14:51:54.870586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.104 qpair failed and we were unable to recover it. 00:36:03.104 [2024-11-02 14:51:54.870728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-11-02 14:51:54.870754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.104 qpair failed and we were unable to recover it. 00:36:03.104 [2024-11-02 14:51:54.870897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-11-02 14:51:54.870923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.104 qpair failed and we were unable to recover it. 00:36:03.104 [2024-11-02 14:51:54.871047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-11-02 14:51:54.871072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.104 qpair failed and we were unable to recover it. 00:36:03.104 [2024-11-02 14:51:54.871222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-11-02 14:51:54.871248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.104 qpair failed and we were unable to recover it. 00:36:03.104 [2024-11-02 14:51:54.871384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-11-02 14:51:54.871409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.104 qpair failed and we were unable to recover it. 
00:36:03.104 [2024-11-02 14:51:54.871563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-11-02 14:51:54.871589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.104 qpair failed and we were unable to recover it. 00:36:03.104 [2024-11-02 14:51:54.871738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-11-02 14:51:54.871764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.104 qpair failed and we were unable to recover it. 00:36:03.104 [2024-11-02 14:51:54.871910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-11-02 14:51:54.871936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.104 qpair failed and we were unable to recover it. 00:36:03.104 [2024-11-02 14:51:54.872052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-11-02 14:51:54.872078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.104 qpair failed and we were unable to recover it. 00:36:03.104 [2024-11-02 14:51:54.872228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-11-02 14:51:54.872254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.104 qpair failed and we were unable to recover it. 00:36:03.104 [2024-11-02 14:51:54.872394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-11-02 14:51:54.872419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.104 qpair failed and we were unable to recover it. 00:36:03.104 [2024-11-02 14:51:54.872571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-11-02 14:51:54.872597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.104 qpair failed and we were unable to recover it. 00:36:03.104 [2024-11-02 14:51:54.872747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-11-02 14:51:54.872773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.104 qpair failed and we were unable to recover it. 00:36:03.104 [2024-11-02 14:51:54.872948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-11-02 14:51:54.872974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.104 qpair failed and we were unable to recover it. 00:36:03.104 [2024-11-02 14:51:54.873116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-11-02 14:51:54.873141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.104 qpair failed and we were unable to recover it. 
00:36:03.104 [2024-11-02 14:51:54.873282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.105 [2024-11-02 14:51:54.873309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.105 qpair failed and we were unable to recover it. 00:36:03.105 [2024-11-02 14:51:54.873426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.105 [2024-11-02 14:51:54.873457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.105 qpair failed and we were unable to recover it. 00:36:03.105 [2024-11-02 14:51:54.873611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.105 [2024-11-02 14:51:54.873637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.105 qpair failed and we were unable to recover it. 00:36:03.105 [2024-11-02 14:51:54.873785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.105 [2024-11-02 14:51:54.873811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.105 qpair failed and we were unable to recover it. 00:36:03.105 [2024-11-02 14:51:54.873957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.105 [2024-11-02 14:51:54.873983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.105 qpair failed and we were unable to recover it. 00:36:03.105 [2024-11-02 14:51:54.874096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.105 [2024-11-02 14:51:54.874121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.105 qpair failed and we were unable to recover it. 00:36:03.105 [2024-11-02 14:51:54.874242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.105 [2024-11-02 14:51:54.874277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.105 qpair failed and we were unable to recover it. 00:36:03.105 [2024-11-02 14:51:54.874400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.105 [2024-11-02 14:51:54.874439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.105 qpair failed and we were unable to recover it. 00:36:03.105 [2024-11-02 14:51:54.874567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.105 [2024-11-02 14:51:54.874593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.105 qpair failed and we were unable to recover it. 00:36:03.105 [2024-11-02 14:51:54.874719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.105 [2024-11-02 14:51:54.874744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.105 qpair failed and we were unable to recover it. 
00:36:03.105 [2024-11-02 14:51:54.874896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.105 [2024-11-02 14:51:54.874922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.105 qpair failed and we were unable to recover it. 00:36:03.105 [2024-11-02 14:51:54.875067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.105 [2024-11-02 14:51:54.875092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.105 qpair failed and we were unable to recover it. 00:36:03.105 [2024-11-02 14:51:54.875209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.105 [2024-11-02 14:51:54.875234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.105 qpair failed and we were unable to recover it. 00:36:03.105 [2024-11-02 14:51:54.875384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.105 [2024-11-02 14:51:54.875410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.105 qpair failed and we were unable to recover it. 00:36:03.105 [2024-11-02 14:51:54.875568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.105 [2024-11-02 14:51:54.875594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.105 qpair failed and we were unable to recover it. 00:36:03.105 [2024-11-02 14:51:54.875740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.105 [2024-11-02 14:51:54.875765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.105 qpair failed and we were unable to recover it. 00:36:03.105 [2024-11-02 14:51:54.875889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.105 [2024-11-02 14:51:54.875916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.105 qpair failed and we were unable to recover it. 00:36:03.105 [2024-11-02 14:51:54.876070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.105 [2024-11-02 14:51:54.876096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.105 qpair failed and we were unable to recover it. 00:36:03.105 [2024-11-02 14:51:54.876219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.105 [2024-11-02 14:51:54.876246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.105 qpair failed and we were unable to recover it. 00:36:03.105 [2024-11-02 14:51:54.876401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.105 [2024-11-02 14:51:54.876427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.105 qpair failed and we were unable to recover it. 
00:36:03.105 [2024-11-02 14:51:54.876549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.105 [2024-11-02 14:51:54.876574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.105 qpair failed and we were unable to recover it. 00:36:03.105 [2024-11-02 14:51:54.876723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.105 [2024-11-02 14:51:54.876750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.105 qpair failed and we were unable to recover it. 00:36:03.105 [2024-11-02 14:51:54.876884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.105 [2024-11-02 14:51:54.876909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.105 qpair failed and we were unable to recover it. 00:36:03.105 [2024-11-02 14:51:54.877058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.105 [2024-11-02 14:51:54.877083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.105 qpair failed and we were unable to recover it. 00:36:03.105 [2024-11-02 14:51:54.877210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.105 [2024-11-02 14:51:54.877236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.105 qpair failed and we were unable to recover it. 00:36:03.105 [2024-11-02 14:51:54.877381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.105 [2024-11-02 14:51:54.877407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.105 qpair failed and we were unable to recover it. 00:36:03.105 [2024-11-02 14:51:54.877538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.105 [2024-11-02 14:51:54.877565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.105 qpair failed and we were unable to recover it. 00:36:03.105 [2024-11-02 14:51:54.877680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.105 [2024-11-02 14:51:54.877707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.105 qpair failed and we were unable to recover it. 00:36:03.105 [2024-11-02 14:51:54.877842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.105 [2024-11-02 14:51:54.877872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.105 qpair failed and we were unable to recover it. 00:36:03.105 [2024-11-02 14:51:54.877998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.105 [2024-11-02 14:51:54.878024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.105 qpair failed and we were unable to recover it. 
00:36:03.105 [2024-11-02 14:51:54.878143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.105 [2024-11-02 14:51:54.878168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.105 qpair failed and we were unable to recover it. 00:36:03.105 [2024-11-02 14:51:54.878336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.105 [2024-11-02 14:51:54.878377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.105 qpair failed and we were unable to recover it. 00:36:03.105 [2024-11-02 14:51:54.878567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.105 [2024-11-02 14:51:54.878595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.105 qpair failed and we were unable to recover it. 00:36:03.105 [2024-11-02 14:51:54.878750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.105 [2024-11-02 14:51:54.878776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.105 qpair failed and we were unable to recover it. 00:36:03.105 [2024-11-02 14:51:54.878930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.105 [2024-11-02 14:51:54.878956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.105 qpair failed and we were unable to recover it. 00:36:03.105 [2024-11-02 14:51:54.879138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.105 [2024-11-02 14:51:54.879164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.105 qpair failed and we were unable to recover it. 00:36:03.105 [2024-11-02 14:51:54.879291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.105 [2024-11-02 14:51:54.879318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.105 qpair failed and we were unable to recover it. 00:36:03.105 [2024-11-02 14:51:54.879463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.105 [2024-11-02 14:51:54.879490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.105 qpair failed and we were unable to recover it. 00:36:03.106 [2024-11-02 14:51:54.879639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.106 [2024-11-02 14:51:54.879666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.106 qpair failed and we were unable to recover it. 00:36:03.106 [2024-11-02 14:51:54.879814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.106 [2024-11-02 14:51:54.879840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.106 qpair failed and we were unable to recover it. 
00:36:03.106 [2024-11-02 14:51:54.880015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.106 [2024-11-02 14:51:54.880040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.106 qpair failed and we were unable to recover it. 00:36:03.106 [2024-11-02 14:51:54.880192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.106 [2024-11-02 14:51:54.880218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.106 qpair failed and we were unable to recover it. 00:36:03.106 [2024-11-02 14:51:54.880391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.106 [2024-11-02 14:51:54.880418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.106 qpair failed and we were unable to recover it. 00:36:03.106 [2024-11-02 14:51:54.880543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.106 [2024-11-02 14:51:54.880569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.106 qpair failed and we were unable to recover it. 00:36:03.106 [2024-11-02 14:51:54.880744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.106 [2024-11-02 14:51:54.880770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.106 qpair failed and we were unable to recover it. 00:36:03.106 [2024-11-02 14:51:54.880890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.106 [2024-11-02 14:51:54.880916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.106 qpair failed and we were unable to recover it. 00:36:03.106 [2024-11-02 14:51:54.881044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.106 [2024-11-02 14:51:54.881071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.106 qpair failed and we were unable to recover it. 00:36:03.106 [2024-11-02 14:51:54.881242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.106 [2024-11-02 14:51:54.881276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.106 qpair failed and we were unable to recover it. 00:36:03.106 [2024-11-02 14:51:54.881457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.106 [2024-11-02 14:51:54.881484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.106 qpair failed and we were unable to recover it. 00:36:03.106 [2024-11-02 14:51:54.881633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.106 [2024-11-02 14:51:54.881659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.106 qpair failed and we were unable to recover it. 
00:36:03.106 [2024-11-02 14:51:54.881778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.106 [2024-11-02 14:51:54.881804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.106 qpair failed and we were unable to recover it. 00:36:03.106 [2024-11-02 14:51:54.881966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.106 [2024-11-02 14:51:54.882006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.106 qpair failed and we were unable to recover it. 00:36:03.106 [2024-11-02 14:51:54.882195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.106 [2024-11-02 14:51:54.882224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.106 qpair failed and we were unable to recover it. 00:36:03.106 [2024-11-02 14:51:54.882352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.106 [2024-11-02 14:51:54.882380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.106 qpair failed and we were unable to recover it. 00:36:03.106 [2024-11-02 14:51:54.882499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.106 [2024-11-02 14:51:54.882525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.106 qpair failed and we were unable to recover it. 00:36:03.106 [2024-11-02 14:51:54.882676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.106 [2024-11-02 14:51:54.882707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.106 qpair failed and we were unable to recover it. 00:36:03.106 [2024-11-02 14:51:54.882855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.106 [2024-11-02 14:51:54.882879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.106 qpair failed and we were unable to recover it. 00:36:03.106 [2024-11-02 14:51:54.882997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.106 [2024-11-02 14:51:54.883023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.106 qpair failed and we were unable to recover it. 00:36:03.106 [2024-11-02 14:51:54.883174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.106 [2024-11-02 14:51:54.883199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.106 qpair failed and we were unable to recover it. 00:36:03.106 [2024-11-02 14:51:54.883330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.106 [2024-11-02 14:51:54.883359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.106 qpair failed and we were unable to recover it. 
00:36:03.106 [2024-11-02 14:51:54.883495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.106 [2024-11-02 14:51:54.883520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.106 qpair failed and we were unable to recover it. 00:36:03.106 [2024-11-02 14:51:54.883641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.106 [2024-11-02 14:51:54.883667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.106 qpair failed and we were unable to recover it. 00:36:03.106 [2024-11-02 14:51:54.883817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.106 [2024-11-02 14:51:54.883844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.106 qpair failed and we were unable to recover it. 00:36:03.106 [2024-11-02 14:51:54.883994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.106 [2024-11-02 14:51:54.884019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.106 qpair failed and we were unable to recover it. 00:36:03.106 [2024-11-02 14:51:54.884173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.106 [2024-11-02 14:51:54.884199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.106 qpair failed and we were unable to recover it. 00:36:03.106 [2024-11-02 14:51:54.884315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.107 [2024-11-02 14:51:54.884343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.107 qpair failed and we were unable to recover it. 00:36:03.107 [2024-11-02 14:51:54.884461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.107 [2024-11-02 14:51:54.884487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.107 qpair failed and we were unable to recover it. 00:36:03.107 [2024-11-02 14:51:54.884640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.107 [2024-11-02 14:51:54.884665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.107 qpair failed and we were unable to recover it. 00:36:03.107 [2024-11-02 14:51:54.884780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.107 [2024-11-02 14:51:54.884806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.107 qpair failed and we were unable to recover it. 00:36:03.107 [2024-11-02 14:51:54.884931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.107 [2024-11-02 14:51:54.884957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.107 qpair failed and we were unable to recover it. 
00:36:03.107 [2024-11-02 14:51:54.885103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.107 [2024-11-02 14:51:54.885129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.107 qpair failed and we were unable to recover it. 00:36:03.107 [2024-11-02 14:51:54.885291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.107 [2024-11-02 14:51:54.885319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.107 qpair failed and we were unable to recover it. 00:36:03.107 [2024-11-02 14:51:54.885473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.107 [2024-11-02 14:51:54.885499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.107 qpair failed and we were unable to recover it. 00:36:03.107 [2024-11-02 14:51:54.885632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.107 [2024-11-02 14:51:54.885658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.107 qpair failed and we were unable to recover it. 00:36:03.107 [2024-11-02 14:51:54.885792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.107 [2024-11-02 14:51:54.885818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.107 qpair failed and we were unable to recover it. 00:36:03.107 [2024-11-02 14:51:54.885941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.107 [2024-11-02 14:51:54.885966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.107 qpair failed and we were unable to recover it. 00:36:03.107 [2024-11-02 14:51:54.886112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.107 [2024-11-02 14:51:54.886138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.107 qpair failed and we were unable to recover it. 00:36:03.107 [2024-11-02 14:51:54.886246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.107 [2024-11-02 14:51:54.886280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.107 qpair failed and we were unable to recover it. 00:36:03.107 [2024-11-02 14:51:54.886413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.107 [2024-11-02 14:51:54.886438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.107 qpair failed and we were unable to recover it. 00:36:03.107 [2024-11-02 14:51:54.886599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.107 [2024-11-02 14:51:54.886625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.107 qpair failed and we were unable to recover it. 
00:36:03.107 [2024-11-02 14:51:54.886771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.107 [2024-11-02 14:51:54.886798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.107 qpair failed and we were unable to recover it. 00:36:03.107 [2024-11-02 14:51:54.886952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.107 [2024-11-02 14:51:54.886977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.107 qpair failed and we were unable to recover it. 00:36:03.107 [2024-11-02 14:51:54.887139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.107 [2024-11-02 14:51:54.887170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.107 qpair failed and we were unable to recover it. 00:36:03.107 [2024-11-02 14:51:54.887316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.107 [2024-11-02 14:51:54.887344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.107 qpair failed and we were unable to recover it. 00:36:03.107 [2024-11-02 14:51:54.887495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.107 [2024-11-02 14:51:54.887521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.107 qpair failed and we were unable to recover it. 00:36:03.107 [2024-11-02 14:51:54.887650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.107 [2024-11-02 14:51:54.887675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.107 qpair failed and we were unable to recover it. 00:36:03.107 [2024-11-02 14:51:54.887850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.107 [2024-11-02 14:51:54.887876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.107 qpair failed and we were unable to recover it. 00:36:03.107 [2024-11-02 14:51:54.888021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.107 [2024-11-02 14:51:54.888047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.107 qpair failed and we were unable to recover it. 00:36:03.107 [2024-11-02 14:51:54.888170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.107 [2024-11-02 14:51:54.888196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.107 qpair failed and we were unable to recover it. 00:36:03.107 [2024-11-02 14:51:54.888374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.107 [2024-11-02 14:51:54.888400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.107 qpair failed and we were unable to recover it. 
00:36:03.107 [2024-11-02 14:51:54.888545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.107 [2024-11-02 14:51:54.888571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.107 qpair failed and we were unable to recover it. 00:36:03.107 [2024-11-02 14:51:54.888746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.107 [2024-11-02 14:51:54.888771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.107 qpair failed and we were unable to recover it. 00:36:03.107 [2024-11-02 14:51:54.888888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.107 [2024-11-02 14:51:54.888914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.107 qpair failed and we were unable to recover it. 00:36:03.107 [2024-11-02 14:51:54.889061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.107 [2024-11-02 14:51:54.889087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.107 qpair failed and we were unable to recover it. 00:36:03.107 [2024-11-02 14:51:54.889252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.107 [2024-11-02 14:51:54.889286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.107 qpair failed and we were unable to recover it. 00:36:03.107 [2024-11-02 14:51:54.889458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.107 [2024-11-02 14:51:54.889484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.107 qpair failed and we were unable to recover it. 00:36:03.107 [2024-11-02 14:51:54.889639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.107 [2024-11-02 14:51:54.889665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.107 qpair failed and we were unable to recover it. 00:36:03.107 [2024-11-02 14:51:54.889815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.107 [2024-11-02 14:51:54.889840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.107 qpair failed and we were unable to recover it. 00:36:03.107 [2024-11-02 14:51:54.889990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.107 [2024-11-02 14:51:54.890016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.107 qpair failed and we were unable to recover it. 00:36:03.107 [2024-11-02 14:51:54.890162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.107 [2024-11-02 14:51:54.890187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.107 qpair failed and we were unable to recover it. 
00:36:03.107 [2024-11-02 14:51:54.890340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.107 [2024-11-02 14:51:54.890367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.107 qpair failed and we were unable to recover it. 00:36:03.107 [2024-11-02 14:51:54.890491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.107 [2024-11-02 14:51:54.890517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.107 qpair failed and we were unable to recover it. 00:36:03.107 [2024-11-02 14:51:54.890697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.107 [2024-11-02 14:51:54.890723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.107 qpair failed and we were unable to recover it. 00:36:03.107 [2024-11-02 14:51:54.890858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.107 [2024-11-02 14:51:54.890884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.108 qpair failed and we were unable to recover it. 00:36:03.108 [2024-11-02 14:51:54.891032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.108 [2024-11-02 14:51:54.891059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.108 qpair failed and we were unable to recover it. 00:36:03.108 [2024-11-02 14:51:54.891206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.108 [2024-11-02 14:51:54.891231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.108 qpair failed and we were unable to recover it. 00:36:03.108 [2024-11-02 14:51:54.891374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.108 [2024-11-02 14:51:54.891401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.108 qpair failed and we were unable to recover it. 00:36:03.108 [2024-11-02 14:51:54.891526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.108 [2024-11-02 14:51:54.891552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.108 qpair failed and we were unable to recover it. 00:36:03.108 [2024-11-02 14:51:54.891699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.108 [2024-11-02 14:51:54.891725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.108 qpair failed and we were unable to recover it. 00:36:03.108 [2024-11-02 14:51:54.891846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.108 [2024-11-02 14:51:54.891872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.108 qpair failed and we were unable to recover it. 
00:36:03.108 [2024-11-02 14:51:54.892005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.108 [2024-11-02 14:51:54.892032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.108 qpair failed and we were unable to recover it. 00:36:03.108 [2024-11-02 14:51:54.892183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.108 [2024-11-02 14:51:54.892208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.108 qpair failed and we were unable to recover it. 00:36:03.108 [2024-11-02 14:51:54.892342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.108 [2024-11-02 14:51:54.892368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.108 qpair failed and we were unable to recover it. 00:36:03.108 [2024-11-02 14:51:54.892538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.108 [2024-11-02 14:51:54.892563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.108 qpair failed and we were unable to recover it. 00:36:03.108 [2024-11-02 14:51:54.892707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.108 [2024-11-02 14:51:54.892732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.108 qpair failed and we were unable to recover it. 00:36:03.108 [2024-11-02 14:51:54.892907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.108 [2024-11-02 14:51:54.892933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.108 qpair failed and we were unable to recover it. 00:36:03.108 [2024-11-02 14:51:54.893083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.108 [2024-11-02 14:51:54.893109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.108 qpair failed and we were unable to recover it. 00:36:03.108 [2024-11-02 14:51:54.893249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.108 [2024-11-02 14:51:54.893286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.108 qpair failed and we were unable to recover it. 00:36:03.108 [2024-11-02 14:51:54.893441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.108 [2024-11-02 14:51:54.893467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.108 qpair failed and we were unable to recover it. 00:36:03.108 [2024-11-02 14:51:54.893640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.108 [2024-11-02 14:51:54.893666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.108 qpair failed and we were unable to recover it. 
00:36:03.108 [2024-11-02 14:51:54.893842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.108 [2024-11-02 14:51:54.893868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.108 qpair failed and we were unable to recover it. 00:36:03.108 [2024-11-02 14:51:54.893986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.108 [2024-11-02 14:51:54.894011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.108 qpair failed and we were unable to recover it. 00:36:03.108 [2024-11-02 14:51:54.894188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.108 [2024-11-02 14:51:54.894214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.108 qpair failed and we were unable to recover it. 00:36:03.108 [2024-11-02 14:51:54.894356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.108 [2024-11-02 14:51:54.894386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.108 qpair failed and we were unable to recover it. 00:36:03.108 [2024-11-02 14:51:54.894511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.108 [2024-11-02 14:51:54.894537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.108 qpair failed and we were unable to recover it. 00:36:03.108 [2024-11-02 14:51:54.894686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.108 [2024-11-02 14:51:54.894712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.108 qpair failed and we were unable to recover it. 00:36:03.108 [2024-11-02 14:51:54.894840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.108 [2024-11-02 14:51:54.894866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.108 qpair failed and we were unable to recover it. 00:36:03.108 [2024-11-02 14:51:54.895011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.108 [2024-11-02 14:51:54.895037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.108 qpair failed and we were unable to recover it. 00:36:03.108 [2024-11-02 14:51:54.895171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.108 [2024-11-02 14:51:54.895198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.108 qpair failed and we were unable to recover it. 00:36:03.108 [2024-11-02 14:51:54.895332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.108 [2024-11-02 14:51:54.895358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.108 qpair failed and we were unable to recover it. 
00:36:03.108 [2024-11-02 14:51:54.895513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.108 [2024-11-02 14:51:54.895539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.108 qpair failed and we were unable to recover it. 00:36:03.108 [2024-11-02 14:51:54.895708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.108 [2024-11-02 14:51:54.895734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.108 qpair failed and we were unable to recover it. 00:36:03.108 [2024-11-02 14:51:54.895880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.108 [2024-11-02 14:51:54.895906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.108 qpair failed and we were unable to recover it. 00:36:03.108 [2024-11-02 14:51:54.896078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.108 [2024-11-02 14:51:54.896104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.108 qpair failed and we were unable to recover it. 00:36:03.108 [2024-11-02 14:51:54.896251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.108 [2024-11-02 14:51:54.896286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.108 qpair failed and we were unable to recover it. 00:36:03.108 [2024-11-02 14:51:54.896436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.108 [2024-11-02 14:51:54.896462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.108 qpair failed and we were unable to recover it. 00:36:03.108 [2024-11-02 14:51:54.896637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.108 [2024-11-02 14:51:54.896662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.108 qpair failed and we were unable to recover it. 00:36:03.108 [2024-11-02 14:51:54.896817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.108 [2024-11-02 14:51:54.896843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.108 qpair failed and we were unable to recover it. 00:36:03.108 [2024-11-02 14:51:54.896997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.108 [2024-11-02 14:51:54.897024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.108 qpair failed and we were unable to recover it. 00:36:03.108 [2024-11-02 14:51:54.897188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.109 [2024-11-02 14:51:54.897213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.109 qpair failed and we were unable to recover it. 
00:36:03.109 [2024-11-02 14:51:54.897402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.109 [2024-11-02 14:51:54.897429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.109 qpair failed and we were unable to recover it. 00:36:03.109 [2024-11-02 14:51:54.897557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.109 [2024-11-02 14:51:54.897583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.109 qpair failed and we were unable to recover it. 00:36:03.109 [2024-11-02 14:51:54.897732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.109 [2024-11-02 14:51:54.897758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.109 qpair failed and we were unable to recover it. 00:36:03.109 [2024-11-02 14:51:54.897886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.109 [2024-11-02 14:51:54.897912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.109 qpair failed and we were unable to recover it. 00:36:03.109 [2024-11-02 14:51:54.898052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.109 [2024-11-02 14:51:54.898077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.109 qpair failed and we were unable to recover it. 00:36:03.109 [2024-11-02 14:51:54.898195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.109 [2024-11-02 14:51:54.898221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.109 qpair failed and we were unable to recover it. 00:36:03.109 [2024-11-02 14:51:54.898381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.109 [2024-11-02 14:51:54.898407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.109 qpair failed and we were unable to recover it. 00:36:03.109 [2024-11-02 14:51:54.898553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.109 [2024-11-02 14:51:54.898579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.109 qpair failed and we were unable to recover it. 00:36:03.109 [2024-11-02 14:51:54.898730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.109 [2024-11-02 14:51:54.898756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.109 qpair failed and we were unable to recover it. 00:36:03.109 [2024-11-02 14:51:54.898931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.109 [2024-11-02 14:51:54.898956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.109 qpair failed and we were unable to recover it. 
00:36:03.109 [2024-11-02 14:51:54.899106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.109 [2024-11-02 14:51:54.899136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.109 qpair failed and we were unable to recover it. 00:36:03.109 [2024-11-02 14:51:54.899286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.109 [2024-11-02 14:51:54.899313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.109 qpair failed and we were unable to recover it. 00:36:03.109 [2024-11-02 14:51:54.899465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.109 [2024-11-02 14:51:54.899491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.109 qpair failed and we were unable to recover it. 00:36:03.109 [2024-11-02 14:51:54.899669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.109 [2024-11-02 14:51:54.899695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.109 qpair failed and we were unable to recover it. 00:36:03.109 [2024-11-02 14:51:54.899818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.109 [2024-11-02 14:51:54.899845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.109 qpair failed and we were unable to recover it. 00:36:03.109 [2024-11-02 14:51:54.899992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.109 [2024-11-02 14:51:54.900018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.109 qpair failed and we were unable to recover it. 00:36:03.109 [2024-11-02 14:51:54.900147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.109 [2024-11-02 14:51:54.900174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.109 qpair failed and we were unable to recover it. 00:36:03.109 [2024-11-02 14:51:54.900324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.109 [2024-11-02 14:51:54.900350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.109 qpair failed and we were unable to recover it. 00:36:03.109 [2024-11-02 14:51:54.900487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.109 [2024-11-02 14:51:54.900513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.109 qpair failed and we were unable to recover it. 00:36:03.109 [2024-11-02 14:51:54.900688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.109 [2024-11-02 14:51:54.900714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.109 qpair failed and we were unable to recover it. 
00:36:03.109 [2024-11-02 14:51:54.900838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.109 [2024-11-02 14:51:54.900864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.109 qpair failed and we were unable to recover it. 00:36:03.109 [2024-11-02 14:51:54.901014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.109 [2024-11-02 14:51:54.901040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.109 qpair failed and we were unable to recover it. 00:36:03.109 [2024-11-02 14:51:54.901192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.109 [2024-11-02 14:51:54.901218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.109 qpair failed and we were unable to recover it. 00:36:03.109 [2024-11-02 14:51:54.901362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.109 [2024-11-02 14:51:54.901388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.109 qpair failed and we were unable to recover it. 00:36:03.109 [2024-11-02 14:51:54.901540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.109 [2024-11-02 14:51:54.901565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.109 qpair failed and we were unable to recover it. 00:36:03.109 [2024-11-02 14:51:54.901724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.109 [2024-11-02 14:51:54.901749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.109 qpair failed and we were unable to recover it. 00:36:03.109 [2024-11-02 14:51:54.901930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.109 [2024-11-02 14:51:54.901957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.109 qpair failed and we were unable to recover it. 00:36:03.109 [2024-11-02 14:51:54.902106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.109 [2024-11-02 14:51:54.902132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.109 qpair failed and we were unable to recover it. 00:36:03.109 [2024-11-02 14:51:54.902284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.109 [2024-11-02 14:51:54.902311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.109 qpair failed and we were unable to recover it. 00:36:03.109 [2024-11-02 14:51:54.902433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.109 [2024-11-02 14:51:54.902459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.109 qpair failed and we were unable to recover it. 
00:36:03.109 [2024-11-02 14:51:54.902585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.109 [2024-11-02 14:51:54.902611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.109 qpair failed and we were unable to recover it. 00:36:03.109 [2024-11-02 14:51:54.902733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.109 [2024-11-02 14:51:54.902760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.109 qpair failed and we were unable to recover it. 00:36:03.109 [2024-11-02 14:51:54.902912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.109 [2024-11-02 14:51:54.902938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.109 qpair failed and we were unable to recover it. 00:36:03.109 [2024-11-02 14:51:54.903053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.109 [2024-11-02 14:51:54.903078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.109 qpair failed and we were unable to recover it. 00:36:03.109 [2024-11-02 14:51:54.903231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.109 [2024-11-02 14:51:54.903266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.109 qpair failed and we were unable to recover it. 00:36:03.110 [2024-11-02 14:51:54.903408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.110 [2024-11-02 14:51:54.903434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.110 qpair failed and we were unable to recover it. 00:36:03.110 [2024-11-02 14:51:54.903549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.110 [2024-11-02 14:51:54.903575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.110 qpair failed and we were unable to recover it. 00:36:03.110 [2024-11-02 14:51:54.903697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.110 [2024-11-02 14:51:54.903723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.110 qpair failed and we were unable to recover it. 00:36:03.110 [2024-11-02 14:51:54.903882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.110 [2024-11-02 14:51:54.903908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.110 qpair failed and we were unable to recover it. 00:36:03.110 [2024-11-02 14:51:54.904081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.110 [2024-11-02 14:51:54.904107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.110 qpair failed and we were unable to recover it. 
00:36:03.110 [2024-11-02 14:51:54.904263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.110 [2024-11-02 14:51:54.904289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.110 qpair failed and we were unable to recover it. 00:36:03.110 [2024-11-02 14:51:54.904435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.110 [2024-11-02 14:51:54.904460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.110 qpair failed and we were unable to recover it. 00:36:03.110 [2024-11-02 14:51:54.904611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.110 [2024-11-02 14:51:54.904637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.110 qpair failed and we were unable to recover it. 00:36:03.110 [2024-11-02 14:51:54.904813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.110 [2024-11-02 14:51:54.904839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.110 qpair failed and we were unable to recover it. 00:36:03.110 [2024-11-02 14:51:54.904993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.110 [2024-11-02 14:51:54.905018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.110 qpair failed and we were unable to recover it. 00:36:03.110 [2024-11-02 14:51:54.905189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.110 [2024-11-02 14:51:54.905215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.110 qpair failed and we were unable to recover it. 00:36:03.110 [2024-11-02 14:51:54.905385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.110 [2024-11-02 14:51:54.905412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.110 qpair failed and we were unable to recover it. 00:36:03.110 [2024-11-02 14:51:54.905535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.110 [2024-11-02 14:51:54.905561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.110 qpair failed and we were unable to recover it. 00:36:03.110 [2024-11-02 14:51:54.905703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.110 [2024-11-02 14:51:54.905729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.110 qpair failed and we were unable to recover it. 00:36:03.110 [2024-11-02 14:51:54.905899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.110 [2024-11-02 14:51:54.905925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.110 qpair failed and we were unable to recover it. 
00:36:03.110 [2024-11-02 14:51:54.906080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.110 [2024-11-02 14:51:54.906107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.110 qpair failed and we were unable to recover it. 00:36:03.110 [2024-11-02 14:51:54.906265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.110 [2024-11-02 14:51:54.906296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.110 qpair failed and we were unable to recover it. 00:36:03.110 [2024-11-02 14:51:54.906446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.110 [2024-11-02 14:51:54.906471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.110 qpair failed and we were unable to recover it. 00:36:03.110 [2024-11-02 14:51:54.906603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.110 [2024-11-02 14:51:54.906629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.110 qpair failed and we were unable to recover it. 00:36:03.110 [2024-11-02 14:51:54.906783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.110 [2024-11-02 14:51:54.906809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.110 qpair failed and we were unable to recover it. 00:36:03.110 [2024-11-02 14:51:54.906961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.110 [2024-11-02 14:51:54.906986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.110 qpair failed and we were unable to recover it. 00:36:03.110 [2024-11-02 14:51:54.907100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.110 [2024-11-02 14:51:54.907126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.110 qpair failed and we were unable to recover it. 00:36:03.110 [2024-11-02 14:51:54.907250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.110 [2024-11-02 14:51:54.907284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.110 qpair failed and we were unable to recover it. 00:36:03.110 [2024-11-02 14:51:54.907433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.110 [2024-11-02 14:51:54.907459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.110 qpair failed and we were unable to recover it. 00:36:03.110 [2024-11-02 14:51:54.907579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.110 [2024-11-02 14:51:54.907605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.110 qpair failed and we were unable to recover it. 
00:36:03.110 [2024-11-02 14:51:54.907748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.110 [2024-11-02 14:51:54.907774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.110 qpair failed and we were unable to recover it. 00:36:03.110 [2024-11-02 14:51:54.907893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.110 [2024-11-02 14:51:54.907918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.110 qpair failed and we were unable to recover it. 00:36:03.110 [2024-11-02 14:51:54.908064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.110 [2024-11-02 14:51:54.908090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.110 qpair failed and we were unable to recover it. 00:36:03.110 [2024-11-02 14:51:54.908237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.110 [2024-11-02 14:51:54.908271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.110 qpair failed and we were unable to recover it. 00:36:03.110 [2024-11-02 14:51:54.908416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.110 [2024-11-02 14:51:54.908443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.110 qpair failed and we were unable to recover it. 00:36:03.110 [2024-11-02 14:51:54.908561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.110 [2024-11-02 14:51:54.908588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.110 qpair failed and we were unable to recover it. 00:36:03.110 [2024-11-02 14:51:54.908721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.110 [2024-11-02 14:51:54.908747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.110 qpair failed and we were unable to recover it. 00:36:03.110 [2024-11-02 14:51:54.908921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.110 [2024-11-02 14:51:54.908946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.110 qpair failed and we were unable to recover it. 00:36:03.110 [2024-11-02 14:51:54.909093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.110 [2024-11-02 14:51:54.909119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.110 qpair failed and we were unable to recover it. 00:36:03.110 [2024-11-02 14:51:54.909268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.110 [2024-11-02 14:51:54.909294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.110 qpair failed and we were unable to recover it. 
00:36:03.110 [2024-11-02 14:51:54.909444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.110 [2024-11-02 14:51:54.909471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.110 qpair failed and we were unable to recover it. 00:36:03.110 [2024-11-02 14:51:54.909621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-11-02 14:51:54.909648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.111 qpair failed and we were unable to recover it. 00:36:03.111 [2024-11-02 14:51:54.909805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-11-02 14:51:54.909831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.111 qpair failed and we were unable to recover it. 00:36:03.111 [2024-11-02 14:51:54.909977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-11-02 14:51:54.910003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.111 qpair failed and we were unable to recover it. 00:36:03.111 [2024-11-02 14:51:54.910182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-11-02 14:51:54.910208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.111 qpair failed and we were unable to recover it. 00:36:03.111 [2024-11-02 14:51:54.910355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-11-02 14:51:54.910382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.111 qpair failed and we were unable to recover it. 00:36:03.111 [2024-11-02 14:51:54.910508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-11-02 14:51:54.910533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.111 qpair failed and we were unable to recover it. 00:36:03.111 [2024-11-02 14:51:54.910659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-11-02 14:51:54.910684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.111 qpair failed and we were unable to recover it. 00:36:03.111 [2024-11-02 14:51:54.910859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-11-02 14:51:54.910885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.111 qpair failed and we were unable to recover it. 00:36:03.111 [2024-11-02 14:51:54.911073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-11-02 14:51:54.911099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.111 qpair failed and we were unable to recover it. 
00:36:03.111 [2024-11-02 14:51:54.911224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-11-02 14:51:54.911250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.111 qpair failed and we were unable to recover it. 00:36:03.111 [2024-11-02 14:51:54.911410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-11-02 14:51:54.911437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.111 qpair failed and we were unable to recover it. 00:36:03.111 [2024-11-02 14:51:54.911616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-11-02 14:51:54.911642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.111 qpair failed and we were unable to recover it. 00:36:03.111 [2024-11-02 14:51:54.911816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-11-02 14:51:54.911842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.111 qpair failed and we were unable to recover it. 00:36:03.111 [2024-11-02 14:51:54.911988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-11-02 14:51:54.912013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.111 qpair failed and we were unable to recover it. 00:36:03.111 [2024-11-02 14:51:54.912133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-11-02 14:51:54.912158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.111 qpair failed and we were unable to recover it. 00:36:03.111 [2024-11-02 14:51:54.912289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-11-02 14:51:54.912316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.111 qpair failed and we were unable to recover it. 00:36:03.111 [2024-11-02 14:51:54.912472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-11-02 14:51:54.912498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.111 qpair failed and we were unable to recover it. 00:36:03.111 [2024-11-02 14:51:54.912645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-11-02 14:51:54.912672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.111 qpair failed and we were unable to recover it. 00:36:03.111 [2024-11-02 14:51:54.912800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-11-02 14:51:54.912826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.111 qpair failed and we were unable to recover it. 
00:36:03.111 [2024-11-02 14:51:54.913013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-11-02 14:51:54.913039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.111 qpair failed and we were unable to recover it. 00:36:03.111 [2024-11-02 14:51:54.913185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-11-02 14:51:54.913211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.111 qpair failed and we were unable to recover it. 00:36:03.111 [2024-11-02 14:51:54.913368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-11-02 14:51:54.913395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.111 qpair failed and we were unable to recover it. 00:36:03.111 [2024-11-02 14:51:54.913570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-11-02 14:51:54.913595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.111 qpair failed and we were unable to recover it. 00:36:03.111 [2024-11-02 14:51:54.913746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-11-02 14:51:54.913772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.111 qpair failed and we were unable to recover it. 00:36:03.111 [2024-11-02 14:51:54.913920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-11-02 14:51:54.913946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.111 qpair failed and we were unable to recover it. 00:36:03.111 [2024-11-02 14:51:54.914071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-11-02 14:51:54.914097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.111 qpair failed and we were unable to recover it. 00:36:03.111 [2024-11-02 14:51:54.914248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-11-02 14:51:54.914282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.111 qpair failed and we were unable to recover it. 00:36:03.111 [2024-11-02 14:51:54.914405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-11-02 14:51:54.914431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.111 qpair failed and we were unable to recover it. 00:36:03.111 [2024-11-02 14:51:54.914583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-11-02 14:51:54.914608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.111 qpair failed and we were unable to recover it. 
00:36:03.111 [2024-11-02 14:51:54.914735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-11-02 14:51:54.914763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.111 qpair failed and we were unable to recover it. 00:36:03.112 [2024-11-02 14:51:54.914910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-11-02 14:51:54.914935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.112 qpair failed and we were unable to recover it. 00:36:03.112 [2024-11-02 14:51:54.915061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-11-02 14:51:54.915087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.112 qpair failed and we were unable to recover it. 00:36:03.112 [2024-11-02 14:51:54.915204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-11-02 14:51:54.915230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.112 qpair failed and we were unable to recover it. 00:36:03.112 [2024-11-02 14:51:54.915379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-11-02 14:51:54.915405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.112 qpair failed and we were unable to recover it. 00:36:03.112 [2024-11-02 14:51:54.915530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-11-02 14:51:54.915555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.112 qpair failed and we were unable to recover it. 00:36:03.112 [2024-11-02 14:51:54.915732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-11-02 14:51:54.915757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.112 qpair failed and we were unable to recover it. 00:36:03.112 [2024-11-02 14:51:54.915931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-11-02 14:51:54.915956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.112 qpair failed and we were unable to recover it. 00:36:03.112 [2024-11-02 14:51:54.916070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-11-02 14:51:54.916095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.112 qpair failed and we were unable to recover it. 00:36:03.112 [2024-11-02 14:51:54.916247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-11-02 14:51:54.916282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.112 qpair failed and we were unable to recover it. 
00:36:03.112 [2024-11-02 14:51:54.916414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-11-02 14:51:54.916440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.112 qpair failed and we were unable to recover it. 00:36:03.112 [2024-11-02 14:51:54.916587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-11-02 14:51:54.916612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.112 qpair failed and we were unable to recover it. 00:36:03.112 [2024-11-02 14:51:54.916738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-11-02 14:51:54.916764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.112 qpair failed and we were unable to recover it. 00:36:03.112 [2024-11-02 14:51:54.916915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-11-02 14:51:54.916940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.112 qpair failed and we were unable to recover it. 00:36:03.112 [2024-11-02 14:51:54.917065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-11-02 14:51:54.917091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.112 qpair failed and we were unable to recover it. 00:36:03.112 [2024-11-02 14:51:54.917239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-11-02 14:51:54.917272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.112 qpair failed and we were unable to recover it. 00:36:03.112 [2024-11-02 14:51:54.917453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-11-02 14:51:54.917479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.112 qpair failed and we were unable to recover it. 00:36:03.112 [2024-11-02 14:51:54.917603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-11-02 14:51:54.917629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.112 qpair failed and we were unable to recover it. 00:36:03.112 [2024-11-02 14:51:54.917799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-11-02 14:51:54.917824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.112 qpair failed and we were unable to recover it. 00:36:03.112 [2024-11-02 14:51:54.917948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-11-02 14:51:54.917979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.112 qpair failed and we were unable to recover it. 
00:36:03.112 [2024-11-02 14:51:54.918131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-11-02 14:51:54.918156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.112 qpair failed and we were unable to recover it. 00:36:03.112 [2024-11-02 14:51:54.918301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-11-02 14:51:54.918328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.112 qpair failed and we were unable to recover it. 00:36:03.112 [2024-11-02 14:51:54.918458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-11-02 14:51:54.918485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.112 qpair failed and we were unable to recover it. 00:36:03.112 [2024-11-02 14:51:54.918657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-11-02 14:51:54.918683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.112 qpair failed and we were unable to recover it. 00:36:03.112 [2024-11-02 14:51:54.918864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-11-02 14:51:54.918890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.112 qpair failed and we were unable to recover it. 00:36:03.112 [2024-11-02 14:51:54.918998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-11-02 14:51:54.919024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.112 qpair failed and we were unable to recover it. 00:36:03.112 [2024-11-02 14:51:54.919149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-11-02 14:51:54.919175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.112 qpair failed and we were unable to recover it. 00:36:03.112 [2024-11-02 14:51:54.919324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-11-02 14:51:54.919350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.112 qpair failed and we were unable to recover it. 00:36:03.112 [2024-11-02 14:51:54.919469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-11-02 14:51:54.919495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.112 qpair failed and we were unable to recover it. 00:36:03.112 [2024-11-02 14:51:54.919668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-11-02 14:51:54.919694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.112 qpair failed and we were unable to recover it. 
00:36:03.112 [2024-11-02 14:51:54.919822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-11-02 14:51:54.919848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.113 qpair failed and we were unable to recover it. 00:36:03.113 [2024-11-02 14:51:54.920020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-11-02 14:51:54.920046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.113 qpair failed and we were unable to recover it. 00:36:03.113 [2024-11-02 14:51:54.920199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-11-02 14:51:54.920225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.113 qpair failed and we were unable to recover it. 00:36:03.113 [2024-11-02 14:51:54.920394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-11-02 14:51:54.920420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.113 qpair failed and we were unable to recover it. 00:36:03.113 [2024-11-02 14:51:54.920599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-11-02 14:51:54.920625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.113 qpair failed and we were unable to recover it. 00:36:03.113 [2024-11-02 14:51:54.920778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-11-02 14:51:54.920803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.113 qpair failed and we were unable to recover it. 00:36:03.113 [2024-11-02 14:51:54.920962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-11-02 14:51:54.920988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.113 qpair failed and we were unable to recover it. 00:36:03.113 [2024-11-02 14:51:54.921139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-11-02 14:51:54.921165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.113 qpair failed and we were unable to recover it. 00:36:03.113 [2024-11-02 14:51:54.921336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-11-02 14:51:54.921362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.113 qpair failed and we were unable to recover it. 00:36:03.113 [2024-11-02 14:51:54.921479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-11-02 14:51:54.921504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.113 qpair failed and we were unable to recover it. 
00:36:03.113 [2024-11-02 14:51:54.921681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-11-02 14:51:54.921706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.113 qpair failed and we were unable to recover it. 00:36:03.113 [2024-11-02 14:51:54.921855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-11-02 14:51:54.921882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.113 qpair failed and we were unable to recover it. 00:36:03.113 [2024-11-02 14:51:54.922059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-11-02 14:51:54.922085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.113 qpair failed and we were unable to recover it. 00:36:03.113 [2024-11-02 14:51:54.922195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-11-02 14:51:54.922221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.113 qpair failed and we were unable to recover it. 00:36:03.113 [2024-11-02 14:51:54.922343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-11-02 14:51:54.922370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.113 qpair failed and we were unable to recover it. 00:36:03.113 [2024-11-02 14:51:54.922520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-11-02 14:51:54.922545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.113 qpair failed and we were unable to recover it. 00:36:03.113 [2024-11-02 14:51:54.922702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-11-02 14:51:54.922728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.113 qpair failed and we were unable to recover it. 00:36:03.113 [2024-11-02 14:51:54.922880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-11-02 14:51:54.922905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.113 qpair failed and we were unable to recover it. 00:36:03.113 [2024-11-02 14:51:54.923057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-11-02 14:51:54.923083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.113 qpair failed and we were unable to recover it. 00:36:03.113 [2024-11-02 14:51:54.923231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-11-02 14:51:54.923273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.113 qpair failed and we were unable to recover it. 
00:36:03.113 [2024-11-02 14:51:54.923431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-11-02 14:51:54.923457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.113 qpair failed and we were unable to recover it. 00:36:03.113 [2024-11-02 14:51:54.923612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-11-02 14:51:54.923638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.114 qpair failed and we were unable to recover it. 00:36:03.114 [2024-11-02 14:51:54.923789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.114 [2024-11-02 14:51:54.923814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.114 qpair failed and we were unable to recover it. 00:36:03.114 [2024-11-02 14:51:54.923969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.114 [2024-11-02 14:51:54.923995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.114 qpair failed and we were unable to recover it. 00:36:03.114 [2024-11-02 14:51:54.924144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.114 [2024-11-02 14:51:54.924171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.114 qpair failed and we were unable to recover it. 00:36:03.114 [2024-11-02 14:51:54.924288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.114 [2024-11-02 14:51:54.924314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.114 qpair failed and we were unable to recover it. 00:36:03.114 [2024-11-02 14:51:54.924435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.114 [2024-11-02 14:51:54.924461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.114 qpair failed and we were unable to recover it. 00:36:03.114 [2024-11-02 14:51:54.924608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.114 [2024-11-02 14:51:54.924634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.114 qpair failed and we were unable to recover it. 00:36:03.114 [2024-11-02 14:51:54.924784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.114 [2024-11-02 14:51:54.924810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.114 qpair failed and we were unable to recover it. 00:36:03.114 [2024-11-02 14:51:54.924961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.114 [2024-11-02 14:51:54.924986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.114 qpair failed and we were unable to recover it. 
00:36:03.114 [2024-11-02 14:51:54.925163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.114 [2024-11-02 14:51:54.925193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.114 qpair failed and we were unable to recover it. 00:36:03.114 [2024-11-02 14:51:54.925316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.114 [2024-11-02 14:51:54.925343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.114 qpair failed and we were unable to recover it. 00:36:03.114 [2024-11-02 14:51:54.925468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.114 [2024-11-02 14:51:54.925494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.114 qpair failed and we were unable to recover it. 00:36:03.114 [2024-11-02 14:51:54.925644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.114 [2024-11-02 14:51:54.925670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.114 qpair failed and we were unable to recover it. 00:36:03.114 [2024-11-02 14:51:54.925840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.114 [2024-11-02 14:51:54.925866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.114 qpair failed and we were unable to recover it. 00:36:03.114 [2024-11-02 14:51:54.926044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.114 [2024-11-02 14:51:54.926069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.114 qpair failed and we were unable to recover it. 00:36:03.114 [2024-11-02 14:51:54.926215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.114 [2024-11-02 14:51:54.926241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.114 qpair failed and we were unable to recover it. 00:36:03.114 [2024-11-02 14:51:54.926408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.114 [2024-11-02 14:51:54.926435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.114 qpair failed and we were unable to recover it. 00:36:03.114 [2024-11-02 14:51:54.926585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.114 [2024-11-02 14:51:54.926611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.114 qpair failed and we were unable to recover it. 00:36:03.114 [2024-11-02 14:51:54.926757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.114 [2024-11-02 14:51:54.926782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.114 qpair failed and we were unable to recover it. 
00:36:03.114 [2024-11-02 14:51:54.926922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.114 [2024-11-02 14:51:54.926948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.114 qpair failed and we were unable to recover it. 00:36:03.114 [2024-11-02 14:51:54.927093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.114 [2024-11-02 14:51:54.927118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.114 qpair failed and we were unable to recover it. 00:36:03.114 [2024-11-02 14:51:54.927233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.114 [2024-11-02 14:51:54.927266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.114 qpair failed and we were unable to recover it. 00:36:03.114 [2024-11-02 14:51:54.927417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.114 [2024-11-02 14:51:54.927444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.114 qpair failed and we were unable to recover it. 00:36:03.114 [2024-11-02 14:51:54.927628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.114 [2024-11-02 14:51:54.927654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.114 qpair failed and we were unable to recover it. 00:36:03.115 [2024-11-02 14:51:54.927827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.115 [2024-11-02 14:51:54.927852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.115 qpair failed and we were unable to recover it. 00:36:03.115 [2024-11-02 14:51:54.927975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.115 [2024-11-02 14:51:54.928001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.115 qpair failed and we were unable to recover it. 00:36:03.115 [2024-11-02 14:51:54.928149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.115 [2024-11-02 14:51:54.928175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.115 qpair failed and we were unable to recover it. 00:36:03.115 [2024-11-02 14:51:54.928332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.115 [2024-11-02 14:51:54.928358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.115 qpair failed and we were unable to recover it. 00:36:03.115 [2024-11-02 14:51:54.928505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.115 [2024-11-02 14:51:54.928530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.115 qpair failed and we were unable to recover it. 
00:36:03.115 [2024-11-02 14:51:54.928681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.115 [2024-11-02 14:51:54.928707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.115 qpair failed and we were unable to recover it. 00:36:03.115 [2024-11-02 14:51:54.928851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.115 [2024-11-02 14:51:54.928877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.115 qpair failed and we were unable to recover it. 00:36:03.115 [2024-11-02 14:51:54.929028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.115 [2024-11-02 14:51:54.929055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.115 qpair failed and we were unable to recover it. 00:36:03.115 [2024-11-02 14:51:54.929205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.115 [2024-11-02 14:51:54.929232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.115 qpair failed and we were unable to recover it. 00:36:03.115 [2024-11-02 14:51:54.929418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.115 [2024-11-02 14:51:54.929444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.115 qpair failed and we were unable to recover it. 00:36:03.115 [2024-11-02 14:51:54.929565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.115 [2024-11-02 14:51:54.929591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.115 qpair failed and we were unable to recover it. 00:36:03.115 [2024-11-02 14:51:54.929770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.115 [2024-11-02 14:51:54.929796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.115 qpair failed and we were unable to recover it. 00:36:03.115 [2024-11-02 14:51:54.929915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.115 [2024-11-02 14:51:54.929945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.115 qpair failed and we were unable to recover it. 00:36:03.115 [2024-11-02 14:51:54.930073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.115 [2024-11-02 14:51:54.930098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.115 qpair failed and we were unable to recover it. 00:36:03.115 [2024-11-02 14:51:54.930223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.115 [2024-11-02 14:51:54.930248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.115 qpair failed and we were unable to recover it. 
00:36:03.115 [2024-11-02 14:51:54.930437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.115 [2024-11-02 14:51:54.930463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.115 qpair failed and we were unable to recover it. 00:36:03.115 [2024-11-02 14:51:54.930583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.115 [2024-11-02 14:51:54.930609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.115 qpair failed and we were unable to recover it. 00:36:03.115 [2024-11-02 14:51:54.930727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.115 [2024-11-02 14:51:54.930752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.115 qpair failed and we were unable to recover it. 00:36:03.115 [2024-11-02 14:51:54.930898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.115 [2024-11-02 14:51:54.930923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.115 qpair failed and we were unable to recover it. 00:36:03.115 [2024-11-02 14:51:54.931075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.115 [2024-11-02 14:51:54.931102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.115 qpair failed and we were unable to recover it. 00:36:03.115 [2024-11-02 14:51:54.931251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.115 [2024-11-02 14:51:54.931284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.115 qpair failed and we were unable to recover it. 00:36:03.115 [2024-11-02 14:51:54.931407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.115 [2024-11-02 14:51:54.931432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.115 qpair failed and we were unable to recover it. 00:36:03.115 [2024-11-02 14:51:54.931606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.115 [2024-11-02 14:51:54.931632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.115 qpair failed and we were unable to recover it. 00:36:03.115 [2024-11-02 14:51:54.931756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.115 [2024-11-02 14:51:54.931782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.115 qpair failed and we were unable to recover it. 00:36:03.115 [2024-11-02 14:51:54.931933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.115 [2024-11-02 14:51:54.931958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.115 qpair failed and we were unable to recover it. 
00:36:03.115 [2024-11-02 14:51:54.932110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.115 [2024-11-02 14:51:54.932137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.115 qpair failed and we were unable to recover it. 00:36:03.115 [2024-11-02 14:51:54.932286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.116 [2024-11-02 14:51:54.932313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.116 qpair failed and we were unable to recover it. 00:36:03.116 [2024-11-02 14:51:54.932463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.116 [2024-11-02 14:51:54.932489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.116 qpair failed and we were unable to recover it. 00:36:03.116 [2024-11-02 14:51:54.932608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.116 [2024-11-02 14:51:54.932633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.116 qpair failed and we were unable to recover it. 00:36:03.116 [2024-11-02 14:51:54.932777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.116 [2024-11-02 14:51:54.932803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.116 qpair failed and we were unable to recover it. 00:36:03.116 [2024-11-02 14:51:54.932941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.116 [2024-11-02 14:51:54.932967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.116 qpair failed and we were unable to recover it. 00:36:03.116 [2024-11-02 14:51:54.933126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.116 [2024-11-02 14:51:54.933151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.116 qpair failed and we were unable to recover it. 00:36:03.116 [2024-11-02 14:51:54.933296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.116 [2024-11-02 14:51:54.933323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.116 qpair failed and we were unable to recover it. 00:36:03.116 [2024-11-02 14:51:54.933434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.116 [2024-11-02 14:51:54.933461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.116 qpair failed and we were unable to recover it. 00:36:03.116 [2024-11-02 14:51:54.933635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.116 [2024-11-02 14:51:54.933660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.116 qpair failed and we were unable to recover it. 
00:36:03.116 [2024-11-02 14:51:54.933814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.116 [2024-11-02 14:51:54.933839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.116 qpair failed and we were unable to recover it. 00:36:03.116 [2024-11-02 14:51:54.933961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.116 [2024-11-02 14:51:54.933988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.116 qpair failed and we were unable to recover it. 00:36:03.116 [2024-11-02 14:51:54.934164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.116 [2024-11-02 14:51:54.934190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.116 qpair failed and we were unable to recover it. 00:36:03.116 [2024-11-02 14:51:54.934341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.116 [2024-11-02 14:51:54.934367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.116 qpair failed and we were unable to recover it. 00:36:03.116 [2024-11-02 14:51:54.934546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.116 [2024-11-02 14:51:54.934572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.116 qpair failed and we were unable to recover it. 00:36:03.116 [2024-11-02 14:51:54.934710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.116 [2024-11-02 14:51:54.934736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.116 qpair failed and we were unable to recover it. 00:36:03.116 [2024-11-02 14:51:54.934910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.116 [2024-11-02 14:51:54.934936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.116 qpair failed and we were unable to recover it. 00:36:03.116 [2024-11-02 14:51:54.935058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.116 [2024-11-02 14:51:54.935083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.116 qpair failed and we were unable to recover it. 00:36:03.116 [2024-11-02 14:51:54.935201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.116 [2024-11-02 14:51:54.935229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.116 qpair failed and we were unable to recover it. 00:36:03.116 [2024-11-02 14:51:54.935355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.116 [2024-11-02 14:51:54.935381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.116 qpair failed and we were unable to recover it. 
00:36:03.116 [2024-11-02 14:51:54.935527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.116 [2024-11-02 14:51:54.935553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.116 qpair failed and we were unable to recover it. 00:36:03.116 [2024-11-02 14:51:54.935701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.116 [2024-11-02 14:51:54.935727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.116 qpair failed and we were unable to recover it. 00:36:03.116 [2024-11-02 14:51:54.935852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.116 [2024-11-02 14:51:54.935878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.116 qpair failed and we were unable to recover it. 00:36:03.116 [2024-11-02 14:51:54.936053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.116 [2024-11-02 14:51:54.936079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.116 qpair failed and we were unable to recover it. 00:36:03.116 [2024-11-02 14:51:54.936262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.116 [2024-11-02 14:51:54.936289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.116 qpair failed and we were unable to recover it. 00:36:03.116 [2024-11-02 14:51:54.936408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.116 [2024-11-02 14:51:54.936433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.116 qpair failed and we were unable to recover it. 00:36:03.116 [2024-11-02 14:51:54.936562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.116 [2024-11-02 14:51:54.936587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.116 qpair failed and we were unable to recover it. 00:36:03.116 [2024-11-02 14:51:54.936701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.116 [2024-11-02 14:51:54.936727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.116 qpair failed and we were unable to recover it. 00:36:03.116 [2024-11-02 14:51:54.936893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.117 [2024-11-02 14:51:54.936923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.117 qpair failed and we were unable to recover it. 00:36:03.117 [2024-11-02 14:51:54.937076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.117 [2024-11-02 14:51:54.937102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.117 qpair failed and we were unable to recover it. 
00:36:03.117 [2024-11-02 14:51:54.937253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.117 [2024-11-02 14:51:54.937286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.117 qpair failed and we were unable to recover it. 00:36:03.117 [2024-11-02 14:51:54.937435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.117 [2024-11-02 14:51:54.937461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.117 qpair failed and we were unable to recover it. 00:36:03.117 [2024-11-02 14:51:54.937610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.117 [2024-11-02 14:51:54.937635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.117 qpair failed and we were unable to recover it. 00:36:03.117 [2024-11-02 14:51:54.937813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.117 [2024-11-02 14:51:54.937839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.117 qpair failed and we were unable to recover it. 00:36:03.117 [2024-11-02 14:51:54.937990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.117 [2024-11-02 14:51:54.938016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.117 qpair failed and we were unable to recover it. 00:36:03.117 [2024-11-02 14:51:54.938138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.117 [2024-11-02 14:51:54.938164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.117 qpair failed and we were unable to recover it. 00:36:03.117 [2024-11-02 14:51:54.938295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.117 [2024-11-02 14:51:54.938322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.117 qpair failed and we were unable to recover it. 00:36:03.117 [2024-11-02 14:51:54.938447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.117 [2024-11-02 14:51:54.938474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.117 qpair failed and we were unable to recover it. 00:36:03.117 [2024-11-02 14:51:54.938623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.117 [2024-11-02 14:51:54.938649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.117 qpair failed and we were unable to recover it. 00:36:03.117 [2024-11-02 14:51:54.938801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.117 [2024-11-02 14:51:54.938827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.117 qpair failed and we were unable to recover it. 
00:36:03.117 [2024-11-02 14:51:54.938952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.117 [2024-11-02 14:51:54.938977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.117 qpair failed and we were unable to recover it. 00:36:03.117 [2024-11-02 14:51:54.939153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.117 [2024-11-02 14:51:54.939178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.117 qpair failed and we were unable to recover it. 00:36:03.117 [2024-11-02 14:51:54.939311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.117 [2024-11-02 14:51:54.939338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.117 qpair failed and we were unable to recover it. 00:36:03.117 [2024-11-02 14:51:54.939454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.117 [2024-11-02 14:51:54.939480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.117 qpair failed and we were unable to recover it. 00:36:03.117 [2024-11-02 14:51:54.939606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.117 [2024-11-02 14:51:54.939631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.117 qpair failed and we were unable to recover it. 00:36:03.117 [2024-11-02 14:51:54.939777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.117 [2024-11-02 14:51:54.939802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.117 qpair failed and we were unable to recover it. 00:36:03.117 [2024-11-02 14:51:54.939950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.117 [2024-11-02 14:51:54.939976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.117 qpair failed and we were unable to recover it. 00:36:03.117 [2024-11-02 14:51:54.940098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.117 [2024-11-02 14:51:54.940124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.117 qpair failed and we were unable to recover it. 00:36:03.117 [2024-11-02 14:51:54.940268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.117 [2024-11-02 14:51:54.940294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.117 qpair failed and we were unable to recover it. 00:36:03.117 [2024-11-02 14:51:54.940442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.117 [2024-11-02 14:51:54.940467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.117 qpair failed and we were unable to recover it. 
00:36:03.117 [2024-11-02 14:51:54.940594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.117 [2024-11-02 14:51:54.940620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.117 qpair failed and we were unable to recover it. 00:36:03.117 [2024-11-02 14:51:54.940793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.117 [2024-11-02 14:51:54.940819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.117 qpair failed and we were unable to recover it. 00:36:03.117 [2024-11-02 14:51:54.940964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.117 [2024-11-02 14:51:54.940990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.117 qpair failed and we were unable to recover it. 00:36:03.117 [2024-11-02 14:51:54.941136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.117 [2024-11-02 14:51:54.941162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.117 qpair failed and we were unable to recover it. 00:36:03.117 [2024-11-02 14:51:54.941289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.117 [2024-11-02 14:51:54.941316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.117 qpair failed and we were unable to recover it. 00:36:03.118 [2024-11-02 14:51:54.941493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.118 [2024-11-02 14:51:54.941523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.118 qpair failed and we were unable to recover it. 00:36:03.118 [2024-11-02 14:51:54.941674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.118 [2024-11-02 14:51:54.941700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.118 qpair failed and we were unable to recover it. 00:36:03.118 [2024-11-02 14:51:54.941848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.118 [2024-11-02 14:51:54.941874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.118 qpair failed and we were unable to recover it. 00:36:03.118 [2024-11-02 14:51:54.942025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.118 [2024-11-02 14:51:54.942052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.118 qpair failed and we were unable to recover it. 00:36:03.118 [2024-11-02 14:51:54.942200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.118 [2024-11-02 14:51:54.942225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.118 qpair failed and we were unable to recover it. 
00:36:03.118 [2024-11-02 14:51:54.942353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.118 [2024-11-02 14:51:54.942380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.118 qpair failed and we were unable to recover it. 00:36:03.118 [2024-11-02 14:51:54.942555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.118 [2024-11-02 14:51:54.942581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.118 qpair failed and we were unable to recover it. 00:36:03.118 [2024-11-02 14:51:54.942704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.118 [2024-11-02 14:51:54.942730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.118 qpair failed and we were unable to recover it. 00:36:03.118 [2024-11-02 14:51:54.942875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.118 [2024-11-02 14:51:54.942901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.118 qpair failed and we were unable to recover it. 00:36:03.118 [2024-11-02 14:51:54.943057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.118 [2024-11-02 14:51:54.943082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.118 qpair failed and we were unable to recover it. 00:36:03.118 [2024-11-02 14:51:54.943236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.118 [2024-11-02 14:51:54.943269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.118 qpair failed and we were unable to recover it. 00:36:03.118 [2024-11-02 14:51:54.943402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.118 [2024-11-02 14:51:54.943429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.118 qpair failed and we were unable to recover it. 00:36:03.118 [2024-11-02 14:51:54.943600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.118 [2024-11-02 14:51:54.943626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.118 qpair failed and we were unable to recover it. 00:36:03.118 [2024-11-02 14:51:54.943738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.118 [2024-11-02 14:51:54.943764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.118 qpair failed and we were unable to recover it. 00:36:03.118 [2024-11-02 14:51:54.943941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.118 [2024-11-02 14:51:54.943967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.118 qpair failed and we were unable to recover it. 
00:36:03.118 [2024-11-02 14:51:54.944157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.118 [2024-11-02 14:51:54.944182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.118 qpair failed and we were unable to recover it. 00:36:03.118 [2024-11-02 14:51:54.944296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.118 [2024-11-02 14:51:54.944322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.118 qpair failed and we were unable to recover it. 00:36:03.118 [2024-11-02 14:51:54.944445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.118 [2024-11-02 14:51:54.944470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.118 qpair failed and we were unable to recover it. 00:36:03.118 [2024-11-02 14:51:54.944618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.118 [2024-11-02 14:51:54.944643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.118 qpair failed and we were unable to recover it. 00:36:03.118 [2024-11-02 14:51:54.944790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.118 [2024-11-02 14:51:54.944815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.118 qpair failed and we were unable to recover it. 00:36:03.118 [2024-11-02 14:51:54.944933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.118 [2024-11-02 14:51:54.944958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.118 qpair failed and we were unable to recover it. 00:36:03.119 [2024-11-02 14:51:54.945085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.119 [2024-11-02 14:51:54.945110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.119 qpair failed and we were unable to recover it. 00:36:03.119 [2024-11-02 14:51:54.945284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.119 [2024-11-02 14:51:54.945310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.119 qpair failed and we were unable to recover it. 00:36:03.119 [2024-11-02 14:51:54.945464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.119 [2024-11-02 14:51:54.945489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.119 qpair failed and we were unable to recover it. 00:36:03.119 [2024-11-02 14:51:54.945615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.119 [2024-11-02 14:51:54.945640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.119 qpair failed and we were unable to recover it. 
00:36:03.119 [2024-11-02 14:51:54.945788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.119 [2024-11-02 14:51:54.945813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.119 qpair failed and we were unable to recover it. 00:36:03.119 [2024-11-02 14:51:54.945962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.119 [2024-11-02 14:51:54.945986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.119 qpair failed and we were unable to recover it. 00:36:03.119 [2024-11-02 14:51:54.946113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.119 [2024-11-02 14:51:54.946139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.119 qpair failed and we were unable to recover it. 00:36:03.119 [2024-11-02 14:51:54.946325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.119 [2024-11-02 14:51:54.946351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.119 qpair failed and we were unable to recover it. 00:36:03.119 [2024-11-02 14:51:54.946497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.119 [2024-11-02 14:51:54.946523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.119 qpair failed and we were unable to recover it. 00:36:03.119 [2024-11-02 14:51:54.946698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.119 [2024-11-02 14:51:54.946723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.119 qpair failed and we were unable to recover it. 00:36:03.119 [2024-11-02 14:51:54.946873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.119 [2024-11-02 14:51:54.946898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.119 qpair failed and we were unable to recover it. 00:36:03.119 [2024-11-02 14:51:54.947066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.119 [2024-11-02 14:51:54.947091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.119 qpair failed and we were unable to recover it. 00:36:03.119 [2024-11-02 14:51:54.947216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.119 [2024-11-02 14:51:54.947241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.119 qpair failed and we were unable to recover it. 00:36:03.119 [2024-11-02 14:51:54.947372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.119 [2024-11-02 14:51:54.947397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.119 qpair failed and we were unable to recover it. 
00:36:03.119 [2024-11-02 14:51:54.947548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.119 [2024-11-02 14:51:54.947574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.119 qpair failed and we were unable to recover it. 00:36:03.119 [2024-11-02 14:51:54.947727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.119 [2024-11-02 14:51:54.947752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.119 qpair failed and we were unable to recover it. 00:36:03.119 [2024-11-02 14:51:54.947868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.119 [2024-11-02 14:51:54.947893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.119 qpair failed and we were unable to recover it. 00:36:03.119 [2024-11-02 14:51:54.948024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.119 [2024-11-02 14:51:54.948049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.119 qpair failed and we were unable to recover it. 00:36:03.119 [2024-11-02 14:51:54.948201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.119 [2024-11-02 14:51:54.948226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.119 qpair failed and we were unable to recover it. 00:36:03.119 [2024-11-02 14:51:54.948407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.119 [2024-11-02 14:51:54.948432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.119 qpair failed and we were unable to recover it. 00:36:03.119 [2024-11-02 14:51:54.948611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.119 [2024-11-02 14:51:54.948640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.119 qpair failed and we were unable to recover it. 00:36:03.119 [2024-11-02 14:51:54.948766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.119 [2024-11-02 14:51:54.948792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.119 qpair failed and we were unable to recover it. 00:36:03.119 [2024-11-02 14:51:54.948919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.119 [2024-11-02 14:51:54.948946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.119 qpair failed and we were unable to recover it. 00:36:03.119 [2024-11-02 14:51:54.949101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.119 [2024-11-02 14:51:54.949127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.119 qpair failed and we were unable to recover it. 
00:36:03.119 [2024-11-02 14:51:54.949304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.119 [2024-11-02 14:51:54.949331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.119 qpair failed and we were unable to recover it. 00:36:03.119 [2024-11-02 14:51:54.949456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.119 [2024-11-02 14:51:54.949481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.119 qpair failed and we were unable to recover it. 00:36:03.119 [2024-11-02 14:51:54.949610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.119 [2024-11-02 14:51:54.949636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.119 qpair failed and we were unable to recover it. 00:36:03.119 [2024-11-02 14:51:54.949813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-11-02 14:51:54.949838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.120 qpair failed and we were unable to recover it. 00:36:03.120 [2024-11-02 14:51:54.949968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-11-02 14:51:54.949993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.120 qpair failed and we were unable to recover it. 00:36:03.120 [2024-11-02 14:51:54.950164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-11-02 14:51:54.950189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.120 qpair failed and we were unable to recover it. 00:36:03.120 [2024-11-02 14:51:54.950319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-11-02 14:51:54.950346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.120 qpair failed and we were unable to recover it. 00:36:03.120 [2024-11-02 14:51:54.950474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-11-02 14:51:54.950499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.120 qpair failed and we were unable to recover it. 00:36:03.120 [2024-11-02 14:51:54.950644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-11-02 14:51:54.950669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.120 qpair failed and we were unable to recover it. 00:36:03.120 [2024-11-02 14:51:54.950811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-11-02 14:51:54.950836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.120 qpair failed and we were unable to recover it. 
00:36:03.120 [2024-11-02 14:51:54.950995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-11-02 14:51:54.951020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.120 qpair failed and we were unable to recover it. 00:36:03.120 [2024-11-02 14:51:54.951194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-11-02 14:51:54.951219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.120 qpair failed and we were unable to recover it. 00:36:03.120 [2024-11-02 14:51:54.951360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-11-02 14:51:54.951386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.120 qpair failed and we were unable to recover it. 00:36:03.120 [2024-11-02 14:51:54.951557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-11-02 14:51:54.951582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.120 qpair failed and we were unable to recover it. 00:36:03.120 [2024-11-02 14:51:54.951701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-11-02 14:51:54.951726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.120 qpair failed and we were unable to recover it. 00:36:03.120 [2024-11-02 14:51:54.951872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-11-02 14:51:54.951897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.120 qpair failed and we were unable to recover it. 00:36:03.120 [2024-11-02 14:51:54.952022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-11-02 14:51:54.952047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.120 qpair failed and we were unable to recover it. 00:36:03.120 [2024-11-02 14:51:54.952222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-11-02 14:51:54.952247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.120 qpair failed and we were unable to recover it. 00:36:03.120 [2024-11-02 14:51:54.952412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-11-02 14:51:54.952438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.120 qpair failed and we were unable to recover it. 00:36:03.120 [2024-11-02 14:51:54.952584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-11-02 14:51:54.952608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.120 qpair failed and we were unable to recover it. 
00:36:03.120 [2024-11-02 14:51:54.952734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-11-02 14:51:54.952760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.120 qpair failed and we were unable to recover it. 00:36:03.120 [2024-11-02 14:51:54.952874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-11-02 14:51:54.952899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.120 qpair failed and we were unable to recover it. 00:36:03.120 [2024-11-02 14:51:54.953050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-11-02 14:51:54.953076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.120 qpair failed and we were unable to recover it. 00:36:03.120 [2024-11-02 14:51:54.953216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-11-02 14:51:54.953252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.120 qpair failed and we were unable to recover it. 00:36:03.120 [2024-11-02 14:51:54.953384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-11-02 14:51:54.953410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.120 qpair failed and we were unable to recover it. 00:36:03.120 [2024-11-02 14:51:54.953562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-11-02 14:51:54.953587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.120 qpair failed and we were unable to recover it. 00:36:03.120 [2024-11-02 14:51:54.953744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-11-02 14:51:54.953770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.120 qpair failed and we were unable to recover it. 00:36:03.120 [2024-11-02 14:51:54.953947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-11-02 14:51:54.953972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.120 qpair failed and we were unable to recover it. 00:36:03.120 [2024-11-02 14:51:54.954123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-11-02 14:51:54.954148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.120 qpair failed and we were unable to recover it. 00:36:03.120 [2024-11-02 14:51:54.954288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-11-02 14:51:54.954314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.120 qpair failed and we were unable to recover it. 
00:36:03.121 [2024-11-02 14:51:54.954469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.121 [2024-11-02 14:51:54.954494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420
00:36:03.121 qpair failed and we were unable to recover it.
00:36:03.121 [2024-11-02 14:51:54.954642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.121 [2024-11-02 14:51:54.954668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420
00:36:03.121 qpair failed and we were unable to recover it.
[the same three-line sequence -- posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. -- repeats continuously through 2024-11-02 14:51:54.989]
00:36:03.128 [2024-11-02 14:51:54.989920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.128 [2024-11-02 14:51:54.989946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.128 qpair failed and we were unable to recover it. 00:36:03.128 [2024-11-02 14:51:54.990066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.128 [2024-11-02 14:51:54.990097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.128 qpair failed and we were unable to recover it. 00:36:03.128 [2024-11-02 14:51:54.990275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.128 [2024-11-02 14:51:54.990301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.128 qpair failed and we were unable to recover it. 00:36:03.129 [2024-11-02 14:51:54.990451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-11-02 14:51:54.990477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.129 qpair failed and we were unable to recover it. 00:36:03.129 [2024-11-02 14:51:54.990630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-11-02 14:51:54.990655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.129 qpair failed and we were unable to recover it. 00:36:03.129 [2024-11-02 14:51:54.990776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-11-02 14:51:54.990803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.129 qpair failed and we were unable to recover it. 00:36:03.129 [2024-11-02 14:51:54.990958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-11-02 14:51:54.990984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.129 qpair failed and we were unable to recover it. 00:36:03.129 [2024-11-02 14:51:54.991132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-11-02 14:51:54.991157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.129 qpair failed and we were unable to recover it. 00:36:03.129 [2024-11-02 14:51:54.991305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-11-02 14:51:54.991331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.129 qpair failed and we were unable to recover it. 00:36:03.129 [2024-11-02 14:51:54.991478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-11-02 14:51:54.991504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.129 qpair failed and we were unable to recover it. 
00:36:03.129 [2024-11-02 14:51:54.991652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-11-02 14:51:54.991677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.129 qpair failed and we were unable to recover it. 00:36:03.129 [2024-11-02 14:51:54.991849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-11-02 14:51:54.991875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.129 qpair failed and we were unable to recover it. 00:36:03.129 [2024-11-02 14:51:54.991996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-11-02 14:51:54.992022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.129 qpair failed and we were unable to recover it. 00:36:03.129 [2024-11-02 14:51:54.992170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-11-02 14:51:54.992196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.129 qpair failed and we were unable to recover it. 00:36:03.129 [2024-11-02 14:51:54.992345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-11-02 14:51:54.992371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.129 qpair failed and we were unable to recover it. 00:36:03.129 [2024-11-02 14:51:54.992528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-11-02 14:51:54.992553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.129 qpair failed and we were unable to recover it. 00:36:03.129 [2024-11-02 14:51:54.992700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-11-02 14:51:54.992726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.129 qpair failed and we were unable to recover it. 00:36:03.129 [2024-11-02 14:51:54.992884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-11-02 14:51:54.992910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.129 qpair failed and we were unable to recover it. 00:36:03.129 [2024-11-02 14:51:54.993037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-11-02 14:51:54.993062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.129 qpair failed and we were unable to recover it. 00:36:03.129 [2024-11-02 14:51:54.993207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-11-02 14:51:54.993232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.129 qpair failed and we were unable to recover it. 
00:36:03.129 [2024-11-02 14:51:54.993409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-11-02 14:51:54.993435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.129 qpair failed and we were unable to recover it. 00:36:03.129 [2024-11-02 14:51:54.993578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-11-02 14:51:54.993604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.129 qpair failed and we were unable to recover it. 00:36:03.129 [2024-11-02 14:51:54.993753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-11-02 14:51:54.993779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.129 qpair failed and we were unable to recover it. 00:36:03.129 [2024-11-02 14:51:54.993925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-11-02 14:51:54.993950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.129 qpair failed and we were unable to recover it. 00:36:03.129 [2024-11-02 14:51:54.994073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-11-02 14:51:54.994098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.129 qpair failed and we were unable to recover it. 00:36:03.129 [2024-11-02 14:51:54.994218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-11-02 14:51:54.994243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.129 qpair failed and we were unable to recover it. 00:36:03.129 [2024-11-02 14:51:54.994391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-11-02 14:51:54.994417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.129 qpair failed and we were unable to recover it. 00:36:03.129 [2024-11-02 14:51:54.994538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-11-02 14:51:54.994563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.129 qpair failed and we were unable to recover it. 00:36:03.129 [2024-11-02 14:51:54.994709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-11-02 14:51:54.994740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.129 qpair failed and we were unable to recover it. 00:36:03.129 [2024-11-02 14:51:54.994885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-11-02 14:51:54.994911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.129 qpair failed and we were unable to recover it. 
00:36:03.129 [2024-11-02 14:51:54.995031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-11-02 14:51:54.995057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.129 qpair failed and we were unable to recover it. 00:36:03.129 [2024-11-02 14:51:54.995167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-11-02 14:51:54.995192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.129 qpair failed and we were unable to recover it. 00:36:03.129 [2024-11-02 14:51:54.995366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-11-02 14:51:54.995392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.129 qpair failed and we were unable to recover it. 00:36:03.130 [2024-11-02 14:51:54.995518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-11-02 14:51:54.995543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.130 qpair failed and we were unable to recover it. 00:36:03.130 [2024-11-02 14:51:54.995689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-11-02 14:51:54.995715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.130 qpair failed and we were unable to recover it. 00:36:03.130 [2024-11-02 14:51:54.995881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-11-02 14:51:54.995907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.130 qpair failed and we were unable to recover it. 00:36:03.130 [2024-11-02 14:51:54.996054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-11-02 14:51:54.996079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.130 qpair failed and we were unable to recover it. 00:36:03.130 [2024-11-02 14:51:54.996192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-11-02 14:51:54.996217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.130 qpair failed and we were unable to recover it. 00:36:03.130 [2024-11-02 14:51:54.996374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-11-02 14:51:54.996400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.130 qpair failed and we were unable to recover it. 00:36:03.130 [2024-11-02 14:51:54.996542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-11-02 14:51:54.996568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.130 qpair failed and we were unable to recover it. 
00:36:03.130 [2024-11-02 14:51:54.996712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-11-02 14:51:54.996737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.130 qpair failed and we were unable to recover it. 00:36:03.130 [2024-11-02 14:51:54.996909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-11-02 14:51:54.996934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.130 qpair failed and we were unable to recover it. 00:36:03.130 [2024-11-02 14:51:54.997090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-11-02 14:51:54.997115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.130 qpair failed and we were unable to recover it. 00:36:03.130 [2024-11-02 14:51:54.997240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-11-02 14:51:54.997271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.130 qpair failed and we were unable to recover it. 00:36:03.130 [2024-11-02 14:51:54.997409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-11-02 14:51:54.997434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.130 qpair failed and we were unable to recover it. 00:36:03.130 [2024-11-02 14:51:54.997595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-11-02 14:51:54.997620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.130 qpair failed and we were unable to recover it. 00:36:03.130 [2024-11-02 14:51:54.997742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-11-02 14:51:54.997767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.130 qpair failed and we were unable to recover it. 00:36:03.130 [2024-11-02 14:51:54.997891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-11-02 14:51:54.997916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.130 qpair failed and we were unable to recover it. 00:36:03.130 [2024-11-02 14:51:54.998066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-11-02 14:51:54.998091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.130 qpair failed and we were unable to recover it. 00:36:03.130 [2024-11-02 14:51:54.998239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-11-02 14:51:54.998277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.130 qpair failed and we were unable to recover it. 
00:36:03.130 [2024-11-02 14:51:54.998431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-11-02 14:51:54.998456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.130 qpair failed and we were unable to recover it. 00:36:03.130 [2024-11-02 14:51:54.998594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-11-02 14:51:54.998620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.130 qpair failed and we were unable to recover it. 00:36:03.130 [2024-11-02 14:51:54.998730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-11-02 14:51:54.998755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.130 qpair failed and we were unable to recover it. 00:36:03.130 [2024-11-02 14:51:54.998906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-11-02 14:51:54.998932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.130 qpair failed and we were unable to recover it. 00:36:03.130 [2024-11-02 14:51:54.999084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-11-02 14:51:54.999115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.130 qpair failed and we were unable to recover it. 00:36:03.130 [2024-11-02 14:51:54.999239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-11-02 14:51:54.999272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.130 qpair failed and we were unable to recover it. 00:36:03.130 [2024-11-02 14:51:54.999422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-11-02 14:51:54.999448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.130 qpair failed and we were unable to recover it. 00:36:03.130 [2024-11-02 14:51:54.999573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-11-02 14:51:54.999598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.130 qpair failed and we were unable to recover it. 00:36:03.130 [2024-11-02 14:51:54.999749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-11-02 14:51:54.999774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.130 qpair failed and we were unable to recover it. 00:36:03.130 [2024-11-02 14:51:54.999919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-11-02 14:51:54.999944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.130 qpair failed and we were unable to recover it. 
00:36:03.130 [2024-11-02 14:51:55.000091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-11-02 14:51:55.000116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.130 qpair failed and we were unable to recover it. 00:36:03.130 [2024-11-02 14:51:55.000239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-11-02 14:51:55.000272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.130 qpair failed and we were unable to recover it. 00:36:03.130 [2024-11-02 14:51:55.000451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-11-02 14:51:55.000477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.130 qpair failed and we were unable to recover it. 00:36:03.130 [2024-11-02 14:51:55.000622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-11-02 14:51:55.000647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.130 qpair failed and we were unable to recover it. 00:36:03.130 [2024-11-02 14:51:55.000770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-11-02 14:51:55.000795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.130 qpair failed and we were unable to recover it. 00:36:03.130 [2024-11-02 14:51:55.000910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-11-02 14:51:55.000936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.130 qpair failed and we were unable to recover it. 00:36:03.130 [2024-11-02 14:51:55.001114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-11-02 14:51:55.001140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.130 qpair failed and we were unable to recover it. 00:36:03.131 [2024-11-02 14:51:55.001293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-11-02 14:51:55.001319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.131 qpair failed and we were unable to recover it. 00:36:03.131 [2024-11-02 14:51:55.001474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-11-02 14:51:55.001499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.131 qpair failed and we were unable to recover it. 00:36:03.131 [2024-11-02 14:51:55.001647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-11-02 14:51:55.001676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.131 qpair failed and we were unable to recover it. 
00:36:03.131 [2024-11-02 14:51:55.001793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-11-02 14:51:55.001818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.131 qpair failed and we were unable to recover it. 00:36:03.131 [2024-11-02 14:51:55.001945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-11-02 14:51:55.001970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.131 qpair failed and we were unable to recover it. 00:36:03.131 [2024-11-02 14:51:55.002114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-11-02 14:51:55.002140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.131 qpair failed and we were unable to recover it. 00:36:03.131 [2024-11-02 14:51:55.002280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-11-02 14:51:55.002306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.131 qpair failed and we were unable to recover it. 00:36:03.131 [2024-11-02 14:51:55.002458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-11-02 14:51:55.002484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.131 qpair failed and we were unable to recover it. 00:36:03.131 [2024-11-02 14:51:55.002602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-11-02 14:51:55.002627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.131 qpair failed and we were unable to recover it. 00:36:03.131 [2024-11-02 14:51:55.002742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-11-02 14:51:55.002767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.131 qpair failed and we were unable to recover it. 00:36:03.131 [2024-11-02 14:51:55.002909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-11-02 14:51:55.002935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.131 qpair failed and we were unable to recover it. 00:36:03.131 [2024-11-02 14:51:55.003055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-11-02 14:51:55.003080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.131 qpair failed and we were unable to recover it. 00:36:03.131 [2024-11-02 14:51:55.003207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-11-02 14:51:55.003232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.131 qpair failed and we were unable to recover it. 
00:36:03.131 [2024-11-02 14:51:55.003410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-11-02 14:51:55.003435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.131 qpair failed and we were unable to recover it. 00:36:03.131 [2024-11-02 14:51:55.003610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-11-02 14:51:55.003635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.131 qpair failed and we were unable to recover it. 00:36:03.131 [2024-11-02 14:51:55.003784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-11-02 14:51:55.003810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.131 qpair failed and we were unable to recover it. 00:36:03.131 [2024-11-02 14:51:55.003965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-11-02 14:51:55.003990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.131 qpair failed and we were unable to recover it. 00:36:03.131 [2024-11-02 14:51:55.004109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-11-02 14:51:55.004134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.131 qpair failed and we were unable to recover it. 00:36:03.131 [2024-11-02 14:51:55.004251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-11-02 14:51:55.004281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.131 qpair failed and we were unable to recover it. 00:36:03.131 [2024-11-02 14:51:55.004440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-11-02 14:51:55.004466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.131 qpair failed and we were unable to recover it. 00:36:03.131 [2024-11-02 14:51:55.004614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-11-02 14:51:55.004639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.131 qpair failed and we were unable to recover it. 00:36:03.131 [2024-11-02 14:51:55.004791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-11-02 14:51:55.004816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.131 qpair failed and we were unable to recover it. 00:36:03.131 [2024-11-02 14:51:55.004964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-11-02 14:51:55.004990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.131 qpair failed and we were unable to recover it. 
00:36:03.131 [2024-11-02 14:51:55.005146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-11-02 14:51:55.005171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.131 qpair failed and we were unable to recover it. 00:36:03.131 [2024-11-02 14:51:55.005322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-11-02 14:51:55.005348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.131 qpair failed and we were unable to recover it. 00:36:03.131 [2024-11-02 14:51:55.005462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-11-02 14:51:55.005488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.131 qpair failed and we were unable to recover it. 00:36:03.131 [2024-11-02 14:51:55.005612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-11-02 14:51:55.005637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.131 qpair failed and we were unable to recover it. 00:36:03.131 [2024-11-02 14:51:55.005811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-11-02 14:51:55.005836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.131 qpair failed and we were unable to recover it. 00:36:03.131 [2024-11-02 14:51:55.005981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-11-02 14:51:55.006007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.131 qpair failed and we were unable to recover it. 00:36:03.131 [2024-11-02 14:51:55.006158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-11-02 14:51:55.006188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.131 qpair failed and we were unable to recover it. 00:36:03.131 [2024-11-02 14:51:55.006308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-11-02 14:51:55.006334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.131 qpair failed and we were unable to recover it. 00:36:03.131 [2024-11-02 14:51:55.006498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-11-02 14:51:55.006523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.131 qpair failed and we were unable to recover it. 00:36:03.131 [2024-11-02 14:51:55.006689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-11-02 14:51:55.006714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.131 qpair failed and we were unable to recover it. 
00:36:03.131 [2024-11-02 14:51:55.006883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-11-02 14:51:55.006909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.131 qpair failed and we were unable to recover it. 00:36:03.131 [2024-11-02 14:51:55.007050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-11-02 14:51:55.007075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.131 qpair failed and we were unable to recover it. 00:36:03.131 [2024-11-02 14:51:55.007233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-11-02 14:51:55.007264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.132 qpair failed and we were unable to recover it. 00:36:03.132 [2024-11-02 14:51:55.007375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.132 [2024-11-02 14:51:55.007401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.132 qpair failed and we were unable to recover it. 00:36:03.132 [2024-11-02 14:51:55.007547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.132 [2024-11-02 14:51:55.007572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.132 qpair failed and we were unable to recover it. 00:36:03.132 [2024-11-02 14:51:55.007724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.132 [2024-11-02 14:51:55.007749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.132 qpair failed and we were unable to recover it. 00:36:03.132 [2024-11-02 14:51:55.007902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.132 [2024-11-02 14:51:55.007929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.132 qpair failed and we were unable to recover it. 00:36:03.132 [2024-11-02 14:51:55.008045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.132 [2024-11-02 14:51:55.008074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.132 qpair failed and we were unable to recover it. 00:36:03.132 [2024-11-02 14:51:55.008254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.132 [2024-11-02 14:51:55.008285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.132 qpair failed and we were unable to recover it. 00:36:03.132 [2024-11-02 14:51:55.008460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.132 [2024-11-02 14:51:55.008485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.132 qpair failed and we were unable to recover it. 
00:36:03.132 [2024-11-02 14:51:55.008615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.132 [2024-11-02 14:51:55.008641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.132 qpair failed and we were unable to recover it. 00:36:03.132 [2024-11-02 14:51:55.008785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.132 [2024-11-02 14:51:55.008810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.132 qpair failed and we were unable to recover it. 00:36:03.132 [2024-11-02 14:51:55.008954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.132 [2024-11-02 14:51:55.008979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.132 qpair failed and we were unable to recover it. 00:36:03.132 [2024-11-02 14:51:55.009133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.132 [2024-11-02 14:51:55.009158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.132 qpair failed and we were unable to recover it. 00:36:03.132 [2024-11-02 14:51:55.009304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.132 [2024-11-02 14:51:55.009330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.132 qpair failed and we were unable to recover it. 00:36:03.132 [2024-11-02 14:51:55.009444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.132 [2024-11-02 14:51:55.009470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.132 qpair failed and we were unable to recover it. 00:36:03.132 [2024-11-02 14:51:55.009621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.132 [2024-11-02 14:51:55.009646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.132 qpair failed and we were unable to recover it. 00:36:03.132 [2024-11-02 14:51:55.009817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.132 [2024-11-02 14:51:55.009842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.132 qpair failed and we were unable to recover it. 00:36:03.132 [2024-11-02 14:51:55.009990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.132 [2024-11-02 14:51:55.010015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.132 qpair failed and we were unable to recover it. 00:36:03.132 [2024-11-02 14:51:55.010143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.132 [2024-11-02 14:51:55.010168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.132 qpair failed and we were unable to recover it. 
00:36:03.132 [2024-11-02 14:51:55.010322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.132 [2024-11-02 14:51:55.010348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.132 qpair failed and we were unable to recover it. 00:36:03.132 [2024-11-02 14:51:55.010499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.132 [2024-11-02 14:51:55.010524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.132 qpair failed and we were unable to recover it. 00:36:03.132 [2024-11-02 14:51:55.010669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.132 [2024-11-02 14:51:55.010694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.132 qpair failed and we were unable to recover it. 00:36:03.132 [2024-11-02 14:51:55.010841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.132 [2024-11-02 14:51:55.010866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.132 qpair failed and we were unable to recover it. 00:36:03.132 [2024-11-02 14:51:55.010999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.132 [2024-11-02 14:51:55.011025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.132 qpair failed and we were unable to recover it. 00:36:03.132 [2024-11-02 14:51:55.011178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.132 [2024-11-02 14:51:55.011203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.132 qpair failed and we were unable to recover it. 00:36:03.132 [2024-11-02 14:51:55.011356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.132 [2024-11-02 14:51:55.011382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.132 qpair failed and we were unable to recover it. 00:36:03.132 [2024-11-02 14:51:55.011527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.132 [2024-11-02 14:51:55.011552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.132 qpair failed and we were unable to recover it. 00:36:03.132 [2024-11-02 14:51:55.011703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.132 [2024-11-02 14:51:55.011729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.132 qpair failed and we were unable to recover it. 00:36:03.132 [2024-11-02 14:51:55.011881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.132 [2024-11-02 14:51:55.011906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.132 qpair failed and we were unable to recover it. 
00:36:03.132 [2024-11-02 14:51:55.012025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.132 [2024-11-02 14:51:55.012051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.132 qpair failed and we were unable to recover it. 00:36:03.132 [2024-11-02 14:51:55.012168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.132 [2024-11-02 14:51:55.012193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.132 qpair failed and we were unable to recover it. 00:36:03.132 [2024-11-02 14:51:55.012344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.132 [2024-11-02 14:51:55.012370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.132 qpair failed and we were unable to recover it. 00:36:03.132 [2024-11-02 14:51:55.012521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.132 [2024-11-02 14:51:55.012546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.132 qpair failed and we were unable to recover it. 00:36:03.132 [2024-11-02 14:51:55.012694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.132 [2024-11-02 14:51:55.012720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.132 qpair failed and we were unable to recover it. 00:36:03.132 [2024-11-02 14:51:55.012891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.132 [2024-11-02 14:51:55.012917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.132 qpair failed and we were unable to recover it. 00:36:03.132 [2024-11-02 14:51:55.013061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.132 [2024-11-02 14:51:55.013087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.132 qpair failed and we were unable to recover it. 00:36:03.132 [2024-11-02 14:51:55.013201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.132 [2024-11-02 14:51:55.013230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.132 qpair failed and we were unable to recover it. 00:36:03.132 [2024-11-02 14:51:55.013417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.132 [2024-11-02 14:51:55.013444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.132 qpair failed and we were unable to recover it. 00:36:03.132 [2024-11-02 14:51:55.013593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.132 [2024-11-02 14:51:55.013619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.132 qpair failed and we were unable to recover it. 
[... the same three-line sequence (posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for every remaining reconnect attempt, console time 00:36:03.132 through 00:36:03.137 ...]
00:36:03.137 [2024-11-02 14:51:55.047285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.137 [2024-11-02 14:51:55.047312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420
00:36:03.137 qpair failed and we were unable to recover it.
00:36:03.137 [2024-11-02 14:51:55.047488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.137 [2024-11-02 14:51:55.047514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.137 qpair failed and we were unable to recover it. 00:36:03.137 [2024-11-02 14:51:55.047635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.137 [2024-11-02 14:51:55.047665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.137 qpair failed and we were unable to recover it. 00:36:03.137 [2024-11-02 14:51:55.047815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.137 [2024-11-02 14:51:55.047842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.137 qpair failed and we were unable to recover it. 00:36:03.137 [2024-11-02 14:51:55.047985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.137 [2024-11-02 14:51:55.048011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.137 qpair failed and we were unable to recover it. 00:36:03.137 [2024-11-02 14:51:55.048138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.137 [2024-11-02 14:51:55.048163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.137 qpair failed and we were unable to recover it. 00:36:03.137 [2024-11-02 14:51:55.048288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.137 [2024-11-02 14:51:55.048315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.137 qpair failed and we were unable to recover it. 00:36:03.137 [2024-11-02 14:51:55.048443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.137 [2024-11-02 14:51:55.048469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.137 qpair failed and we were unable to recover it. 00:36:03.137 [2024-11-02 14:51:55.048623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.137 [2024-11-02 14:51:55.048649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.137 qpair failed and we were unable to recover it. 00:36:03.137 [2024-11-02 14:51:55.048796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.137 [2024-11-02 14:51:55.048822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.137 qpair failed and we were unable to recover it. 00:36:03.137 [2024-11-02 14:51:55.048976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.137 [2024-11-02 14:51:55.049001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.137 qpair failed and we were unable to recover it. 
00:36:03.137 [2024-11-02 14:51:55.049178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.137 [2024-11-02 14:51:55.049203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.137 qpair failed and we were unable to recover it. 00:36:03.137 [2024-11-02 14:51:55.049327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.137 [2024-11-02 14:51:55.049353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.137 qpair failed and we were unable to recover it. 00:36:03.137 [2024-11-02 14:51:55.049494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.137 [2024-11-02 14:51:55.049520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.137 qpair failed and we were unable to recover it. 00:36:03.137 [2024-11-02 14:51:55.049664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.137 [2024-11-02 14:51:55.049689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.137 qpair failed and we were unable to recover it. 00:36:03.137 [2024-11-02 14:51:55.049845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.137 [2024-11-02 14:51:55.049870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.137 qpair failed and we were unable to recover it. 00:36:03.137 [2024-11-02 14:51:55.050054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.137 [2024-11-02 14:51:55.050080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.137 qpair failed and we were unable to recover it. 00:36:03.137 [2024-11-02 14:51:55.050204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.137 [2024-11-02 14:51:55.050230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.137 qpair failed and we were unable to recover it. 00:36:03.137 [2024-11-02 14:51:55.050370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.137 [2024-11-02 14:51:55.050396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.137 qpair failed and we were unable to recover it. 00:36:03.137 [2024-11-02 14:51:55.050545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.137 [2024-11-02 14:51:55.050571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.137 qpair failed and we were unable to recover it. 00:36:03.137 [2024-11-02 14:51:55.050717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.137 [2024-11-02 14:51:55.050744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.137 qpair failed and we were unable to recover it. 
00:36:03.137 [2024-11-02 14:51:55.050867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.137 [2024-11-02 14:51:55.050893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.137 qpair failed and we were unable to recover it. 00:36:03.137 [2024-11-02 14:51:55.051037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.137 [2024-11-02 14:51:55.051065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.137 qpair failed and we were unable to recover it. 00:36:03.137 [2024-11-02 14:51:55.051190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.137 [2024-11-02 14:51:55.051215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.137 qpair failed and we were unable to recover it. 00:36:03.137 [2024-11-02 14:51:55.051345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.137 [2024-11-02 14:51:55.051371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.137 qpair failed and we were unable to recover it. 00:36:03.137 [2024-11-02 14:51:55.051524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.137 [2024-11-02 14:51:55.051550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.137 qpair failed and we were unable to recover it. 00:36:03.137 [2024-11-02 14:51:55.051670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.137 [2024-11-02 14:51:55.051696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.137 qpair failed and we were unable to recover it. 00:36:03.137 [2024-11-02 14:51:55.051852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.137 [2024-11-02 14:51:55.051878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.137 qpair failed and we were unable to recover it. 00:36:03.137 [2024-11-02 14:51:55.052021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.137 [2024-11-02 14:51:55.052047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.137 qpair failed and we were unable to recover it. 00:36:03.137 [2024-11-02 14:51:55.052200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.137 [2024-11-02 14:51:55.052226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.137 qpair failed and we were unable to recover it. 00:36:03.137 [2024-11-02 14:51:55.052407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.137 [2024-11-02 14:51:55.052433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.137 qpair failed and we were unable to recover it. 
00:36:03.137 [2024-11-02 14:51:55.052563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.137 [2024-11-02 14:51:55.052590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.137 qpair failed and we were unable to recover it. 00:36:03.137 [2024-11-02 14:51:55.052726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.137 [2024-11-02 14:51:55.052752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.137 qpair failed and we were unable to recover it. 00:36:03.137 [2024-11-02 14:51:55.052905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.137 [2024-11-02 14:51:55.052932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.137 qpair failed and we were unable to recover it. 00:36:03.137 [2024-11-02 14:51:55.053055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.137 [2024-11-02 14:51:55.053082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.137 qpair failed and we were unable to recover it. 00:36:03.137 [2024-11-02 14:51:55.053233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.138 [2024-11-02 14:51:55.053275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.138 qpair failed and we were unable to recover it. 00:36:03.138 [2024-11-02 14:51:55.053402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.138 [2024-11-02 14:51:55.053429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.138 qpair failed and we were unable to recover it. 00:36:03.138 [2024-11-02 14:51:55.053559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.138 [2024-11-02 14:51:55.053585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.138 qpair failed and we were unable to recover it. 00:36:03.138 [2024-11-02 14:51:55.053733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.138 [2024-11-02 14:51:55.053759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.138 qpair failed and we were unable to recover it. 00:36:03.138 [2024-11-02 14:51:55.053913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.138 [2024-11-02 14:51:55.053939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.138 qpair failed and we were unable to recover it. 00:36:03.138 [2024-11-02 14:51:55.054059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.138 [2024-11-02 14:51:55.054084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.138 qpair failed and we were unable to recover it. 
00:36:03.138 [2024-11-02 14:51:55.054237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.138 [2024-11-02 14:51:55.054273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.138 qpair failed and we were unable to recover it. 00:36:03.138 [2024-11-02 14:51:55.054394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.138 [2024-11-02 14:51:55.054421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.138 qpair failed and we were unable to recover it. 00:36:03.138 [2024-11-02 14:51:55.054571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.138 [2024-11-02 14:51:55.054601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.138 qpair failed and we were unable to recover it. 00:36:03.138 [2024-11-02 14:51:55.054748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.138 [2024-11-02 14:51:55.054774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.138 qpair failed and we were unable to recover it. 00:36:03.138 [2024-11-02 14:51:55.054948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.138 [2024-11-02 14:51:55.054974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.138 qpair failed and we were unable to recover it. 00:36:03.138 [2024-11-02 14:51:55.055126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.138 [2024-11-02 14:51:55.055151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.138 qpair failed and we were unable to recover it. 00:36:03.138 [2024-11-02 14:51:55.055295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.138 [2024-11-02 14:51:55.055323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.138 qpair failed and we were unable to recover it. 00:36:03.138 [2024-11-02 14:51:55.055473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.138 [2024-11-02 14:51:55.055499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.138 qpair failed and we were unable to recover it. 00:36:03.138 [2024-11-02 14:51:55.055643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.138 [2024-11-02 14:51:55.055669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.138 qpair failed and we were unable to recover it. 00:36:03.138 [2024-11-02 14:51:55.055800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.138 [2024-11-02 14:51:55.055826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.138 qpair failed and we were unable to recover it. 
00:36:03.138 [2024-11-02 14:51:55.056002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.138 [2024-11-02 14:51:55.056029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.138 qpair failed and we were unable to recover it. 00:36:03.138 [2024-11-02 14:51:55.056172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.138 [2024-11-02 14:51:55.056196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.138 qpair failed and we were unable to recover it. 00:36:03.138 [2024-11-02 14:51:55.056333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.138 [2024-11-02 14:51:55.056360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.138 qpair failed and we were unable to recover it. 00:36:03.138 [2024-11-02 14:51:55.056512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.138 [2024-11-02 14:51:55.056538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.138 qpair failed and we were unable to recover it. 00:36:03.138 [2024-11-02 14:51:55.056718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.138 [2024-11-02 14:51:55.056743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.138 qpair failed and we were unable to recover it. 00:36:03.138 [2024-11-02 14:51:55.056898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.138 [2024-11-02 14:51:55.056924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.138 qpair failed and we were unable to recover it. 00:36:03.138 [2024-11-02 14:51:55.057102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.138 [2024-11-02 14:51:55.057128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.138 qpair failed and we were unable to recover it. 00:36:03.138 [2024-11-02 14:51:55.057271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.138 [2024-11-02 14:51:55.057298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.138 qpair failed and we were unable to recover it. 00:36:03.138 [2024-11-02 14:51:55.057422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.138 [2024-11-02 14:51:55.057447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.138 qpair failed and we were unable to recover it. 00:36:03.138 [2024-11-02 14:51:55.057594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.138 [2024-11-02 14:51:55.057620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.138 qpair failed and we were unable to recover it. 
00:36:03.138 [2024-11-02 14:51:55.057794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.138 [2024-11-02 14:51:55.057820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.138 qpair failed and we were unable to recover it. 00:36:03.138 [2024-11-02 14:51:55.057958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.138 [2024-11-02 14:51:55.057983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.138 qpair failed and we were unable to recover it. 00:36:03.138 [2024-11-02 14:51:55.058137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.138 [2024-11-02 14:51:55.058163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.138 qpair failed and we were unable to recover it. 00:36:03.138 [2024-11-02 14:51:55.058281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.138 [2024-11-02 14:51:55.058308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.138 qpair failed and we were unable to recover it. 00:36:03.138 [2024-11-02 14:51:55.058454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.138 [2024-11-02 14:51:55.058480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.138 qpair failed and we were unable to recover it. 00:36:03.138 [2024-11-02 14:51:55.058627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.138 [2024-11-02 14:51:55.058653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.138 qpair failed and we were unable to recover it. 00:36:03.138 [2024-11-02 14:51:55.058785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.138 [2024-11-02 14:51:55.058811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.138 qpair failed and we were unable to recover it. 00:36:03.138 [2024-11-02 14:51:55.058929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.138 [2024-11-02 14:51:55.058954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.138 qpair failed and we were unable to recover it. 00:36:03.138 [2024-11-02 14:51:55.059109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.138 [2024-11-02 14:51:55.059134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.138 qpair failed and we were unable to recover it. 00:36:03.138 [2024-11-02 14:51:55.059269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.138 [2024-11-02 14:51:55.059301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.138 qpair failed and we were unable to recover it. 
00:36:03.138 [2024-11-02 14:51:55.059455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.138 [2024-11-02 14:51:55.059481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.138 qpair failed and we were unable to recover it. 00:36:03.138 [2024-11-02 14:51:55.059655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.138 [2024-11-02 14:51:55.059680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.138 qpair failed and we were unable to recover it. 00:36:03.138 [2024-11-02 14:51:55.059793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.138 [2024-11-02 14:51:55.059820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.138 qpair failed and we were unable to recover it. 00:36:03.138 [2024-11-02 14:51:55.059936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.138 [2024-11-02 14:51:55.059962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.138 qpair failed and we were unable to recover it. 00:36:03.138 [2024-11-02 14:51:55.060088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.138 [2024-11-02 14:51:55.060115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.138 qpair failed and we were unable to recover it. 00:36:03.138 [2024-11-02 14:51:55.060276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.138 [2024-11-02 14:51:55.060303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.138 qpair failed and we were unable to recover it. 00:36:03.138 [2024-11-02 14:51:55.060451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.139 [2024-11-02 14:51:55.060478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.139 qpair failed and we were unable to recover it. 00:36:03.139 [2024-11-02 14:51:55.060624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.139 [2024-11-02 14:51:55.060651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.139 qpair failed and we were unable to recover it. 00:36:03.139 [2024-11-02 14:51:55.060828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.139 [2024-11-02 14:51:55.060853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.139 qpair failed and we were unable to recover it. 00:36:03.139 [2024-11-02 14:51:55.060994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.139 [2024-11-02 14:51:55.061020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.139 qpair failed and we were unable to recover it. 
00:36:03.139 [2024-11-02 14:51:55.061169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.139 [2024-11-02 14:51:55.061196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.139 qpair failed and we were unable to recover it. 00:36:03.139 [2024-11-02 14:51:55.061346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.139 [2024-11-02 14:51:55.061372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.139 qpair failed and we were unable to recover it. 00:36:03.139 [2024-11-02 14:51:55.061528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.139 [2024-11-02 14:51:55.061554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.139 qpair failed and we were unable to recover it. 00:36:03.139 [2024-11-02 14:51:55.061730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.139 [2024-11-02 14:51:55.061756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.139 qpair failed and we were unable to recover it. 00:36:03.139 [2024-11-02 14:51:55.061912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.139 [2024-11-02 14:51:55.061938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.139 qpair failed and we were unable to recover it. 00:36:03.139 [2024-11-02 14:51:55.062057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.139 [2024-11-02 14:51:55.062082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.139 qpair failed and we were unable to recover it. 00:36:03.139 [2024-11-02 14:51:55.062233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.139 [2024-11-02 14:51:55.062266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.139 qpair failed and we were unable to recover it. 00:36:03.139 [2024-11-02 14:51:55.062385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.139 [2024-11-02 14:51:55.062410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.139 qpair failed and we were unable to recover it. 00:36:03.139 [2024-11-02 14:51:55.062583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.139 [2024-11-02 14:51:55.062609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.139 qpair failed and we were unable to recover it. 00:36:03.139 [2024-11-02 14:51:55.062753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.139 [2024-11-02 14:51:55.062779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.139 qpair failed and we were unable to recover it. 
00:36:03.139 [2024-11-02 14:51:55.062894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.139 [2024-11-02 14:51:55.062920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.139 qpair failed and we were unable to recover it. 00:36:03.139 [2024-11-02 14:51:55.063042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.139 [2024-11-02 14:51:55.063068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.139 qpair failed and we were unable to recover it. 00:36:03.139 [2024-11-02 14:51:55.063245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.139 [2024-11-02 14:51:55.063280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.139 qpair failed and we were unable to recover it. 00:36:03.139 [2024-11-02 14:51:55.063412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.139 [2024-11-02 14:51:55.063438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.139 qpair failed and we were unable to recover it. 00:36:03.139 [2024-11-02 14:51:55.063591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.139 [2024-11-02 14:51:55.063617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.139 qpair failed and we were unable to recover it. 00:36:03.139 [2024-11-02 14:51:55.063790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.139 [2024-11-02 14:51:55.063815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.139 qpair failed and we were unable to recover it. 00:36:03.139 [2024-11-02 14:51:55.063976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.139 [2024-11-02 14:51:55.064003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.139 qpair failed and we were unable to recover it. 00:36:03.139 [2024-11-02 14:51:55.064160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.139 [2024-11-02 14:51:55.064186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.139 qpair failed and we were unable to recover it. 00:36:03.139 [2024-11-02 14:51:55.064353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.139 [2024-11-02 14:51:55.064380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.139 qpair failed and we were unable to recover it. 00:36:03.139 [2024-11-02 14:51:55.064532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.139 [2024-11-02 14:51:55.064558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.139 qpair failed and we were unable to recover it. 
00:36:03.139 [2024-11-02 14:51:55.064707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.139 [2024-11-02 14:51:55.064733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.139 qpair failed and we were unable to recover it. 00:36:03.139 [2024-11-02 14:51:55.064884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.139 [2024-11-02 14:51:55.064909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.139 qpair failed and we were unable to recover it. 00:36:03.139 [2024-11-02 14:51:55.065039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.139 [2024-11-02 14:51:55.065066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.139 qpair failed and we were unable to recover it. 00:36:03.139 [2024-11-02 14:51:55.065210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.139 [2024-11-02 14:51:55.065237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.139 qpair failed and we were unable to recover it. 00:36:03.139 [2024-11-02 14:51:55.065409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.139 [2024-11-02 14:51:55.065435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.139 qpair failed and we were unable to recover it. 00:36:03.139 [2024-11-02 14:51:55.065584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.139 [2024-11-02 14:51:55.065609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.139 qpair failed and we were unable to recover it. 00:36:03.139 [2024-11-02 14:51:55.065760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.139 [2024-11-02 14:51:55.065787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.139 qpair failed and we were unable to recover it. 00:36:03.139 [2024-11-02 14:51:55.065936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.139 [2024-11-02 14:51:55.065961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.139 qpair failed and we were unable to recover it. 00:36:03.139 [2024-11-02 14:51:55.066107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.140 [2024-11-02 14:51:55.066132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.140 qpair failed and we were unable to recover it. 00:36:03.140 [2024-11-02 14:51:55.066265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.140 [2024-11-02 14:51:55.066292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.140 qpair failed and we were unable to recover it. 
00:36:03.140 [2024-11-02 14:51:55.066448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.140 [2024-11-02 14:51:55.066477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.140 qpair failed and we were unable to recover it. 00:36:03.140 [2024-11-02 14:51:55.066625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.140 [2024-11-02 14:51:55.066651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.140 qpair failed and we were unable to recover it. 00:36:03.140 [2024-11-02 14:51:55.066813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.140 [2024-11-02 14:51:55.066839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.140 qpair failed and we were unable to recover it. 00:36:03.140 [2024-11-02 14:51:55.067012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.140 [2024-11-02 14:51:55.067038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.140 qpair failed and we were unable to recover it. 00:36:03.140 [2024-11-02 14:51:55.067190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.140 [2024-11-02 14:51:55.067216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.140 qpair failed and we were unable to recover it. 00:36:03.140 [2024-11-02 14:51:55.067374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.140 [2024-11-02 14:51:55.067401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.140 qpair failed and we were unable to recover it. 00:36:03.140 [2024-11-02 14:51:55.067528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.140 [2024-11-02 14:51:55.067554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.140 qpair failed and we were unable to recover it. 00:36:03.140 [2024-11-02 14:51:55.067731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.140 [2024-11-02 14:51:55.067757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.140 qpair failed and we were unable to recover it. 00:36:03.140 [2024-11-02 14:51:55.067887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.140 [2024-11-02 14:51:55.067914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.140 qpair failed and we were unable to recover it. 00:36:03.140 [2024-11-02 14:51:55.068087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.140 [2024-11-02 14:51:55.068113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.140 qpair failed and we were unable to recover it. 
00:36:03.140 [2024-11-02 14:51:55.068237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.140 [2024-11-02 14:51:55.068279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.140 qpair failed and we were unable to recover it. 00:36:03.140 [2024-11-02 14:51:55.068455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.140 [2024-11-02 14:51:55.068482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.140 qpair failed and we were unable to recover it. 00:36:03.140 [2024-11-02 14:51:55.068634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.140 [2024-11-02 14:51:55.068660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.140 qpair failed and we were unable to recover it. 00:36:03.140 [2024-11-02 14:51:55.068781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.140 [2024-11-02 14:51:55.068806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.140 qpair failed and we were unable to recover it. 00:36:03.140 [2024-11-02 14:51:55.068932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.140 [2024-11-02 14:51:55.068957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.140 qpair failed and we were unable to recover it. 00:36:03.140 [2024-11-02 14:51:55.069069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.140 [2024-11-02 14:51:55.069094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.140 qpair failed and we were unable to recover it. 00:36:03.140 [2024-11-02 14:51:55.069245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.140 [2024-11-02 14:51:55.069285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.140 qpair failed and we were unable to recover it. 00:36:03.140 [2024-11-02 14:51:55.069435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.140 [2024-11-02 14:51:55.069462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.140 qpair failed and we were unable to recover it. 00:36:03.140 [2024-11-02 14:51:55.069609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.140 [2024-11-02 14:51:55.069635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.140 qpair failed and we were unable to recover it. 00:36:03.140 [2024-11-02 14:51:55.069789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.140 [2024-11-02 14:51:55.069815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.140 qpair failed and we were unable to recover it. 
00:36:03.140 [2024-11-02 14:51:55.069937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.140 [2024-11-02 14:51:55.069964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.140 qpair failed and we were unable to recover it. 00:36:03.140 [2024-11-02 14:51:55.070117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.140 [2024-11-02 14:51:55.070142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.140 qpair failed and we were unable to recover it. 00:36:03.140 [2024-11-02 14:51:55.070315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.140 [2024-11-02 14:51:55.070342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.140 qpair failed and we were unable to recover it. 00:36:03.140 [2024-11-02 14:51:55.070512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.140 [2024-11-02 14:51:55.070538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.140 qpair failed and we were unable to recover it. 00:36:03.140 [2024-11-02 14:51:55.070691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.140 [2024-11-02 14:51:55.070717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.140 qpair failed and we were unable to recover it. 00:36:03.140 [2024-11-02 14:51:55.070866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.140 [2024-11-02 14:51:55.070892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.140 qpair failed and we were unable to recover it. 00:36:03.140 [2024-11-02 14:51:55.071047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.140 [2024-11-02 14:51:55.071074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.140 qpair failed and we were unable to recover it. 00:36:03.140 [2024-11-02 14:51:55.071225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.140 [2024-11-02 14:51:55.071263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.140 qpair failed and we were unable to recover it. 00:36:03.140 [2024-11-02 14:51:55.071420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.140 [2024-11-02 14:51:55.071447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.140 qpair failed and we were unable to recover it. 00:36:03.140 [2024-11-02 14:51:55.071566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.140 [2024-11-02 14:51:55.071591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.140 qpair failed and we were unable to recover it. 
00:36:03.140 [2024-11-02 14:51:55.071736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.140 [2024-11-02 14:51:55.071762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.140 qpair failed and we were unable to recover it. 00:36:03.140 [2024-11-02 14:51:55.071908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.140 [2024-11-02 14:51:55.071935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.140 qpair failed and we were unable to recover it. 00:36:03.140 [2024-11-02 14:51:55.072083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.140 [2024-11-02 14:51:55.072111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.140 qpair failed and we were unable to recover it. 00:36:03.140 [2024-11-02 14:51:55.072277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.140 [2024-11-02 14:51:55.072304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.140 qpair failed and we were unable to recover it. 00:36:03.140 [2024-11-02 14:51:55.072454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.140 [2024-11-02 14:51:55.072481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.140 qpair failed and we were unable to recover it. 00:36:03.140 [2024-11-02 14:51:55.072639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.140 [2024-11-02 14:51:55.072665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.140 qpair failed and we were unable to recover it. 00:36:03.140 [2024-11-02 14:51:55.072786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.140 [2024-11-02 14:51:55.072818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.140 qpair failed and we were unable to recover it. 00:36:03.141 [2024-11-02 14:51:55.072971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.141 [2024-11-02 14:51:55.072996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.141 qpair failed and we were unable to recover it. 00:36:03.141 [2024-11-02 14:51:55.073149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.141 [2024-11-02 14:51:55.073176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.141 qpair failed and we were unable to recover it. 00:36:03.141 [2024-11-02 14:51:55.073326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.141 [2024-11-02 14:51:55.073354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.141 qpair failed and we were unable to recover it. 
00:36:03.141 [2024-11-02 14:51:55.073501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.141 [2024-11-02 14:51:55.073528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.141 qpair failed and we were unable to recover it. 00:36:03.141 [2024-11-02 14:51:55.073678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.141 [2024-11-02 14:51:55.073704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.141 qpair failed and we were unable to recover it. 00:36:03.141 [2024-11-02 14:51:55.073830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.141 [2024-11-02 14:51:55.073857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.141 qpair failed and we were unable to recover it. 00:36:03.141 [2024-11-02 14:51:55.073987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.141 [2024-11-02 14:51:55.074013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.141 qpair failed and we were unable to recover it. 00:36:03.141 [2024-11-02 14:51:55.074156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.141 [2024-11-02 14:51:55.074182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.141 qpair failed and we were unable to recover it. 00:36:03.141 [2024-11-02 14:51:55.074333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.141 [2024-11-02 14:51:55.074360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.141 qpair failed and we were unable to recover it. 00:36:03.141 [2024-11-02 14:51:55.074509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.141 [2024-11-02 14:51:55.074535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.141 qpair failed and we were unable to recover it. 00:36:03.141 [2024-11-02 14:51:55.074684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.141 [2024-11-02 14:51:55.074710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.141 qpair failed and we were unable to recover it. 00:36:03.141 [2024-11-02 14:51:55.074854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.141 [2024-11-02 14:51:55.074879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.141 qpair failed and we were unable to recover it. 00:36:03.141 [2024-11-02 14:51:55.075022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.141 [2024-11-02 14:51:55.075048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.141 qpair failed and we were unable to recover it. 
00:36:03.141 [2024-11-02 14:51:55.075195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.141 [2024-11-02 14:51:55.075222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.141 qpair failed and we were unable to recover it. 00:36:03.141 [2024-11-02 14:51:55.075366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.141 [2024-11-02 14:51:55.075392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.141 qpair failed and we were unable to recover it. 00:36:03.141 [2024-11-02 14:51:55.075514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.141 [2024-11-02 14:51:55.075540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.141 qpair failed and we were unable to recover it. 00:36:03.141 [2024-11-02 14:51:55.075687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.141 [2024-11-02 14:51:55.075712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.141 qpair failed and we were unable to recover it. 00:36:03.141 [2024-11-02 14:51:55.075836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.141 [2024-11-02 14:51:55.075862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.141 qpair failed and we were unable to recover it. 00:36:03.141 [2024-11-02 14:51:55.076055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.141 [2024-11-02 14:51:55.076081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.141 qpair failed and we were unable to recover it. 00:36:03.141 [2024-11-02 14:51:55.076228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.141 [2024-11-02 14:51:55.076254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.141 qpair failed and we were unable to recover it. 00:36:03.141 [2024-11-02 14:51:55.076434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.141 [2024-11-02 14:51:55.076460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.141 qpair failed and we were unable to recover it. 00:36:03.141 [2024-11-02 14:51:55.076609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.141 [2024-11-02 14:51:55.076634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.141 qpair failed and we were unable to recover it. 00:36:03.141 [2024-11-02 14:51:55.076793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.141 [2024-11-02 14:51:55.076819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.141 qpair failed and we were unable to recover it. 
00:36:03.141 [2024-11-02 14:51:55.076992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.141 [2024-11-02 14:51:55.077018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.141 qpair failed and we were unable to recover it. 00:36:03.141 [2024-11-02 14:51:55.077167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.141 [2024-11-02 14:51:55.077195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.141 qpair failed and we were unable to recover it. 00:36:03.141 [2024-11-02 14:51:55.077317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.141 [2024-11-02 14:51:55.077344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.141 qpair failed and we were unable to recover it. 00:36:03.141 [2024-11-02 14:51:55.077470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.141 [2024-11-02 14:51:55.077495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.141 qpair failed and we were unable to recover it. 00:36:03.141 [2024-11-02 14:51:55.077652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.141 [2024-11-02 14:51:55.077678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.141 qpair failed and we were unable to recover it. 00:36:03.141 [2024-11-02 14:51:55.077824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.141 [2024-11-02 14:51:55.077850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.141 qpair failed and we were unable to recover it. 00:36:03.141 [2024-11-02 14:51:55.077999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.141 [2024-11-02 14:51:55.078024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.141 qpair failed and we were unable to recover it. 00:36:03.141 [2024-11-02 14:51:55.078158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.141 [2024-11-02 14:51:55.078184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.141 qpair failed and we were unable to recover it. 00:36:03.141 [2024-11-02 14:51:55.078311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.141 [2024-11-02 14:51:55.078342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.141 qpair failed and we were unable to recover it. 00:36:03.141 [2024-11-02 14:51:55.078468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.141 [2024-11-02 14:51:55.078494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.141 qpair failed and we were unable to recover it. 
00:36:03.141 [2024-11-02 14:51:55.078617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.141 [2024-11-02 14:51:55.078642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.141 qpair failed and we were unable to recover it. 00:36:03.141 [2024-11-02 14:51:55.078776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.141 [2024-11-02 14:51:55.078803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.141 qpair failed and we were unable to recover it. 00:36:03.141 [2024-11-02 14:51:55.078928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.141 [2024-11-02 14:51:55.078953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.141 qpair failed and we were unable to recover it. 00:36:03.141 [2024-11-02 14:51:55.079109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.141 [2024-11-02 14:51:55.079134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.141 qpair failed and we were unable to recover it. 00:36:03.141 [2024-11-02 14:51:55.079269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.141 [2024-11-02 14:51:55.079296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.141 qpair failed and we were unable to recover it. 00:36:03.141 [2024-11-02 14:51:55.079445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.142 [2024-11-02 14:51:55.079471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.142 qpair failed and we were unable to recover it. 00:36:03.142 [2024-11-02 14:51:55.079596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.142 [2024-11-02 14:51:55.079623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.142 qpair failed and we were unable to recover it. 00:36:03.142 [2024-11-02 14:51:55.079769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.142 [2024-11-02 14:51:55.079794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.142 qpair failed and we were unable to recover it. 00:36:03.142 [2024-11-02 14:51:55.079966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.142 [2024-11-02 14:51:55.079992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.142 qpair failed and we were unable to recover it. 00:36:03.142 [2024-11-02 14:51:55.080140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.142 [2024-11-02 14:51:55.080167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.142 qpair failed and we were unable to recover it. 
00:36:03.142 [2024-11-02 14:51:55.080319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.142 [2024-11-02 14:51:55.080345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.142 qpair failed and we were unable to recover it. 00:36:03.142 [2024-11-02 14:51:55.080491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.142 [2024-11-02 14:51:55.080518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.142 qpair failed and we were unable to recover it. 00:36:03.142 [2024-11-02 14:51:55.080665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.142 [2024-11-02 14:51:55.080691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.142 qpair failed and we were unable to recover it. 00:36:03.142 [2024-11-02 14:51:55.080842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.142 [2024-11-02 14:51:55.080867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.142 qpair failed and we were unable to recover it. 00:36:03.142 [2024-11-02 14:51:55.081017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.142 [2024-11-02 14:51:55.081043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.142 qpair failed and we were unable to recover it. 00:36:03.142 [2024-11-02 14:51:55.081189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.142 [2024-11-02 14:51:55.081215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.142 qpair failed and we were unable to recover it. 00:36:03.142 [2024-11-02 14:51:55.081360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.142 [2024-11-02 14:51:55.081387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.142 qpair failed and we were unable to recover it. 00:36:03.142 [2024-11-02 14:51:55.081518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.142 [2024-11-02 14:51:55.081545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.142 qpair failed and we were unable to recover it. 00:36:03.142 [2024-11-02 14:51:55.081694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.142 [2024-11-02 14:51:55.081720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.142 qpair failed and we were unable to recover it. 00:36:03.142 [2024-11-02 14:51:55.081842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.142 [2024-11-02 14:51:55.081869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.142 qpair failed and we were unable to recover it. 
00:36:03.142 [2024-11-02 14:51:55.082019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.142 [2024-11-02 14:51:55.082045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.142 qpair failed and we were unable to recover it. 00:36:03.142 [2024-11-02 14:51:55.082220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.142 [2024-11-02 14:51:55.082247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.142 qpair failed and we were unable to recover it. 00:36:03.142 [2024-11-02 14:51:55.082408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.142 [2024-11-02 14:51:55.082434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.142 qpair failed and we were unable to recover it. 00:36:03.142 [2024-11-02 14:51:55.082558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.142 [2024-11-02 14:51:55.082583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.142 qpair failed and we were unable to recover it. 00:36:03.142 [2024-11-02 14:51:55.082698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.142 [2024-11-02 14:51:55.082723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.142 qpair failed and we were unable to recover it. 00:36:03.142 [2024-11-02 14:51:55.082849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.142 [2024-11-02 14:51:55.082876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.142 qpair failed and we were unable to recover it. 00:36:03.142 [2024-11-02 14:51:55.083064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.142 [2024-11-02 14:51:55.083091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.142 qpair failed and we were unable to recover it. 00:36:03.142 [2024-11-02 14:51:55.083235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.142 [2024-11-02 14:51:55.083268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.142 qpair failed and we were unable to recover it. 00:36:03.142 [2024-11-02 14:51:55.083414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.142 [2024-11-02 14:51:55.083441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.142 qpair failed and we were unable to recover it. 00:36:03.142 [2024-11-02 14:51:55.083597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.142 [2024-11-02 14:51:55.083623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.142 qpair failed and we were unable to recover it. 
00:36:03.142 [2024-11-02 14:51:55.083795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.142 [2024-11-02 14:51:55.083822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.142 qpair failed and we were unable to recover it. 00:36:03.142 [2024-11-02 14:51:55.083969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.142 [2024-11-02 14:51:55.083995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.142 qpair failed and we were unable to recover it. 00:36:03.142 [2024-11-02 14:51:55.084114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.142 [2024-11-02 14:51:55.084140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.142 qpair failed and we were unable to recover it. 00:36:03.142 [2024-11-02 14:51:55.084285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.142 [2024-11-02 14:51:55.084312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.142 qpair failed and we were unable to recover it. 00:36:03.142 [2024-11-02 14:51:55.084433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.142 [2024-11-02 14:51:55.084459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.142 qpair failed and we were unable to recover it. 00:36:03.142 [2024-11-02 14:51:55.084584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.142 [2024-11-02 14:51:55.084610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.142 qpair failed and we were unable to recover it. 00:36:03.142 [2024-11-02 14:51:55.084734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.142 [2024-11-02 14:51:55.084760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.142 qpair failed and we were unable to recover it. 00:36:03.142 [2024-11-02 14:51:55.084940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.142 [2024-11-02 14:51:55.084966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.142 qpair failed and we were unable to recover it. 00:36:03.142 [2024-11-02 14:51:55.085093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.142 [2024-11-02 14:51:55.085120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.142 qpair failed and we were unable to recover it. 00:36:03.142 [2024-11-02 14:51:55.085270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.142 [2024-11-02 14:51:55.085306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.142 qpair failed and we were unable to recover it. 
00:36:03.142 [2024-11-02 14:51:55.085452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.142 [2024-11-02 14:51:55.085478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.142 qpair failed and we were unable to recover it. 00:36:03.142 [2024-11-02 14:51:55.085631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.142 [2024-11-02 14:51:55.085658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.142 qpair failed and we were unable to recover it. 00:36:03.142 [2024-11-02 14:51:55.085774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.142 [2024-11-02 14:51:55.085800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.142 qpair failed and we were unable to recover it. 00:36:03.142 [2024-11-02 14:51:55.085952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.142 [2024-11-02 14:51:55.085978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.142 qpair failed and we were unable to recover it. 00:36:03.142 [2024-11-02 14:51:55.086102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.143 [2024-11-02 14:51:55.086130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.143 qpair failed and we were unable to recover it. 00:36:03.143 [2024-11-02 14:51:55.086290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.143 [2024-11-02 14:51:55.086316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.143 qpair failed and we were unable to recover it. 00:36:03.143 [2024-11-02 14:51:55.086461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.143 [2024-11-02 14:51:55.086486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.143 qpair failed and we were unable to recover it. 00:36:03.143 [2024-11-02 14:51:55.086658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.143 [2024-11-02 14:51:55.086683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.143 qpair failed and we were unable to recover it. 00:36:03.143 [2024-11-02 14:51:55.086807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.143 [2024-11-02 14:51:55.086833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.143 qpair failed and we were unable to recover it. 00:36:03.143 [2024-11-02 14:51:55.086984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.143 [2024-11-02 14:51:55.087010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.143 qpair failed and we were unable to recover it. 
00:36:03.143 [2024-11-02 14:51:55.087166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.143 [2024-11-02 14:51:55.087191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.143 qpair failed and we were unable to recover it. 00:36:03.143 [2024-11-02 14:51:55.087346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.143 [2024-11-02 14:51:55.087372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.143 qpair failed and we were unable to recover it. 00:36:03.143 [2024-11-02 14:51:55.087548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.143 [2024-11-02 14:51:55.087575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.143 qpair failed and we were unable to recover it. 00:36:03.143 [2024-11-02 14:51:55.087735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.143 [2024-11-02 14:51:55.087761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.143 qpair failed and we were unable to recover it. 00:36:03.143 [2024-11-02 14:51:55.087921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.143 [2024-11-02 14:51:55.087947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.143 qpair failed and we were unable to recover it. 00:36:03.143 [2024-11-02 14:51:55.088118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.143 [2024-11-02 14:51:55.088144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.143 qpair failed and we were unable to recover it. 00:36:03.143 [2024-11-02 14:51:55.088299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.143 [2024-11-02 14:51:55.088325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.143 qpair failed and we were unable to recover it. 00:36:03.143 [2024-11-02 14:51:55.088476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.143 [2024-11-02 14:51:55.088503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.143 qpair failed and we were unable to recover it. 00:36:03.143 [2024-11-02 14:51:55.088650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.143 [2024-11-02 14:51:55.088676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.143 qpair failed and we were unable to recover it. 00:36:03.143 [2024-11-02 14:51:55.088818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.143 [2024-11-02 14:51:55.088843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.143 qpair failed and we were unable to recover it. 
00:36:03.143 [2024-11-02 14:51:55.088970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.143 [2024-11-02 14:51:55.088995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.143 qpair failed and we were unable to recover it. 00:36:03.143 [2024-11-02 14:51:55.089145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.143 [2024-11-02 14:51:55.089172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.143 qpair failed and we were unable to recover it. 00:36:03.143 [2024-11-02 14:51:55.089319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.143 [2024-11-02 14:51:55.089345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.143 qpair failed and we were unable to recover it. 00:36:03.143 [2024-11-02 14:51:55.089498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.143 [2024-11-02 14:51:55.089525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.143 qpair failed and we were unable to recover it. 00:36:03.143 [2024-11-02 14:51:55.089644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.143 [2024-11-02 14:51:55.089669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.143 qpair failed and we were unable to recover it. 00:36:03.143 [2024-11-02 14:51:55.089846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.143 [2024-11-02 14:51:55.089872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.143 qpair failed and we were unable to recover it. 00:36:03.143 [2024-11-02 14:51:55.090023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.143 [2024-11-02 14:51:55.090052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.143 qpair failed and we were unable to recover it. 00:36:03.143 [2024-11-02 14:51:55.090202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.143 [2024-11-02 14:51:55.090227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.143 qpair failed and we were unable to recover it. 00:36:03.143 [2024-11-02 14:51:55.090371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.143 [2024-11-02 14:51:55.090398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.143 qpair failed and we were unable to recover it. 00:36:03.143 [2024-11-02 14:51:55.090524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.143 [2024-11-02 14:51:55.090551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.143 qpair failed and we were unable to recover it. 
00:36:03.143 [2024-11-02 14:51:55.090711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.143 [2024-11-02 14:51:55.090737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.143 qpair failed and we were unable to recover it. 00:36:03.143 [2024-11-02 14:51:55.090896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.143 [2024-11-02 14:51:55.090922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.143 qpair failed and we were unable to recover it. 00:36:03.143 [2024-11-02 14:51:55.091042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.143 [2024-11-02 14:51:55.091069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.143 qpair failed and we were unable to recover it. 00:36:03.143 [2024-11-02 14:51:55.091224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.143 [2024-11-02 14:51:55.091250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.143 qpair failed and we were unable to recover it. 00:36:03.143 [2024-11-02 14:51:55.091406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.143 [2024-11-02 14:51:55.091432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.143 qpair failed and we were unable to recover it. 00:36:03.143 [2024-11-02 14:51:55.091576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.143 [2024-11-02 14:51:55.091601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.143 qpair failed and we were unable to recover it. 00:36:03.143 [2024-11-02 14:51:55.091754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.143 [2024-11-02 14:51:55.091779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.144 qpair failed and we were unable to recover it. 00:36:03.144 [2024-11-02 14:51:55.091930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.144 [2024-11-02 14:51:55.091956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.144 qpair failed and we were unable to recover it. 00:36:03.144 [2024-11-02 14:51:55.092132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.144 [2024-11-02 14:51:55.092158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.144 qpair failed and we were unable to recover it. 00:36:03.144 [2024-11-02 14:51:55.092309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.144 [2024-11-02 14:51:55.092336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.144 qpair failed and we were unable to recover it. 
00:36:03.144 [2024-11-02 14:51:55.092490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.144 [2024-11-02 14:51:55.092516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.144 qpair failed and we were unable to recover it. 00:36:03.144 [2024-11-02 14:51:55.092672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.144 [2024-11-02 14:51:55.092699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.144 qpair failed and we were unable to recover it. 00:36:03.144 [2024-11-02 14:51:55.092845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.144 [2024-11-02 14:51:55.092870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.144 qpair failed and we were unable to recover it. 00:36:03.144 [2024-11-02 14:51:55.093018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.144 [2024-11-02 14:51:55.093044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.144 qpair failed and we were unable to recover it. 00:36:03.144 [2024-11-02 14:51:55.093189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.144 [2024-11-02 14:51:55.093216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.144 qpair failed and we were unable to recover it. 00:36:03.144 [2024-11-02 14:51:55.093348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.144 [2024-11-02 14:51:55.093375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.144 qpair failed and we were unable to recover it. 00:36:03.144 [2024-11-02 14:51:55.093540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.144 [2024-11-02 14:51:55.093565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.144 qpair failed and we were unable to recover it. 00:36:03.144 [2024-11-02 14:51:55.093685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.144 [2024-11-02 14:51:55.093710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.144 qpair failed and we were unable to recover it. 00:36:03.144 [2024-11-02 14:51:55.093864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.144 [2024-11-02 14:51:55.093889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.144 qpair failed and we were unable to recover it. 00:36:03.144 [2024-11-02 14:51:55.094066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.144 [2024-11-02 14:51:55.094092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.144 qpair failed and we were unable to recover it. 
00:36:03.144 [2024-11-02 14:51:55.094214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.144 [2024-11-02 14:51:55.094240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.144 qpair failed and we were unable to recover it. 00:36:03.144 [2024-11-02 14:51:55.094388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.144 [2024-11-02 14:51:55.094414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.144 qpair failed and we were unable to recover it. 00:36:03.144 [2024-11-02 14:51:55.094585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.144 [2024-11-02 14:51:55.094611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.144 qpair failed and we were unable to recover it. 00:36:03.144 [2024-11-02 14:51:55.094723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.144 [2024-11-02 14:51:55.094749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.144 qpair failed and we were unable to recover it. 00:36:03.144 [2024-11-02 14:51:55.094903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.144 [2024-11-02 14:51:55.094929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.144 qpair failed and we were unable to recover it. 00:36:03.144 [2024-11-02 14:51:55.095039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.144 [2024-11-02 14:51:55.095066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.144 qpair failed and we were unable to recover it. 00:36:03.144 [2024-11-02 14:51:55.095187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.144 [2024-11-02 14:51:55.095212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.144 qpair failed and we were unable to recover it. 00:36:03.144 [2024-11-02 14:51:55.095383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.144 [2024-11-02 14:51:55.095409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.144 qpair failed and we were unable to recover it. 00:36:03.144 [2024-11-02 14:51:55.095576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.144 [2024-11-02 14:51:55.095601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.144 qpair failed and we were unable to recover it. 00:36:03.144 [2024-11-02 14:51:55.095743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.144 [2024-11-02 14:51:55.095769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.144 qpair failed and we were unable to recover it. 
00:36:03.144 [2024-11-02 14:51:55.095911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.144 [2024-11-02 14:51:55.095937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.144 qpair failed and we were unable to recover it. 00:36:03.144 [2024-11-02 14:51:55.096115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.144 [2024-11-02 14:51:55.096142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.144 qpair failed and we were unable to recover it. 00:36:03.144 [2024-11-02 14:51:55.096297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.144 [2024-11-02 14:51:55.096323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.144 qpair failed and we were unable to recover it. 00:36:03.144 [2024-11-02 14:51:55.096467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.144 [2024-11-02 14:51:55.096493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.144 qpair failed and we were unable to recover it. 00:36:03.144 [2024-11-02 14:51:55.096668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.144 [2024-11-02 14:51:55.096695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.144 qpair failed and we were unable to recover it. 00:36:03.144 [2024-11-02 14:51:55.096845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.144 [2024-11-02 14:51:55.096879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.144 qpair failed and we were unable to recover it. 00:36:03.144 [2024-11-02 14:51:55.097030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.144 [2024-11-02 14:51:55.097057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.144 qpair failed and we were unable to recover it. 00:36:03.144 [2024-11-02 14:51:55.097209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.144 [2024-11-02 14:51:55.097240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.144 qpair failed and we were unable to recover it. 00:36:03.144 [2024-11-02 14:51:55.097374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.144 [2024-11-02 14:51:55.097400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.144 qpair failed and we were unable to recover it. 00:36:03.144 [2024-11-02 14:51:55.097530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.144 [2024-11-02 14:51:55.097557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.144 qpair failed and we were unable to recover it. 
00:36:03.144 [2024-11-02 14:51:55.097682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.144 [2024-11-02 14:51:55.097710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.144 qpair failed and we were unable to recover it. 00:36:03.144 [2024-11-02 14:51:55.097847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.144 [2024-11-02 14:51:55.097874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.144 qpair failed and we were unable to recover it. 00:36:03.144 [2024-11-02 14:51:55.098029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.144 [2024-11-02 14:51:55.098055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.144 qpair failed and we were unable to recover it. 00:36:03.144 [2024-11-02 14:51:55.098210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.144 [2024-11-02 14:51:55.098242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.144 qpair failed and we were unable to recover it. 00:36:03.144 [2024-11-02 14:51:55.098388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.144 [2024-11-02 14:51:55.098416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.144 qpair failed and we were unable to recover it. 00:36:03.144 [2024-11-02 14:51:55.098543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.145 [2024-11-02 14:51:55.098569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.145 qpair failed and we were unable to recover it. 00:36:03.145 [2024-11-02 14:51:55.098718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.145 [2024-11-02 14:51:55.098745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.145 qpair failed and we were unable to recover it. 00:36:03.145 [2024-11-02 14:51:55.098863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.145 [2024-11-02 14:51:55.098889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.145 qpair failed and we were unable to recover it. 00:36:03.145 [2024-11-02 14:51:55.099057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.145 [2024-11-02 14:51:55.099084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.145 qpair failed and we were unable to recover it. 00:36:03.145 [2024-11-02 14:51:55.099251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.145 [2024-11-02 14:51:55.099293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.145 qpair failed and we were unable to recover it. 
00:36:03.145 [2024-11-02 14:51:55.099427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.145 [2024-11-02 14:51:55.099454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420
00:36:03.145 qpair failed and we were unable to recover it.
[log condensed: the same three-line error group -- posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." -- repeats back-to-back for every reconnect attempt from [2024-11-02 14:51:55.099427] through [2024-11-02 14:51:55.135608]; only the timestamps advance, and each attempt ends with the same "qpair failed and we were unable to recover it." line.]
00:36:03.436 [2024-11-02 14:51:55.135718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.436 [2024-11-02 14:51:55.135743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.436 qpair failed and we were unable to recover it. 00:36:03.436 [2024-11-02 14:51:55.135889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.436 [2024-11-02 14:51:55.135915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.436 qpair failed and we were unable to recover it. 00:36:03.436 [2024-11-02 14:51:55.136084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.436 [2024-11-02 14:51:55.136110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.436 qpair failed and we were unable to recover it. 00:36:03.436 [2024-11-02 14:51:55.136232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.436 [2024-11-02 14:51:55.136264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.436 qpair failed and we were unable to recover it. 00:36:03.436 [2024-11-02 14:51:55.136421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.436 [2024-11-02 14:51:55.136447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.436 qpair failed and we were unable to recover it. 00:36:03.436 [2024-11-02 14:51:55.136570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.436 [2024-11-02 14:51:55.136604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.436 qpair failed and we were unable to recover it. 00:36:03.436 [2024-11-02 14:51:55.136758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.436 [2024-11-02 14:51:55.136785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.436 qpair failed and we were unable to recover it. 00:36:03.436 [2024-11-02 14:51:55.136936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.436 [2024-11-02 14:51:55.136963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.436 qpair failed and we were unable to recover it. 00:36:03.436 [2024-11-02 14:51:55.137111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.436 [2024-11-02 14:51:55.137136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.436 qpair failed and we were unable to recover it. 00:36:03.436 [2024-11-02 14:51:55.137294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.436 [2024-11-02 14:51:55.137321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.436 qpair failed and we were unable to recover it. 
00:36:03.436 [2024-11-02 14:51:55.137439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.436 [2024-11-02 14:51:55.137464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.436 qpair failed and we were unable to recover it. 00:36:03.436 [2024-11-02 14:51:55.137620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.436 [2024-11-02 14:51:55.137645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.436 qpair failed and we were unable to recover it. 00:36:03.436 [2024-11-02 14:51:55.137768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.436 [2024-11-02 14:51:55.137793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.436 qpair failed and we were unable to recover it. 00:36:03.436 [2024-11-02 14:51:55.137938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.436 [2024-11-02 14:51:55.137964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.436 qpair failed and we were unable to recover it. 00:36:03.436 [2024-11-02 14:51:55.138108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.436 [2024-11-02 14:51:55.138133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.436 qpair failed and we were unable to recover it. 00:36:03.436 [2024-11-02 14:51:55.138267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.436 [2024-11-02 14:51:55.138293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.436 qpair failed and we were unable to recover it. 00:36:03.436 [2024-11-02 14:51:55.138418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.436 [2024-11-02 14:51:55.138444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.436 qpair failed and we were unable to recover it. 00:36:03.436 [2024-11-02 14:51:55.138594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.436 [2024-11-02 14:51:55.138620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.436 qpair failed and we were unable to recover it. 00:36:03.436 [2024-11-02 14:51:55.138773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.436 [2024-11-02 14:51:55.138799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.436 qpair failed and we were unable to recover it. 00:36:03.436 [2024-11-02 14:51:55.138935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.436 [2024-11-02 14:51:55.138961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.436 qpair failed and we were unable to recover it. 
00:36:03.436 [2024-11-02 14:51:55.139086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.436 [2024-11-02 14:51:55.139113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.436 qpair failed and we were unable to recover it. 00:36:03.436 [2024-11-02 14:51:55.139239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.436 [2024-11-02 14:51:55.139272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.436 qpair failed and we were unable to recover it. 00:36:03.436 [2024-11-02 14:51:55.139435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.436 [2024-11-02 14:51:55.139460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.436 qpair failed and we were unable to recover it. 00:36:03.436 [2024-11-02 14:51:55.139608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.436 [2024-11-02 14:51:55.139634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.436 qpair failed and we were unable to recover it. 00:36:03.436 [2024-11-02 14:51:55.139787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.436 [2024-11-02 14:51:55.139813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.436 qpair failed and we were unable to recover it. 00:36:03.436 [2024-11-02 14:51:55.139959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.436 [2024-11-02 14:51:55.139985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.436 qpair failed and we were unable to recover it. 00:36:03.436 [2024-11-02 14:51:55.140110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.436 [2024-11-02 14:51:55.140136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.437 qpair failed and we were unable to recover it. 00:36:03.437 [2024-11-02 14:51:55.140291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.437 [2024-11-02 14:51:55.140317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.437 qpair failed and we were unable to recover it. 00:36:03.437 [2024-11-02 14:51:55.140441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.437 [2024-11-02 14:51:55.140466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.437 qpair failed and we were unable to recover it. 00:36:03.437 [2024-11-02 14:51:55.140633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.437 [2024-11-02 14:51:55.140658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.437 qpair failed and we were unable to recover it. 
00:36:03.437 [2024-11-02 14:51:55.140836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.437 [2024-11-02 14:51:55.140862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.437 qpair failed and we were unable to recover it. 00:36:03.437 [2024-11-02 14:51:55.141012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.437 [2024-11-02 14:51:55.141037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.437 qpair failed and we were unable to recover it. 00:36:03.437 [2024-11-02 14:51:55.141185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.437 [2024-11-02 14:51:55.141210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.437 qpair failed and we were unable to recover it. 00:36:03.437 [2024-11-02 14:51:55.141321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.437 [2024-11-02 14:51:55.141348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.437 qpair failed and we were unable to recover it. 00:36:03.437 [2024-11-02 14:51:55.141463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.437 [2024-11-02 14:51:55.141488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.437 qpair failed and we were unable to recover it. 00:36:03.437 [2024-11-02 14:51:55.141635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.437 [2024-11-02 14:51:55.141660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.437 qpair failed and we were unable to recover it. 00:36:03.437 [2024-11-02 14:51:55.141812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.437 [2024-11-02 14:51:55.141838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.437 qpair failed and we were unable to recover it. 00:36:03.437 [2024-11-02 14:51:55.141959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.437 [2024-11-02 14:51:55.141984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.437 qpair failed and we were unable to recover it. 00:36:03.437 [2024-11-02 14:51:55.142123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.437 [2024-11-02 14:51:55.142148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.437 qpair failed and we were unable to recover it. 00:36:03.437 [2024-11-02 14:51:55.142321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.437 [2024-11-02 14:51:55.142347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.437 qpair failed and we were unable to recover it. 
00:36:03.437 [2024-11-02 14:51:55.142471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.437 [2024-11-02 14:51:55.142498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.437 qpair failed and we were unable to recover it. 00:36:03.437 [2024-11-02 14:51:55.142647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.437 [2024-11-02 14:51:55.142673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.437 qpair failed and we were unable to recover it. 00:36:03.437 [2024-11-02 14:51:55.142823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.437 [2024-11-02 14:51:55.142856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.437 qpair failed and we were unable to recover it. 00:36:03.437 [2024-11-02 14:51:55.143034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.437 [2024-11-02 14:51:55.143059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.437 qpair failed and we were unable to recover it. 00:36:03.437 [2024-11-02 14:51:55.143216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.437 [2024-11-02 14:51:55.143241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.437 qpair failed and we were unable to recover it. 00:36:03.437 [2024-11-02 14:51:55.143403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.437 [2024-11-02 14:51:55.143428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.437 qpair failed and we were unable to recover it. 00:36:03.437 [2024-11-02 14:51:55.143555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.437 [2024-11-02 14:51:55.143581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.437 qpair failed and we were unable to recover it. 00:36:03.437 [2024-11-02 14:51:55.143731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.437 [2024-11-02 14:51:55.143763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.437 qpair failed and we were unable to recover it. 00:36:03.437 [2024-11-02 14:51:55.143908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.437 [2024-11-02 14:51:55.143934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.437 qpair failed and we were unable to recover it. 00:36:03.437 [2024-11-02 14:51:55.144055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.437 [2024-11-02 14:51:55.144084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.437 qpair failed and we were unable to recover it. 
00:36:03.437 [2024-11-02 14:51:55.144199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.437 [2024-11-02 14:51:55.144224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.437 qpair failed and we were unable to recover it. 00:36:03.437 [2024-11-02 14:51:55.144399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.437 [2024-11-02 14:51:55.144425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.437 qpair failed and we were unable to recover it. 00:36:03.437 [2024-11-02 14:51:55.144575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.437 [2024-11-02 14:51:55.144600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.437 qpair failed and we were unable to recover it. 00:36:03.437 [2024-11-02 14:51:55.144720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.437 [2024-11-02 14:51:55.144745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.437 qpair failed and we were unable to recover it. 00:36:03.437 [2024-11-02 14:51:55.144865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.437 [2024-11-02 14:51:55.144891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.437 qpair failed and we were unable to recover it. 00:36:03.437 [2024-11-02 14:51:55.145012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.437 [2024-11-02 14:51:55.145037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.437 qpair failed and we were unable to recover it. 00:36:03.437 [2024-11-02 14:51:55.145188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.437 [2024-11-02 14:51:55.145213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.437 qpair failed and we were unable to recover it. 00:36:03.437 [2024-11-02 14:51:55.145368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.437 [2024-11-02 14:51:55.145394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.437 qpair failed and we were unable to recover it. 00:36:03.437 [2024-11-02 14:51:55.145520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.437 [2024-11-02 14:51:55.145546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.437 qpair failed and we were unable to recover it. 00:36:03.437 [2024-11-02 14:51:55.145669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.437 [2024-11-02 14:51:55.145695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.437 qpair failed and we were unable to recover it. 
00:36:03.437 [2024-11-02 14:51:55.145841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.437 [2024-11-02 14:51:55.145866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.437 qpair failed and we were unable to recover it. 00:36:03.437 [2024-11-02 14:51:55.145989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.437 [2024-11-02 14:51:55.146016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.437 qpair failed and we were unable to recover it. 00:36:03.437 [2024-11-02 14:51:55.146163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.437 [2024-11-02 14:51:55.146188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.437 qpair failed and we were unable to recover it. 00:36:03.437 [2024-11-02 14:51:55.146344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.437 [2024-11-02 14:51:55.146371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.437 qpair failed and we were unable to recover it. 00:36:03.437 [2024-11-02 14:51:55.146502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.437 [2024-11-02 14:51:55.146528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.437 qpair failed and we were unable to recover it. 00:36:03.437 [2024-11-02 14:51:55.146646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.438 [2024-11-02 14:51:55.146673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.438 qpair failed and we were unable to recover it. 00:36:03.438 [2024-11-02 14:51:55.146836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.438 [2024-11-02 14:51:55.146862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.438 qpair failed and we were unable to recover it. 00:36:03.438 [2024-11-02 14:51:55.146988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.438 [2024-11-02 14:51:55.147013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.438 qpair failed and we were unable to recover it. 00:36:03.438 [2024-11-02 14:51:55.147190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.438 [2024-11-02 14:51:55.147215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.438 qpair failed and we were unable to recover it. 00:36:03.438 [2024-11-02 14:51:55.147399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.438 [2024-11-02 14:51:55.147426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.438 qpair failed and we were unable to recover it. 
00:36:03.438 [2024-11-02 14:51:55.147575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.438 [2024-11-02 14:51:55.147602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.438 qpair failed and we were unable to recover it. 00:36:03.438 [2024-11-02 14:51:55.147729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.438 [2024-11-02 14:51:55.147754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.438 qpair failed and we were unable to recover it. 00:36:03.438 [2024-11-02 14:51:55.147878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.438 [2024-11-02 14:51:55.147905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.438 qpair failed and we were unable to recover it. 00:36:03.438 [2024-11-02 14:51:55.148022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.438 [2024-11-02 14:51:55.148048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.438 qpair failed and we were unable to recover it. 00:36:03.438 [2024-11-02 14:51:55.148194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.438 [2024-11-02 14:51:55.148219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.438 qpair failed and we were unable to recover it. 00:36:03.438 [2024-11-02 14:51:55.148381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.438 [2024-11-02 14:51:55.148408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.438 qpair failed and we were unable to recover it. 00:36:03.438 [2024-11-02 14:51:55.148568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.438 [2024-11-02 14:51:55.148594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.438 qpair failed and we were unable to recover it. 00:36:03.438 [2024-11-02 14:51:55.148721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.438 [2024-11-02 14:51:55.148747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.438 qpair failed and we were unable to recover it. 00:36:03.438 [2024-11-02 14:51:55.148899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.438 [2024-11-02 14:51:55.148924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.438 qpair failed and we were unable to recover it. 00:36:03.438 [2024-11-02 14:51:55.149049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.438 [2024-11-02 14:51:55.149074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.438 qpair failed and we were unable to recover it. 
00:36:03.438 [2024-11-02 14:51:55.149215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.438 [2024-11-02 14:51:55.149240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.438 qpair failed and we were unable to recover it. 00:36:03.438 [2024-11-02 14:51:55.149405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.438 [2024-11-02 14:51:55.149431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.438 qpair failed and we were unable to recover it. 00:36:03.438 [2024-11-02 14:51:55.149600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.438 [2024-11-02 14:51:55.149626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.438 qpair failed and we were unable to recover it. 00:36:03.438 [2024-11-02 14:51:55.149751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.438 [2024-11-02 14:51:55.149776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.438 qpair failed and we were unable to recover it. 00:36:03.438 [2024-11-02 14:51:55.149953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.438 [2024-11-02 14:51:55.149979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.438 qpair failed and we were unable to recover it. 00:36:03.438 [2024-11-02 14:51:55.150130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.438 [2024-11-02 14:51:55.150155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.438 qpair failed and we were unable to recover it. 00:36:03.438 [2024-11-02 14:51:55.150334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.438 [2024-11-02 14:51:55.150360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.438 qpair failed and we were unable to recover it. 00:36:03.438 [2024-11-02 14:51:55.150489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.438 [2024-11-02 14:51:55.150514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.438 qpair failed and we were unable to recover it. 00:36:03.438 [2024-11-02 14:51:55.150635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.438 [2024-11-02 14:51:55.150660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.438 qpair failed and we were unable to recover it. 00:36:03.438 [2024-11-02 14:51:55.150775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.438 [2024-11-02 14:51:55.150801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.438 qpair failed and we were unable to recover it. 
00:36:03.438 [2024-11-02 14:51:55.150928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.438 [2024-11-02 14:51:55.150956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.438 qpair failed and we were unable to recover it. 00:36:03.438 [2024-11-02 14:51:55.151102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.438 [2024-11-02 14:51:55.151127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.438 qpair failed and we were unable to recover it. 00:36:03.438 [2024-11-02 14:51:55.151302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.438 [2024-11-02 14:51:55.151329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.438 qpair failed and we were unable to recover it. 00:36:03.438 [2024-11-02 14:51:55.151490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.438 [2024-11-02 14:51:55.151516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.438 qpair failed and we were unable to recover it. 00:36:03.438 [2024-11-02 14:51:55.151637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.438 [2024-11-02 14:51:55.151663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.438 qpair failed and we were unable to recover it. 00:36:03.438 [2024-11-02 14:51:55.151809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.438 [2024-11-02 14:51:55.151834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.438 qpair failed and we were unable to recover it. 00:36:03.438 [2024-11-02 14:51:55.151953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.438 [2024-11-02 14:51:55.151979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.438 qpair failed and we were unable to recover it. 00:36:03.438 [2024-11-02 14:51:55.152161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.438 [2024-11-02 14:51:55.152187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.438 qpair failed and we were unable to recover it. 00:36:03.438 [2024-11-02 14:51:55.152337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.438 [2024-11-02 14:51:55.152363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.438 qpair failed and we were unable to recover it. 00:36:03.438 [2024-11-02 14:51:55.152482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.438 [2024-11-02 14:51:55.152508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.438 qpair failed and we were unable to recover it. 
00:36:03.438 [2024-11-02 14:51:55.152644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.438 [2024-11-02 14:51:55.152670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.438 qpair failed and we were unable to recover it. 00:36:03.438 [2024-11-02 14:51:55.152809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.438 [2024-11-02 14:51:55.152835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.438 qpair failed and we were unable to recover it. 00:36:03.438 [2024-11-02 14:51:55.153011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.438 [2024-11-02 14:51:55.153036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.438 qpair failed and we were unable to recover it. 00:36:03.438 [2024-11-02 14:51:55.153211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.438 [2024-11-02 14:51:55.153237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.438 qpair failed and we were unable to recover it. 00:36:03.438 [2024-11-02 14:51:55.153370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.439 [2024-11-02 14:51:55.153396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.439 qpair failed and we were unable to recover it. 00:36:03.439 [2024-11-02 14:51:55.153543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.439 [2024-11-02 14:51:55.153570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.439 qpair failed and we were unable to recover it. 00:36:03.439 [2024-11-02 14:51:55.153717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.439 [2024-11-02 14:51:55.153743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.439 qpair failed and we were unable to recover it. 00:36:03.439 [2024-11-02 14:51:55.153917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.439 [2024-11-02 14:51:55.153943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.439 qpair failed and we were unable to recover it. 00:36:03.439 [2024-11-02 14:51:55.154101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.439 [2024-11-02 14:51:55.154127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.439 qpair failed and we were unable to recover it. 00:36:03.439 [2024-11-02 14:51:55.154272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.439 [2024-11-02 14:51:55.154299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.439 qpair failed and we were unable to recover it. 
00:36:03.439 [2024-11-02 14:51:55.154452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.439 [2024-11-02 14:51:55.154478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.439 qpair failed and we were unable to recover it. 00:36:03.439 [2024-11-02 14:51:55.154605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.439 [2024-11-02 14:51:55.154630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.439 qpair failed and we were unable to recover it. 00:36:03.439 [2024-11-02 14:51:55.154779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.439 [2024-11-02 14:51:55.154804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.439 qpair failed and we were unable to recover it. 00:36:03.439 [2024-11-02 14:51:55.154953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.439 [2024-11-02 14:51:55.154980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.439 qpair failed and we were unable to recover it. 00:36:03.439 [2024-11-02 14:51:55.155124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.439 [2024-11-02 14:51:55.155151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.439 qpair failed and we were unable to recover it. 00:36:03.439 [2024-11-02 14:51:55.155279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.439 [2024-11-02 14:51:55.155306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.439 qpair failed and we were unable to recover it. 00:36:03.439 [2024-11-02 14:51:55.155423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.439 [2024-11-02 14:51:55.155449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.439 qpair failed and we were unable to recover it. 00:36:03.439 [2024-11-02 14:51:55.155562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.439 [2024-11-02 14:51:55.155595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.439 qpair failed and we were unable to recover it. 00:36:03.439 [2024-11-02 14:51:55.155720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.439 [2024-11-02 14:51:55.155746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.439 qpair failed and we were unable to recover it. 00:36:03.439 [2024-11-02 14:51:55.155896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.439 [2024-11-02 14:51:55.155922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.439 qpair failed and we were unable to recover it. 
00:36:03.439 [2024-11-02 14:51:55.156072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.439 [2024-11-02 14:51:55.156097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.439 qpair failed and we were unable to recover it. 00:36:03.439 [2024-11-02 14:51:55.156248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.439 [2024-11-02 14:51:55.156280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.439 qpair failed and we were unable to recover it. 00:36:03.439 [2024-11-02 14:51:55.156410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.439 [2024-11-02 14:51:55.156436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.439 qpair failed and we were unable to recover it. 00:36:03.439 [2024-11-02 14:51:55.156586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.439 [2024-11-02 14:51:55.156612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.439 qpair failed and we were unable to recover it. 00:36:03.439 [2024-11-02 14:51:55.156762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.439 [2024-11-02 14:51:55.156788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.439 qpair failed and we were unable to recover it. 00:36:03.439 [2024-11-02 14:51:55.156910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.439 [2024-11-02 14:51:55.156936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.439 qpair failed and we were unable to recover it. 00:36:03.439 [2024-11-02 14:51:55.157063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.439 [2024-11-02 14:51:55.157089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.439 qpair failed and we were unable to recover it. 00:36:03.439 [2024-11-02 14:51:55.157212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.439 [2024-11-02 14:51:55.157238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.439 qpair failed and we were unable to recover it. 00:36:03.439 [2024-11-02 14:51:55.157365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.439 [2024-11-02 14:51:55.157390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.439 qpair failed and we were unable to recover it. 00:36:03.439 [2024-11-02 14:51:55.157521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.439 [2024-11-02 14:51:55.157546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.439 qpair failed and we were unable to recover it. 
00:36:03.439 [2024-11-02 14:51:55.157705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.439 [2024-11-02 14:51:55.157730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.439 qpair failed and we were unable to recover it. 00:36:03.439 [2024-11-02 14:51:55.157874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.439 [2024-11-02 14:51:55.157899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.439 qpair failed and we were unable to recover it. 00:36:03.439 [2024-11-02 14:51:55.158029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.439 [2024-11-02 14:51:55.158054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.439 qpair failed and we were unable to recover it. 00:36:03.439 [2024-11-02 14:51:55.158200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.439 [2024-11-02 14:51:55.158225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.439 qpair failed and we were unable to recover it. 00:36:03.439 [2024-11-02 14:51:55.158366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.439 [2024-11-02 14:51:55.158393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.439 qpair failed and we were unable to recover it. 00:36:03.439 [2024-11-02 14:51:55.158541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.439 [2024-11-02 14:51:55.158566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.439 qpair failed and we were unable to recover it. 00:36:03.439 [2024-11-02 14:51:55.158714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.439 [2024-11-02 14:51:55.158740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.439 qpair failed and we were unable to recover it. 00:36:03.439 [2024-11-02 14:51:55.158858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.439 [2024-11-02 14:51:55.158884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.439 qpair failed and we were unable to recover it. 00:36:03.439 [2024-11-02 14:51:55.159038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.439 [2024-11-02 14:51:55.159064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.439 qpair failed and we were unable to recover it. 00:36:03.439 [2024-11-02 14:51:55.159215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.439 [2024-11-02 14:51:55.159242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.439 qpair failed and we were unable to recover it. 
[... the same three-line failure sequence (posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every reconnection attempt logged from 14:51:55.159 through 14:51:55.193; only the final occurrence is kept below ...]
00:36:03.445 [2024-11-02 14:51:55.193661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.445 [2024-11-02 14:51:55.193687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420
00:36:03.445 qpair failed and we were unable to recover it.
00:36:03.445 [2024-11-02 14:51:55.193837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.445 [2024-11-02 14:51:55.193862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.445 qpair failed and we were unable to recover it. 00:36:03.445 [2024-11-02 14:51:55.194014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.445 [2024-11-02 14:51:55.194040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.445 qpair failed and we were unable to recover it. 00:36:03.445 [2024-11-02 14:51:55.194214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.445 [2024-11-02 14:51:55.194240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.445 qpair failed and we were unable to recover it. 00:36:03.445 [2024-11-02 14:51:55.194378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.445 [2024-11-02 14:51:55.194405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.445 qpair failed and we were unable to recover it. 00:36:03.445 [2024-11-02 14:51:55.194560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.445 [2024-11-02 14:51:55.194585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.445 qpair failed and we were unable to recover it. 00:36:03.445 [2024-11-02 14:51:55.194713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.445 [2024-11-02 14:51:55.194739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.445 qpair failed and we were unable to recover it. 00:36:03.445 [2024-11-02 14:51:55.194868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.445 [2024-11-02 14:51:55.194893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.445 qpair failed and we were unable to recover it. 00:36:03.445 [2024-11-02 14:51:55.195043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.445 [2024-11-02 14:51:55.195077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.445 qpair failed and we were unable to recover it. 00:36:03.445 [2024-11-02 14:51:55.195262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.445 [2024-11-02 14:51:55.195289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.445 qpair failed and we were unable to recover it. 00:36:03.445 [2024-11-02 14:51:55.195416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.445 [2024-11-02 14:51:55.195441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.445 qpair failed and we were unable to recover it. 
00:36:03.445 [2024-11-02 14:51:55.195597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.445 [2024-11-02 14:51:55.195623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.445 qpair failed and we were unable to recover it. 00:36:03.445 [2024-11-02 14:51:55.195770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.445 [2024-11-02 14:51:55.195796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.445 qpair failed and we were unable to recover it. 00:36:03.445 [2024-11-02 14:51:55.195950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.445 [2024-11-02 14:51:55.195975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.445 qpair failed and we were unable to recover it. 00:36:03.445 [2024-11-02 14:51:55.196094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.445 [2024-11-02 14:51:55.196120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.445 qpair failed and we were unable to recover it. 00:36:03.445 [2024-11-02 14:51:55.196295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.445 [2024-11-02 14:51:55.196322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.445 qpair failed and we were unable to recover it. 00:36:03.445 [2024-11-02 14:51:55.196437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.445 [2024-11-02 14:51:55.196463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.445 qpair failed and we were unable to recover it. 00:36:03.445 [2024-11-02 14:51:55.196615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.445 [2024-11-02 14:51:55.196642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.445 qpair failed and we were unable to recover it. 00:36:03.445 [2024-11-02 14:51:55.196786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.445 [2024-11-02 14:51:55.196812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.445 qpair failed and we were unable to recover it. 00:36:03.445 [2024-11-02 14:51:55.196940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.445 [2024-11-02 14:51:55.196966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.445 qpair failed and we were unable to recover it. 00:36:03.445 [2024-11-02 14:51:55.197130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.445 [2024-11-02 14:51:55.197156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.445 qpair failed and we were unable to recover it. 
00:36:03.445 [2024-11-02 14:51:55.197300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.445 [2024-11-02 14:51:55.197326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.445 qpair failed and we were unable to recover it. 00:36:03.445 [2024-11-02 14:51:55.197480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.445 [2024-11-02 14:51:55.197505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.445 qpair failed and we were unable to recover it. 00:36:03.445 [2024-11-02 14:51:55.197677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.445 [2024-11-02 14:51:55.197707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.445 qpair failed and we were unable to recover it. 00:36:03.445 [2024-11-02 14:51:55.197880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.445 [2024-11-02 14:51:55.197906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.445 qpair failed and we were unable to recover it. 00:36:03.445 [2024-11-02 14:51:55.198050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.445 [2024-11-02 14:51:55.198076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.445 qpair failed and we were unable to recover it. 00:36:03.445 [2024-11-02 14:51:55.198248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.445 [2024-11-02 14:51:55.198280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.445 qpair failed and we were unable to recover it. 00:36:03.445 [2024-11-02 14:51:55.198452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.445 [2024-11-02 14:51:55.198477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.446 qpair failed and we were unable to recover it. 00:36:03.446 [2024-11-02 14:51:55.198600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.446 [2024-11-02 14:51:55.198625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.446 qpair failed and we were unable to recover it. 00:36:03.446 [2024-11-02 14:51:55.198770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.446 [2024-11-02 14:51:55.198795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.446 qpair failed and we were unable to recover it. 00:36:03.446 [2024-11-02 14:51:55.198934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.446 [2024-11-02 14:51:55.198959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.446 qpair failed and we were unable to recover it. 
00:36:03.446 [2024-11-02 14:51:55.199106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.446 [2024-11-02 14:51:55.199132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.446 qpair failed and we were unable to recover it. 00:36:03.446 [2024-11-02 14:51:55.199249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.446 [2024-11-02 14:51:55.199279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.446 qpair failed and we were unable to recover it. 00:36:03.446 [2024-11-02 14:51:55.199399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.446 [2024-11-02 14:51:55.199425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.446 qpair failed and we were unable to recover it. 00:36:03.446 [2024-11-02 14:51:55.199595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.446 [2024-11-02 14:51:55.199620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.446 qpair failed and we were unable to recover it. 00:36:03.446 [2024-11-02 14:51:55.199790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.446 [2024-11-02 14:51:55.199815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.446 qpair failed and we were unable to recover it. 00:36:03.446 [2024-11-02 14:51:55.199944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.446 [2024-11-02 14:51:55.199970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.446 qpair failed and we were unable to recover it. 00:36:03.446 [2024-11-02 14:51:55.200120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.446 [2024-11-02 14:51:55.200145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.446 qpair failed and we were unable to recover it. 00:36:03.446 [2024-11-02 14:51:55.200318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.446 [2024-11-02 14:51:55.200344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.446 qpair failed and we were unable to recover it. 00:36:03.446 [2024-11-02 14:51:55.200469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.446 [2024-11-02 14:51:55.200494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.446 qpair failed and we were unable to recover it. 00:36:03.446 [2024-11-02 14:51:55.200619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.446 [2024-11-02 14:51:55.200644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.446 qpair failed and we were unable to recover it. 
00:36:03.446 [2024-11-02 14:51:55.200765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.446 [2024-11-02 14:51:55.200790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.446 qpair failed and we were unable to recover it. 00:36:03.446 [2024-11-02 14:51:55.200935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.446 [2024-11-02 14:51:55.200960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.446 qpair failed and we were unable to recover it. 00:36:03.446 [2024-11-02 14:51:55.201134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.446 [2024-11-02 14:51:55.201159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.446 qpair failed and we were unable to recover it. 00:36:03.446 [2024-11-02 14:51:55.201312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.446 [2024-11-02 14:51:55.201338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.446 qpair failed and we were unable to recover it. 00:36:03.446 [2024-11-02 14:51:55.201489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.446 [2024-11-02 14:51:55.201514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.446 qpair failed and we were unable to recover it. 00:36:03.446 [2024-11-02 14:51:55.201687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.446 [2024-11-02 14:51:55.201712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.446 qpair failed and we were unable to recover it. 00:36:03.446 [2024-11-02 14:51:55.201838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.446 [2024-11-02 14:51:55.201864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.446 qpair failed and we were unable to recover it. 00:36:03.446 [2024-11-02 14:51:55.202021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.446 [2024-11-02 14:51:55.202046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.446 qpair failed and we were unable to recover it. 00:36:03.446 [2024-11-02 14:51:55.202201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.446 [2024-11-02 14:51:55.202235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.446 qpair failed and we were unable to recover it. 00:36:03.446 [2024-11-02 14:51:55.202383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.446 [2024-11-02 14:51:55.202411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.446 qpair failed and we were unable to recover it. 
00:36:03.446 [2024-11-02 14:51:55.202565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.446 [2024-11-02 14:51:55.202590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.446 qpair failed and we were unable to recover it. 00:36:03.446 [2024-11-02 14:51:55.202710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.446 [2024-11-02 14:51:55.202736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.446 qpair failed and we were unable to recover it. 00:36:03.446 [2024-11-02 14:51:55.202882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.446 [2024-11-02 14:51:55.202908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.446 qpair failed and we were unable to recover it. 00:36:03.446 [2024-11-02 14:51:55.203070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.446 [2024-11-02 14:51:55.203106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.446 qpair failed and we were unable to recover it. 00:36:03.446 [2024-11-02 14:51:55.203278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.446 [2024-11-02 14:51:55.203305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.446 qpair failed and we were unable to recover it. 00:36:03.446 [2024-11-02 14:51:55.203429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.446 [2024-11-02 14:51:55.203455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.446 qpair failed and we were unable to recover it. 00:36:03.446 [2024-11-02 14:51:55.203599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.446 [2024-11-02 14:51:55.203624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.446 qpair failed and we were unable to recover it. 00:36:03.446 [2024-11-02 14:51:55.203800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.446 [2024-11-02 14:51:55.203826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.446 qpair failed and we were unable to recover it. 00:36:03.446 [2024-11-02 14:51:55.203982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.446 [2024-11-02 14:51:55.204009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.446 qpair failed and we were unable to recover it. 00:36:03.446 [2024-11-02 14:51:55.204158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.446 [2024-11-02 14:51:55.204184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.446 qpair failed and we were unable to recover it. 
00:36:03.446 [2024-11-02 14:51:55.204329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.446 [2024-11-02 14:51:55.204356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.446 qpair failed and we were unable to recover it. 00:36:03.446 [2024-11-02 14:51:55.204502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.446 [2024-11-02 14:51:55.204528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.446 qpair failed and we were unable to recover it. 00:36:03.446 [2024-11-02 14:51:55.204660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.446 [2024-11-02 14:51:55.204685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.446 qpair failed and we were unable to recover it. 00:36:03.446 [2024-11-02 14:51:55.204858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.446 [2024-11-02 14:51:55.204901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.446 qpair failed and we were unable to recover it. 00:36:03.446 [2024-11-02 14:51:55.205065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.446 [2024-11-02 14:51:55.205092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.446 qpair failed and we were unable to recover it. 00:36:03.446 [2024-11-02 14:51:55.205244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.447 [2024-11-02 14:51:55.205280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.447 qpair failed and we were unable to recover it. 00:36:03.447 [2024-11-02 14:51:55.205410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.447 [2024-11-02 14:51:55.205437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.447 qpair failed and we were unable to recover it. 00:36:03.447 [2024-11-02 14:51:55.205611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.447 [2024-11-02 14:51:55.205638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.447 qpair failed and we were unable to recover it. 00:36:03.447 [2024-11-02 14:51:55.205787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.447 [2024-11-02 14:51:55.205814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.447 qpair failed and we were unable to recover it. 00:36:03.447 [2024-11-02 14:51:55.205972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.447 [2024-11-02 14:51:55.205998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.447 qpair failed and we were unable to recover it. 
00:36:03.447 [2024-11-02 14:51:55.206172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.447 [2024-11-02 14:51:55.206198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.447 qpair failed and we were unable to recover it. 00:36:03.447 [2024-11-02 14:51:55.206340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.447 [2024-11-02 14:51:55.206367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.447 qpair failed and we were unable to recover it. 00:36:03.447 [2024-11-02 14:51:55.206518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.447 [2024-11-02 14:51:55.206543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.447 qpair failed and we were unable to recover it. 00:36:03.447 [2024-11-02 14:51:55.206691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.447 [2024-11-02 14:51:55.206717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.447 qpair failed and we were unable to recover it. 00:36:03.447 [2024-11-02 14:51:55.206870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.447 [2024-11-02 14:51:55.206896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.447 qpair failed and we were unable to recover it. 00:36:03.447 [2024-11-02 14:51:55.207014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.447 [2024-11-02 14:51:55.207039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.447 qpair failed and we were unable to recover it. 00:36:03.447 [2024-11-02 14:51:55.207186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.447 [2024-11-02 14:51:55.207211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.447 qpair failed and we were unable to recover it. 00:36:03.447 [2024-11-02 14:51:55.207374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.447 [2024-11-02 14:51:55.207401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.447 qpair failed and we were unable to recover it. 00:36:03.447 [2024-11-02 14:51:55.207554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.447 [2024-11-02 14:51:55.207579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.447 qpair failed and we were unable to recover it. 00:36:03.447 [2024-11-02 14:51:55.207743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.447 [2024-11-02 14:51:55.207769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.447 qpair failed and we were unable to recover it. 
00:36:03.447 [2024-11-02 14:51:55.207916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.447 [2024-11-02 14:51:55.207942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.447 qpair failed and we were unable to recover it. 00:36:03.447 [2024-11-02 14:51:55.208065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.447 [2024-11-02 14:51:55.208090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.447 qpair failed and we were unable to recover it. 00:36:03.447 [2024-11-02 14:51:55.208243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.447 [2024-11-02 14:51:55.208283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.447 qpair failed and we were unable to recover it. 00:36:03.447 [2024-11-02 14:51:55.208434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.447 [2024-11-02 14:51:55.208460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.447 qpair failed and we were unable to recover it. 00:36:03.447 [2024-11-02 14:51:55.208585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.447 [2024-11-02 14:51:55.208610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.447 qpair failed and we were unable to recover it. 00:36:03.447 [2024-11-02 14:51:55.208756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.447 [2024-11-02 14:51:55.208782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.447 qpair failed and we were unable to recover it. 00:36:03.447 [2024-11-02 14:51:55.208931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.447 [2024-11-02 14:51:55.208956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.447 qpair failed and we were unable to recover it. 00:36:03.447 [2024-11-02 14:51:55.209102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.447 [2024-11-02 14:51:55.209128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.447 qpair failed and we were unable to recover it. 00:36:03.447 [2024-11-02 14:51:55.209283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.447 [2024-11-02 14:51:55.209309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.447 qpair failed and we were unable to recover it. 00:36:03.447 [2024-11-02 14:51:55.209448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.447 [2024-11-02 14:51:55.209473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.447 qpair failed and we were unable to recover it. 
00:36:03.447 [2024-11-02 14:51:55.209615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.447 [2024-11-02 14:51:55.209644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.447 qpair failed and we were unable to recover it. 00:36:03.447 [2024-11-02 14:51:55.209770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.447 [2024-11-02 14:51:55.209796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.447 qpair failed and we were unable to recover it. 00:36:03.447 [2024-11-02 14:51:55.209946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.447 [2024-11-02 14:51:55.209971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.447 qpair failed and we were unable to recover it. 00:36:03.447 [2024-11-02 14:51:55.210120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.447 [2024-11-02 14:51:55.210145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.447 qpair failed and we were unable to recover it. 00:36:03.447 [2024-11-02 14:51:55.210296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.447 [2024-11-02 14:51:55.210323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.447 qpair failed and we were unable to recover it. 00:36:03.447 [2024-11-02 14:51:55.210448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.447 [2024-11-02 14:51:55.210473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.447 qpair failed and we were unable to recover it. 00:36:03.447 [2024-11-02 14:51:55.210621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.447 [2024-11-02 14:51:55.210646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.447 qpair failed and we were unable to recover it. 00:36:03.447 [2024-11-02 14:51:55.210772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.447 [2024-11-02 14:51:55.210797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.447 qpair failed and we were unable to recover it. 00:36:03.447 [2024-11-02 14:51:55.210916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.447 [2024-11-02 14:51:55.210942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.447 qpair failed and we were unable to recover it. 00:36:03.447 [2024-11-02 14:51:55.211062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.447 [2024-11-02 14:51:55.211087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.447 qpair failed and we were unable to recover it. 
00:36:03.447 [2024-11-02 14:51:55.211262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.447 [2024-11-02 14:51:55.211289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.447 qpair failed and we were unable to recover it. 00:36:03.447 [2024-11-02 14:51:55.211413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.447 [2024-11-02 14:51:55.211438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.447 qpair failed and we were unable to recover it. 00:36:03.447 [2024-11-02 14:51:55.211565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.447 [2024-11-02 14:51:55.211590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.447 qpair failed and we were unable to recover it. 00:36:03.447 [2024-11-02 14:51:55.211735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.447 [2024-11-02 14:51:55.211760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.447 qpair failed and we were unable to recover it. 00:36:03.447 [2024-11-02 14:51:55.211891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.448 [2024-11-02 14:51:55.211917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.448 qpair failed and we were unable to recover it. 00:36:03.448 [2024-11-02 14:51:55.212063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.448 [2024-11-02 14:51:55.212088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.448 qpair failed and we were unable to recover it. 00:36:03.448 [2024-11-02 14:51:55.212217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.448 [2024-11-02 14:51:55.212243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.448 qpair failed and we were unable to recover it. 00:36:03.448 [2024-11-02 14:51:55.212448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.448 [2024-11-02 14:51:55.212488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.448 qpair failed and we were unable to recover it. 00:36:03.448 [2024-11-02 14:51:55.212645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.448 [2024-11-02 14:51:55.212673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.448 qpair failed and we were unable to recover it. 00:36:03.448 [2024-11-02 14:51:55.212833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.448 [2024-11-02 14:51:55.212861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.448 qpair failed and we were unable to recover it. 
00:36:03.448 [2024-11-02 14:51:55.212983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.448 [2024-11-02 14:51:55.213009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.448 qpair failed and we were unable to recover it. 00:36:03.448 [2024-11-02 14:51:55.213189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.448 [2024-11-02 14:51:55.213215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.448 qpair failed and we were unable to recover it. 00:36:03.448 [2024-11-02 14:51:55.213374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.448 [2024-11-02 14:51:55.213401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.448 qpair failed and we were unable to recover it. 00:36:03.448 [2024-11-02 14:51:55.213557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.448 [2024-11-02 14:51:55.213583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.448 qpair failed and we were unable to recover it. 00:36:03.448 [2024-11-02 14:51:55.213733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.448 [2024-11-02 14:51:55.213759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.448 qpair failed and we were unable to recover it. 00:36:03.448 [2024-11-02 14:51:55.213906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.448 [2024-11-02 14:51:55.213933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.448 qpair failed and we were unable to recover it. 00:36:03.448 [2024-11-02 14:51:55.214085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.448 [2024-11-02 14:51:55.214111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.448 qpair failed and we were unable to recover it. 00:36:03.448 [2024-11-02 14:51:55.214272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.448 [2024-11-02 14:51:55.214306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.448 qpair failed and we were unable to recover it. 00:36:03.448 [2024-11-02 14:51:55.214437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.448 [2024-11-02 14:51:55.214463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.448 qpair failed and we were unable to recover it. 00:36:03.448 [2024-11-02 14:51:55.214579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.448 [2024-11-02 14:51:55.214606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.448 qpair failed and we were unable to recover it. 
00:36:03.448 [2024-11-02 14:51:55.214753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.448 [2024-11-02 14:51:55.214780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.448 qpair failed and we were unable to recover it. 00:36:03.448 [2024-11-02 14:51:55.214926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.448 [2024-11-02 14:51:55.214954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.448 qpair failed and we were unable to recover it. 00:36:03.448 [2024-11-02 14:51:55.215103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.448 [2024-11-02 14:51:55.215132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.448 qpair failed and we were unable to recover it. 00:36:03.448 [2024-11-02 14:51:55.215262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.448 [2024-11-02 14:51:55.215290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.448 qpair failed and we were unable to recover it. 00:36:03.448 [2024-11-02 14:51:55.215428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.448 [2024-11-02 14:51:55.215454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.448 qpair failed and we were unable to recover it. 00:36:03.448 [2024-11-02 14:51:55.215606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.448 [2024-11-02 14:51:55.215632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.448 qpair failed and we were unable to recover it. 00:36:03.448 [2024-11-02 14:51:55.215754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.448 [2024-11-02 14:51:55.215780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.448 qpair failed and we were unable to recover it. 00:36:03.448 [2024-11-02 14:51:55.215905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.448 [2024-11-02 14:51:55.215932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.448 qpair failed and we were unable to recover it. 00:36:03.448 [2024-11-02 14:51:55.216112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.448 [2024-11-02 14:51:55.216138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.448 qpair failed and we were unable to recover it. 00:36:03.448 [2024-11-02 14:51:55.216269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.448 [2024-11-02 14:51:55.216296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.448 qpair failed and we were unable to recover it. 
00:36:03.448 [2024-11-02 14:51:55.216466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.448 [2024-11-02 14:51:55.216492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420
00:36:03.448 qpair failed and we were unable to recover it.
00:36:03.454 (the three messages above repeat for every subsequent connect attempt from 14:51:55.216623 through 14:51:55.251825, always for tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420, always with errno = 111, and every attempt ends with "qpair failed and we were unable to recover it.")
00:36:03.454 [2024-11-02 14:51:55.251976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.454 [2024-11-02 14:51:55.252002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.454 qpair failed and we were unable to recover it. 00:36:03.454 [2024-11-02 14:51:55.252131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.454 [2024-11-02 14:51:55.252156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.454 qpair failed and we were unable to recover it. 00:36:03.454 [2024-11-02 14:51:55.252318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.454 [2024-11-02 14:51:55.252345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.454 qpair failed and we were unable to recover it. 00:36:03.454 [2024-11-02 14:51:55.252488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.454 [2024-11-02 14:51:55.252515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.454 qpair failed and we were unable to recover it. 00:36:03.454 [2024-11-02 14:51:55.252661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.454 [2024-11-02 14:51:55.252702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.454 qpair failed and we were unable to recover it. 00:36:03.454 [2024-11-02 14:51:55.252890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.454 [2024-11-02 14:51:55.252918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.454 qpair failed and we were unable to recover it. 00:36:03.454 [2024-11-02 14:51:55.253051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.454 [2024-11-02 14:51:55.253078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.454 qpair failed and we were unable to recover it. 00:36:03.454 [2024-11-02 14:51:55.253225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.454 [2024-11-02 14:51:55.253252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.454 qpair failed and we were unable to recover it. 00:36:03.454 [2024-11-02 14:51:55.253386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.454 [2024-11-02 14:51:55.253412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.454 qpair failed and we were unable to recover it. 00:36:03.454 [2024-11-02 14:51:55.253571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.454 [2024-11-02 14:51:55.253597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.454 qpair failed and we were unable to recover it. 
00:36:03.454 [2024-11-02 14:51:55.253748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.454 [2024-11-02 14:51:55.253774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.454 qpair failed and we were unable to recover it. 00:36:03.454 [2024-11-02 14:51:55.253923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.454 [2024-11-02 14:51:55.253951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.454 qpair failed and we were unable to recover it. 00:36:03.454 [2024-11-02 14:51:55.254097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.454 [2024-11-02 14:51:55.254124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.454 qpair failed and we were unable to recover it. 00:36:03.454 [2024-11-02 14:51:55.254272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.454 [2024-11-02 14:51:55.254300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.454 qpair failed and we were unable to recover it. 00:36:03.454 [2024-11-02 14:51:55.254424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.454 [2024-11-02 14:51:55.254450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.454 qpair failed and we were unable to recover it. 00:36:03.454 [2024-11-02 14:51:55.254620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.454 [2024-11-02 14:51:55.254647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.454 qpair failed and we were unable to recover it. 00:36:03.454 [2024-11-02 14:51:55.254799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.454 [2024-11-02 14:51:55.254826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.454 qpair failed and we were unable to recover it. 00:36:03.454 [2024-11-02 14:51:55.254980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.454 [2024-11-02 14:51:55.255012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.454 qpair failed and we were unable to recover it. 00:36:03.454 [2024-11-02 14:51:55.255160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.454 [2024-11-02 14:51:55.255186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.454 qpair failed and we were unable to recover it. 00:36:03.454 [2024-11-02 14:51:55.255339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.454 [2024-11-02 14:51:55.255367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.454 qpair failed and we were unable to recover it. 
00:36:03.454 [2024-11-02 14:51:55.255520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.454 [2024-11-02 14:51:55.255547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.454 qpair failed and we were unable to recover it. 00:36:03.454 [2024-11-02 14:51:55.255701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.454 [2024-11-02 14:51:55.255728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.454 qpair failed and we were unable to recover it. 00:36:03.454 [2024-11-02 14:51:55.255846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.454 [2024-11-02 14:51:55.255874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.454 qpair failed and we were unable to recover it. 00:36:03.454 [2024-11-02 14:51:55.256025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.454 [2024-11-02 14:51:55.256051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.454 qpair failed and we were unable to recover it. 00:36:03.455 [2024-11-02 14:51:55.256176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.455 [2024-11-02 14:51:55.256202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.455 qpair failed and we were unable to recover it. 00:36:03.455 [2024-11-02 14:51:55.256325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.455 [2024-11-02 14:51:55.256352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.455 qpair failed and we were unable to recover it. 00:36:03.455 [2024-11-02 14:51:55.256504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.455 [2024-11-02 14:51:55.256531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.455 qpair failed and we were unable to recover it. 00:36:03.455 [2024-11-02 14:51:55.256677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.455 [2024-11-02 14:51:55.256705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.455 qpair failed and we were unable to recover it. 00:36:03.455 [2024-11-02 14:51:55.256857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.455 [2024-11-02 14:51:55.256883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.455 qpair failed and we were unable to recover it. 00:36:03.455 [2024-11-02 14:51:55.257057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.455 [2024-11-02 14:51:55.257083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.455 qpair failed and we were unable to recover it. 
00:36:03.455 [2024-11-02 14:51:55.257206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.455 [2024-11-02 14:51:55.257234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.455 qpair failed and we were unable to recover it. 00:36:03.455 [2024-11-02 14:51:55.257400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.455 [2024-11-02 14:51:55.257428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.455 qpair failed and we were unable to recover it. 00:36:03.455 [2024-11-02 14:51:55.257554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.455 [2024-11-02 14:51:55.257580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.455 qpair failed and we were unable to recover it. 00:36:03.455 [2024-11-02 14:51:55.257702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.455 [2024-11-02 14:51:55.257727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.455 qpair failed and we were unable to recover it. 00:36:03.455 [2024-11-02 14:51:55.257849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.455 [2024-11-02 14:51:55.257874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.455 qpair failed and we were unable to recover it. 00:36:03.455 [2024-11-02 14:51:55.258026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.455 [2024-11-02 14:51:55.258051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.455 qpair failed and we were unable to recover it. 00:36:03.455 [2024-11-02 14:51:55.258172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.455 [2024-11-02 14:51:55.258197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.455 qpair failed and we were unable to recover it. 00:36:03.455 [2024-11-02 14:51:55.258352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.455 [2024-11-02 14:51:55.258379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.455 qpair failed and we were unable to recover it. 00:36:03.455 [2024-11-02 14:51:55.258527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.455 [2024-11-02 14:51:55.258553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.455 qpair failed and we were unable to recover it. 00:36:03.455 [2024-11-02 14:51:55.258670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.455 [2024-11-02 14:51:55.258697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.455 qpair failed and we were unable to recover it. 
00:36:03.455 [2024-11-02 14:51:55.258848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.455 [2024-11-02 14:51:55.258875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.455 qpair failed and we were unable to recover it. 00:36:03.455 [2024-11-02 14:51:55.259024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.455 [2024-11-02 14:51:55.259050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.455 qpair failed and we were unable to recover it. 00:36:03.455 [2024-11-02 14:51:55.259173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.455 [2024-11-02 14:51:55.259200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.455 qpair failed and we were unable to recover it. 00:36:03.455 [2024-11-02 14:51:55.259348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.455 [2024-11-02 14:51:55.259376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.455 qpair failed and we were unable to recover it. 00:36:03.455 [2024-11-02 14:51:55.259540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.455 [2024-11-02 14:51:55.259580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.455 qpair failed and we were unable to recover it. 00:36:03.455 [2024-11-02 14:51:55.259738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.455 [2024-11-02 14:51:55.259766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.455 qpair failed and we were unable to recover it. 00:36:03.455 [2024-11-02 14:51:55.259923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.455 [2024-11-02 14:51:55.259950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.455 qpair failed and we were unable to recover it. 00:36:03.455 [2024-11-02 14:51:55.260068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.455 [2024-11-02 14:51:55.260096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.455 qpair failed and we were unable to recover it. 00:36:03.455 [2024-11-02 14:51:55.260218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.455 [2024-11-02 14:51:55.260245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.455 qpair failed and we were unable to recover it. 00:36:03.455 [2024-11-02 14:51:55.260401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.455 [2024-11-02 14:51:55.260428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.455 qpair failed and we were unable to recover it. 
00:36:03.455 [2024-11-02 14:51:55.260582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.455 [2024-11-02 14:51:55.260609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.455 qpair failed and we were unable to recover it. 00:36:03.455 [2024-11-02 14:51:55.260735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.455 [2024-11-02 14:51:55.260762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.455 qpair failed and we were unable to recover it. 00:36:03.455 [2024-11-02 14:51:55.260908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.455 [2024-11-02 14:51:55.260936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.455 qpair failed and we were unable to recover it. 00:36:03.455 [2024-11-02 14:51:55.261059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.455 [2024-11-02 14:51:55.261085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.455 qpair failed and we were unable to recover it. 00:36:03.455 [2024-11-02 14:51:55.261205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.455 [2024-11-02 14:51:55.261232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.455 qpair failed and we were unable to recover it. 00:36:03.455 [2024-11-02 14:51:55.261360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.455 [2024-11-02 14:51:55.261387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.455 qpair failed and we were unable to recover it. 00:36:03.455 [2024-11-02 14:51:55.261509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.455 [2024-11-02 14:51:55.261536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.455 qpair failed and we were unable to recover it. 00:36:03.455 [2024-11-02 14:51:55.261656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.455 [2024-11-02 14:51:55.261688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.455 qpair failed and we were unable to recover it. 00:36:03.455 [2024-11-02 14:51:55.261812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.455 [2024-11-02 14:51:55.261839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.455 qpair failed and we were unable to recover it. 00:36:03.455 [2024-11-02 14:51:55.261955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.455 [2024-11-02 14:51:55.261981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.455 qpair failed and we were unable to recover it. 
00:36:03.455 [2024-11-02 14:51:55.262107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.455 [2024-11-02 14:51:55.262133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.455 qpair failed and we were unable to recover it. 00:36:03.455 [2024-11-02 14:51:55.262292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.455 [2024-11-02 14:51:55.262320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.455 qpair failed and we were unable to recover it. 00:36:03.455 [2024-11-02 14:51:55.262436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.456 [2024-11-02 14:51:55.262461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.456 qpair failed and we were unable to recover it. 00:36:03.456 [2024-11-02 14:51:55.262590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.456 [2024-11-02 14:51:55.262618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.456 qpair failed and we were unable to recover it. 00:36:03.456 [2024-11-02 14:51:55.262773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.456 [2024-11-02 14:51:55.262802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.456 qpair failed and we were unable to recover it. 00:36:03.456 [2024-11-02 14:51:55.262918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.456 [2024-11-02 14:51:55.262945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.456 qpair failed and we were unable to recover it. 00:36:03.456 [2024-11-02 14:51:55.263084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.456 [2024-11-02 14:51:55.263110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.456 qpair failed and we were unable to recover it. 00:36:03.456 [2024-11-02 14:51:55.263226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.456 [2024-11-02 14:51:55.263251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.456 qpair failed and we were unable to recover it. 00:36:03.456 [2024-11-02 14:51:55.263408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.456 [2024-11-02 14:51:55.263435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.456 qpair failed and we were unable to recover it. 00:36:03.456 [2024-11-02 14:51:55.263588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.456 [2024-11-02 14:51:55.263613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.456 qpair failed and we were unable to recover it. 
00:36:03.456 [2024-11-02 14:51:55.263733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.456 [2024-11-02 14:51:55.263760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.456 qpair failed and we were unable to recover it. 00:36:03.456 [2024-11-02 14:51:55.263894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.456 [2024-11-02 14:51:55.263920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.456 qpair failed and we were unable to recover it. 00:36:03.456 [2024-11-02 14:51:55.264078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.456 [2024-11-02 14:51:55.264106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.456 qpair failed and we were unable to recover it. 00:36:03.456 [2024-11-02 14:51:55.264274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.456 [2024-11-02 14:51:55.264302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.456 qpair failed and we were unable to recover it. 00:36:03.456 [2024-11-02 14:51:55.264456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.456 [2024-11-02 14:51:55.264483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.456 qpair failed and we were unable to recover it. 00:36:03.456 [2024-11-02 14:51:55.264606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.456 [2024-11-02 14:51:55.264632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.456 qpair failed and we were unable to recover it. 00:36:03.456 [2024-11-02 14:51:55.264809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.456 [2024-11-02 14:51:55.264835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.456 qpair failed and we were unable to recover it. 00:36:03.456 [2024-11-02 14:51:55.264989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.456 [2024-11-02 14:51:55.265015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.456 qpair failed and we were unable to recover it. 00:36:03.456 [2024-11-02 14:51:55.265188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.456 [2024-11-02 14:51:55.265215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.456 qpair failed and we were unable to recover it. 00:36:03.456 [2024-11-02 14:51:55.265350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.456 [2024-11-02 14:51:55.265378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.456 qpair failed and we were unable to recover it. 
00:36:03.456 [2024-11-02 14:51:55.265510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.456 [2024-11-02 14:51:55.265536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.456 qpair failed and we were unable to recover it. 00:36:03.456 [2024-11-02 14:51:55.265679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.456 [2024-11-02 14:51:55.265707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.456 qpair failed and we were unable to recover it. 00:36:03.456 [2024-11-02 14:51:55.265831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.456 [2024-11-02 14:51:55.265858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.456 qpair failed and we were unable to recover it. 00:36:03.456 [2024-11-02 14:51:55.266009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.456 [2024-11-02 14:51:55.266036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.456 qpair failed and we were unable to recover it. 00:36:03.456 [2024-11-02 14:51:55.266190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.456 [2024-11-02 14:51:55.266219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.456 qpair failed and we were unable to recover it. 00:36:03.456 [2024-11-02 14:51:55.266377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.456 [2024-11-02 14:51:55.266405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.456 qpair failed and we were unable to recover it. 00:36:03.456 [2024-11-02 14:51:55.266528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.456 [2024-11-02 14:51:55.266555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.456 qpair failed and we were unable to recover it. 00:36:03.456 [2024-11-02 14:51:55.266679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.456 [2024-11-02 14:51:55.266706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.456 qpair failed and we were unable to recover it. 00:36:03.456 [2024-11-02 14:51:55.266860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.456 [2024-11-02 14:51:55.266886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.456 qpair failed and we were unable to recover it. 00:36:03.456 [2024-11-02 14:51:55.267064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.456 [2024-11-02 14:51:55.267090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.456 qpair failed and we were unable to recover it. 
00:36:03.456 [2024-11-02 14:51:55.267229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.456 [2024-11-02 14:51:55.267262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.456 qpair failed and we were unable to recover it. 00:36:03.456 [2024-11-02 14:51:55.267415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.456 [2024-11-02 14:51:55.267441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.456 qpair failed and we were unable to recover it. 00:36:03.456 [2024-11-02 14:51:55.267592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.456 [2024-11-02 14:51:55.267618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.456 qpair failed and we were unable to recover it. 00:36:03.456 [2024-11-02 14:51:55.267769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.456 [2024-11-02 14:51:55.267796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.456 qpair failed and we were unable to recover it. 00:36:03.456 [2024-11-02 14:51:55.267946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.456 [2024-11-02 14:51:55.267972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.456 qpair failed and we were unable to recover it. 00:36:03.456 [2024-11-02 14:51:55.268123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.456 [2024-11-02 14:51:55.268149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.456 qpair failed and we were unable to recover it. 00:36:03.456 [2024-11-02 14:51:55.268273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.456 [2024-11-02 14:51:55.268302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.456 qpair failed and we were unable to recover it. 00:36:03.456 [2024-11-02 14:51:55.268477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.456 [2024-11-02 14:51:55.268508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.456 qpair failed and we were unable to recover it. 00:36:03.456 [2024-11-02 14:51:55.268684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.456 [2024-11-02 14:51:55.268709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.456 qpair failed and we were unable to recover it. 00:36:03.456 [2024-11-02 14:51:55.268833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.456 [2024-11-02 14:51:55.268860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.456 qpair failed and we were unable to recover it. 
00:36:03.456 [2024-11-02 14:51:55.268989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.457 [2024-11-02 14:51:55.269015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.457 qpair failed and we were unable to recover it. 00:36:03.457 [2024-11-02 14:51:55.269193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.457 [2024-11-02 14:51:55.269219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.457 qpair failed and we were unable to recover it. 00:36:03.457 [2024-11-02 14:51:55.269359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.457 [2024-11-02 14:51:55.269387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.457 qpair failed and we were unable to recover it. 00:36:03.457 [2024-11-02 14:51:55.269533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.457 [2024-11-02 14:51:55.269559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.457 qpair failed and we were unable to recover it. 00:36:03.457 [2024-11-02 14:51:55.269710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.457 [2024-11-02 14:51:55.269736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.457 qpair failed and we were unable to recover it. 00:36:03.457 [2024-11-02 14:51:55.269893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.457 [2024-11-02 14:51:55.269919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.457 qpair failed and we were unable to recover it. 00:36:03.457 [2024-11-02 14:51:55.270067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.457 [2024-11-02 14:51:55.270093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.457 qpair failed and we were unable to recover it. 00:36:03.457 [2024-11-02 14:51:55.270224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.457 [2024-11-02 14:51:55.270251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.457 qpair failed and we were unable to recover it. 00:36:03.457 [2024-11-02 14:51:55.270409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.457 [2024-11-02 14:51:55.270436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.457 qpair failed and we were unable to recover it. 00:36:03.457 [2024-11-02 14:51:55.270574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.457 [2024-11-02 14:51:55.270600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.457 qpair failed and we were unable to recover it. 
00:36:03.457 [2024-11-02 14:51:55.270773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.457 [2024-11-02 14:51:55.270799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.457 qpair failed and we were unable to recover it. 00:36:03.457 [2024-11-02 14:51:55.270958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.457 [2024-11-02 14:51:55.270987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.457 qpair failed and we were unable to recover it. 00:36:03.457 [2024-11-02 14:51:55.271112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.457 [2024-11-02 14:51:55.271139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.457 qpair failed and we were unable to recover it. 00:36:03.457 [2024-11-02 14:51:55.271282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.457 [2024-11-02 14:51:55.271309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.457 qpair failed and we were unable to recover it. 00:36:03.457 [2024-11-02 14:51:55.271454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.457 [2024-11-02 14:51:55.271480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.457 qpair failed and we were unable to recover it. 00:36:03.457 [2024-11-02 14:51:55.271637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.457 [2024-11-02 14:51:55.271664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.457 qpair failed and we were unable to recover it. 00:36:03.457 [2024-11-02 14:51:55.271782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.457 [2024-11-02 14:51:55.271808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.457 qpair failed and we were unable to recover it. 00:36:03.457 [2024-11-02 14:51:55.271953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.457 [2024-11-02 14:51:55.271978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.457 qpair failed and we were unable to recover it. 00:36:03.457 [2024-11-02 14:51:55.272150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.457 [2024-11-02 14:51:55.272177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.457 qpair failed and we were unable to recover it. 00:36:03.457 [2024-11-02 14:51:55.272330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.457 [2024-11-02 14:51:55.272358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.457 qpair failed and we were unable to recover it. 
00:36:03.457 [2024-11-02 14:51:55.272490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.457 [2024-11-02 14:51:55.272516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.457 qpair failed and we were unable to recover it. 00:36:03.457 [2024-11-02 14:51:55.272701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.457 [2024-11-02 14:51:55.272728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.457 qpair failed and we were unable to recover it. 00:36:03.457 [2024-11-02 14:51:55.272851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.457 [2024-11-02 14:51:55.272877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.457 qpair failed and we were unable to recover it. 00:36:03.457 [2024-11-02 14:51:55.273029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.457 [2024-11-02 14:51:55.273055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.457 qpair failed and we were unable to recover it. 00:36:03.457 [2024-11-02 14:51:55.273211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.457 [2024-11-02 14:51:55.273238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.457 qpair failed and we were unable to recover it. 00:36:03.457 [2024-11-02 14:51:55.273416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.457 [2024-11-02 14:51:55.273442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.457 qpair failed and we were unable to recover it. 00:36:03.457 [2024-11-02 14:51:55.273610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.457 [2024-11-02 14:51:55.273636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.457 qpair failed and we were unable to recover it. 00:36:03.457 [2024-11-02 14:51:55.273765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.457 [2024-11-02 14:51:55.273791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.457 qpair failed and we were unable to recover it. 00:36:03.457 [2024-11-02 14:51:55.273942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.457 [2024-11-02 14:51:55.273970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.457 qpair failed and we were unable to recover it. 00:36:03.457 [2024-11-02 14:51:55.274094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.457 [2024-11-02 14:51:55.274119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.457 qpair failed and we were unable to recover it. 
00:36:03.457 [2024-11-02 14:51:55.274244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.457 [2024-11-02 14:51:55.274276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.457 qpair failed and we were unable to recover it. 00:36:03.457 [2024-11-02 14:51:55.274397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.457 [2024-11-02 14:51:55.274424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.457 qpair failed and we were unable to recover it. 00:36:03.457 [2024-11-02 14:51:55.274598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.457 [2024-11-02 14:51:55.274624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.457 qpair failed and we were unable to recover it. 00:36:03.457 [2024-11-02 14:51:55.274778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.457 [2024-11-02 14:51:55.274805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.457 qpair failed and we were unable to recover it. 00:36:03.457 [2024-11-02 14:51:55.274959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.457 [2024-11-02 14:51:55.274984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.457 qpair failed and we were unable to recover it. 00:36:03.457 [2024-11-02 14:51:55.275115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.457 [2024-11-02 14:51:55.275141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.457 qpair failed and we were unable to recover it. 00:36:03.457 [2024-11-02 14:51:55.275266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.457 [2024-11-02 14:51:55.275294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.457 qpair failed and we were unable to recover it. 00:36:03.457 [2024-11-02 14:51:55.275457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.457 [2024-11-02 14:51:55.275488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.457 qpair failed and we were unable to recover it. 00:36:03.457 [2024-11-02 14:51:55.275664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.457 [2024-11-02 14:51:55.275698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.457 qpair failed and we were unable to recover it. 00:36:03.457 [2024-11-02 14:51:55.275845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.458 [2024-11-02 14:51:55.275872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.458 qpair failed and we were unable to recover it. 
00:36:03.458 [2024-11-02 14:51:55.276020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.458 [2024-11-02 14:51:55.276047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.458 qpair failed and we were unable to recover it. 00:36:03.458 [2024-11-02 14:51:55.276200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.458 [2024-11-02 14:51:55.276225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.458 qpair failed and we were unable to recover it. 00:36:03.458 [2024-11-02 14:51:55.276354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.458 [2024-11-02 14:51:55.276380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.458 qpair failed and we were unable to recover it. 00:36:03.458 [2024-11-02 14:51:55.276509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.458 [2024-11-02 14:51:55.276536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.458 qpair failed and we were unable to recover it. 00:36:03.458 [2024-11-02 14:51:55.276661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.458 [2024-11-02 14:51:55.276687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.458 qpair failed and we were unable to recover it. 00:36:03.458 [2024-11-02 14:51:55.276840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.458 [2024-11-02 14:51:55.276866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.458 qpair failed and we were unable to recover it. 00:36:03.458 [2024-11-02 14:51:55.276987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.458 [2024-11-02 14:51:55.277013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.458 qpair failed and we were unable to recover it. 00:36:03.458 [2024-11-02 14:51:55.277189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.458 [2024-11-02 14:51:55.277214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.458 qpair failed and we were unable to recover it. 00:36:03.458 [2024-11-02 14:51:55.277353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.458 [2024-11-02 14:51:55.277379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.458 qpair failed and we were unable to recover it. 00:36:03.458 [2024-11-02 14:51:55.277505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.458 [2024-11-02 14:51:55.277531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.458 qpair failed and we were unable to recover it. 
00:36:03.458 [2024-11-02 14:51:55.277680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.458 [2024-11-02 14:51:55.277706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.458 qpair failed and we were unable to recover it. 00:36:03.458 [2024-11-02 14:51:55.277860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.458 [2024-11-02 14:51:55.277887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.458 qpair failed and we were unable to recover it. 00:36:03.458 [2024-11-02 14:51:55.278035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.458 [2024-11-02 14:51:55.278075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.458 qpair failed and we were unable to recover it. 00:36:03.458 [2024-11-02 14:51:55.278270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.458 [2024-11-02 14:51:55.278298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.458 qpair failed and we were unable to recover it. 00:36:03.458 [2024-11-02 14:51:55.278476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.458 [2024-11-02 14:51:55.278503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.458 qpair failed and we were unable to recover it. 00:36:03.458 [2024-11-02 14:51:55.278677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.458 [2024-11-02 14:51:55.278704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.458 qpair failed and we were unable to recover it. 00:36:03.458 [2024-11-02 14:51:55.278824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.458 [2024-11-02 14:51:55.278850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.458 qpair failed and we were unable to recover it. 00:36:03.458 [2024-11-02 14:51:55.278978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.458 [2024-11-02 14:51:55.279005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.458 qpair failed and we were unable to recover it. 00:36:03.458 [2024-11-02 14:51:55.279152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.458 [2024-11-02 14:51:55.279179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.458 qpair failed and we were unable to recover it. 00:36:03.458 [2024-11-02 14:51:55.279339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.458 [2024-11-02 14:51:55.279366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.458 qpair failed and we were unable to recover it. 
00:36:03.458 [2024-11-02 14:51:55.279511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.458 [2024-11-02 14:51:55.279538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.458 qpair failed and we were unable to recover it. 00:36:03.458 [2024-11-02 14:51:55.279668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.458 [2024-11-02 14:51:55.279694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.458 qpair failed and we were unable to recover it. 00:36:03.458 [2024-11-02 14:51:55.279853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.458 [2024-11-02 14:51:55.279880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.458 qpair failed and we were unable to recover it. 00:36:03.458 [2024-11-02 14:51:55.280001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.458 [2024-11-02 14:51:55.280028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.458 qpair failed and we were unable to recover it. 00:36:03.458 [2024-11-02 14:51:55.280199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.458 [2024-11-02 14:51:55.280227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.458 qpair failed and we were unable to recover it. 00:36:03.458 [2024-11-02 14:51:55.280350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.458 [2024-11-02 14:51:55.280377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.458 qpair failed and we were unable to recover it. 00:36:03.458 [2024-11-02 14:51:55.280527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.458 [2024-11-02 14:51:55.280561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.458 qpair failed and we were unable to recover it. 00:36:03.458 [2024-11-02 14:51:55.280684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.458 [2024-11-02 14:51:55.280710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.458 qpair failed and we were unable to recover it. 00:36:03.458 [2024-11-02 14:51:55.280834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.458 [2024-11-02 14:51:55.280861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.458 qpair failed and we were unable to recover it. 00:36:03.458 [2024-11-02 14:51:55.280981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.458 [2024-11-02 14:51:55.281007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.458 qpair failed and we were unable to recover it. 
00:36:03.458 [2024-11-02 14:51:55.281191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.458 [2024-11-02 14:51:55.281218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.458 qpair failed and we were unable to recover it. 00:36:03.458 [2024-11-02 14:51:55.281371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.458 [2024-11-02 14:51:55.281397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.458 qpair failed and we were unable to recover it. 00:36:03.458 [2024-11-02 14:51:55.281523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.458 [2024-11-02 14:51:55.281551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.458 qpair failed and we were unable to recover it. 00:36:03.458 [2024-11-02 14:51:55.281667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.458 [2024-11-02 14:51:55.281693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.458 qpair failed and we were unable to recover it. 00:36:03.458 [2024-11-02 14:51:55.281839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.458 [2024-11-02 14:51:55.281865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.458 qpair failed and we were unable to recover it. 00:36:03.458 [2024-11-02 14:51:55.282014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.458 [2024-11-02 14:51:55.282041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.458 qpair failed and we were unable to recover it. 00:36:03.458 [2024-11-02 14:51:55.282167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.458 [2024-11-02 14:51:55.282202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.458 qpair failed and we were unable to recover it. 00:36:03.458 [2024-11-02 14:51:55.282350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.458 [2024-11-02 14:51:55.282381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.459 qpair failed and we were unable to recover it. 00:36:03.459 [2024-11-02 14:51:55.282510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.459 [2024-11-02 14:51:55.282537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.459 qpair failed and we were unable to recover it. 00:36:03.459 [2024-11-02 14:51:55.282682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.459 [2024-11-02 14:51:55.282709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.459 qpair failed and we were unable to recover it. 
00:36:03.459 [2024-11-02 14:51:55.282862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.459 [2024-11-02 14:51:55.282888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.459 qpair failed and we were unable to recover it. 00:36:03.459 [2024-11-02 14:51:55.283034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.459 [2024-11-02 14:51:55.283060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.459 qpair failed and we were unable to recover it. 00:36:03.459 [2024-11-02 14:51:55.283212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.459 [2024-11-02 14:51:55.283238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.459 qpair failed and we were unable to recover it. 00:36:03.459 [2024-11-02 14:51:55.283390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.459 [2024-11-02 14:51:55.283416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.459 qpair failed and we were unable to recover it. 00:36:03.459 [2024-11-02 14:51:55.283540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.459 [2024-11-02 14:51:55.283567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.459 qpair failed and we were unable to recover it. 00:36:03.459 [2024-11-02 14:51:55.283721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.459 [2024-11-02 14:51:55.283748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.459 qpair failed and we were unable to recover it. 00:36:03.459 [2024-11-02 14:51:55.283927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.459 [2024-11-02 14:51:55.283954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.459 qpair failed and we were unable to recover it. 00:36:03.459 [2024-11-02 14:51:55.284105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.459 [2024-11-02 14:51:55.284130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.459 qpair failed and we were unable to recover it. 00:36:03.459 [2024-11-02 14:51:55.284274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.459 [2024-11-02 14:51:55.284301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.459 qpair failed and we were unable to recover it. 00:36:03.459 [2024-11-02 14:51:55.284423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.459 [2024-11-02 14:51:55.284450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.459 qpair failed and we were unable to recover it. 
00:36:03.459 [2024-11-02 14:51:55.284602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.459 [2024-11-02 14:51:55.284628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.459 qpair failed and we were unable to recover it. 00:36:03.459 [2024-11-02 14:51:55.284783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.459 [2024-11-02 14:51:55.284810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.459 qpair failed and we were unable to recover it. 00:36:03.459 [2024-11-02 14:51:55.284955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.459 [2024-11-02 14:51:55.284982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.459 qpair failed and we were unable to recover it. 00:36:03.459 [2024-11-02 14:51:55.285127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.459 [2024-11-02 14:51:55.285153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.459 qpair failed and we were unable to recover it. 00:36:03.459 [2024-11-02 14:51:55.285283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.459 [2024-11-02 14:51:55.285310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.459 qpair failed and we were unable to recover it. 00:36:03.459 [2024-11-02 14:51:55.285463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.459 [2024-11-02 14:51:55.285489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.459 qpair failed and we were unable to recover it. 00:36:03.459 [2024-11-02 14:51:55.285639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.459 [2024-11-02 14:51:55.285667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.459 qpair failed and we were unable to recover it. 00:36:03.459 [2024-11-02 14:51:55.285793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.459 [2024-11-02 14:51:55.285820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.459 qpair failed and we were unable to recover it. 00:36:03.459 [2024-11-02 14:51:55.285970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.459 [2024-11-02 14:51:55.285997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.459 qpair failed and we were unable to recover it. 00:36:03.459 [2024-11-02 14:51:55.286151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.459 [2024-11-02 14:51:55.286177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.459 qpair failed and we were unable to recover it. 
00:36:03.459 [2024-11-02 14:51:55.286336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.459 [2024-11-02 14:51:55.286365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.459 qpair failed and we were unable to recover it. 00:36:03.459 [2024-11-02 14:51:55.286537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.459 [2024-11-02 14:51:55.286563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.459 qpair failed and we were unable to recover it. 00:36:03.459 [2024-11-02 14:51:55.286742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.459 [2024-11-02 14:51:55.286768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.459 qpair failed and we were unable to recover it. 00:36:03.459 [2024-11-02 14:51:55.286895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.459 [2024-11-02 14:51:55.286922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.459 qpair failed and we were unable to recover it. 00:36:03.459 [2024-11-02 14:51:55.287046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.459 [2024-11-02 14:51:55.287074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.459 qpair failed and we were unable to recover it. 00:36:03.459 [2024-11-02 14:51:55.287228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.459 [2024-11-02 14:51:55.287262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.459 qpair failed and we were unable to recover it. 00:36:03.459 [2024-11-02 14:51:55.287389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.459 [2024-11-02 14:51:55.287416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.459 qpair failed and we were unable to recover it. 00:36:03.459 [2024-11-02 14:51:55.287565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.459 [2024-11-02 14:51:55.287592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.459 qpair failed and we were unable to recover it. 00:36:03.459 [2024-11-02 14:51:55.287756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.459 [2024-11-02 14:51:55.287782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.459 qpair failed and we were unable to recover it. 00:36:03.459 [2024-11-02 14:51:55.287934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.459 [2024-11-02 14:51:55.287962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.459 qpair failed and we were unable to recover it. 
00:36:03.459 [2024-11-02 14:51:55.288111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.459 [2024-11-02 14:51:55.288138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.459 qpair failed and we were unable to recover it. 00:36:03.459 [2024-11-02 14:51:55.288266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.460 [2024-11-02 14:51:55.288292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.460 qpair failed and we were unable to recover it. 00:36:03.460 [2024-11-02 14:51:55.288450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.460 [2024-11-02 14:51:55.288480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.460 qpair failed and we were unable to recover it. 00:36:03.460 [2024-11-02 14:51:55.288630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.460 [2024-11-02 14:51:55.288656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.460 qpair failed and we were unable to recover it. 00:36:03.460 [2024-11-02 14:51:55.288804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.460 [2024-11-02 14:51:55.288831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.460 qpair failed and we were unable to recover it. 00:36:03.460 [2024-11-02 14:51:55.288952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.460 [2024-11-02 14:51:55.288979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.460 qpair failed and we were unable to recover it. 00:36:03.460 [2024-11-02 14:51:55.289127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.460 [2024-11-02 14:51:55.289153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.460 qpair failed and we were unable to recover it. 00:36:03.460 [2024-11-02 14:51:55.289333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.460 [2024-11-02 14:51:55.289360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.460 qpair failed and we were unable to recover it. 00:36:03.460 [2024-11-02 14:51:55.289494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.460 [2024-11-02 14:51:55.289520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.460 qpair failed and we were unable to recover it. 00:36:03.460 [2024-11-02 14:51:55.289669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.460 [2024-11-02 14:51:55.289695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.460 qpair failed and we were unable to recover it. 
00:36:03.460 [2024-11-02 14:51:55.289847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.460 [2024-11-02 14:51:55.289873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.460 qpair failed and we were unable to recover it. 00:36:03.460 [2024-11-02 14:51:55.289999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.460 [2024-11-02 14:51:55.290026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.460 qpair failed and we were unable to recover it. 00:36:03.460 [2024-11-02 14:51:55.290158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.460 [2024-11-02 14:51:55.290186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.460 qpair failed and we were unable to recover it. 00:36:03.460 [2024-11-02 14:51:55.290317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.460 [2024-11-02 14:51:55.290344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.460 qpair failed and we were unable to recover it. 00:36:03.460 [2024-11-02 14:51:55.290496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.460 [2024-11-02 14:51:55.290523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.460 qpair failed and we were unable to recover it. 00:36:03.460 [2024-11-02 14:51:55.290696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.460 [2024-11-02 14:51:55.290722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.460 qpair failed and we were unable to recover it. 00:36:03.460 [2024-11-02 14:51:55.290842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.460 [2024-11-02 14:51:55.290869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.460 qpair failed and we were unable to recover it. 00:36:03.460 [2024-11-02 14:51:55.290999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.460 [2024-11-02 14:51:55.291025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.460 qpair failed and we were unable to recover it. 00:36:03.460 [2024-11-02 14:51:55.291190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.460 [2024-11-02 14:51:55.291231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.460 qpair failed and we were unable to recover it. 00:36:03.460 [2024-11-02 14:51:55.291364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.460 [2024-11-02 14:51:55.291392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.460 qpair failed and we were unable to recover it. 
00:36:03.460 [2024-11-02 14:51:55.291571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.460 [2024-11-02 14:51:55.291598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.460 qpair failed and we were unable to recover it. 00:36:03.460 [2024-11-02 14:51:55.291760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.460 [2024-11-02 14:51:55.291786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.460 qpair failed and we were unable to recover it. 00:36:03.460 [2024-11-02 14:51:55.291906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.460 [2024-11-02 14:51:55.291933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.460 qpair failed and we were unable to recover it. 00:36:03.460 [2024-11-02 14:51:55.292086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.460 [2024-11-02 14:51:55.292112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.460 qpair failed and we were unable to recover it. 00:36:03.460 [2024-11-02 14:51:55.292252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.460 [2024-11-02 14:51:55.292286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.460 qpair failed and we were unable to recover it. 00:36:03.460 [2024-11-02 14:51:55.292434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.460 [2024-11-02 14:51:55.292461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.460 qpair failed and we were unable to recover it. 00:36:03.460 [2024-11-02 14:51:55.292614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.460 [2024-11-02 14:51:55.292641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.460 qpair failed and we were unable to recover it. 00:36:03.460 [2024-11-02 14:51:55.292794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.460 [2024-11-02 14:51:55.292822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.460 qpair failed and we were unable to recover it. 00:36:03.460 [2024-11-02 14:51:55.292948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.460 [2024-11-02 14:51:55.292975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.460 qpair failed and we were unable to recover it. 00:36:03.460 [2024-11-02 14:51:55.293128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.460 [2024-11-02 14:51:55.293156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.460 qpair failed and we were unable to recover it. 
00:36:03.460 [2024-11-02 14:51:55.293323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.460 [2024-11-02 14:51:55.293360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.460 qpair failed and we were unable to recover it. 00:36:03.460 [2024-11-02 14:51:55.293536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.460 [2024-11-02 14:51:55.293562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.460 qpair failed and we were unable to recover it. 00:36:03.460 [2024-11-02 14:51:55.293709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.460 [2024-11-02 14:51:55.293736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.460 qpair failed and we were unable to recover it. 00:36:03.460 [2024-11-02 14:51:55.293862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.460 [2024-11-02 14:51:55.293888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.460 qpair failed and we were unable to recover it. 00:36:03.460 [2024-11-02 14:51:55.294044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.460 [2024-11-02 14:51:55.294078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.460 qpair failed and we were unable to recover it. 00:36:03.460 [2024-11-02 14:51:55.294227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.460 [2024-11-02 14:51:55.294252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.460 qpair failed and we were unable to recover it. 00:36:03.460 [2024-11-02 14:51:55.294410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.460 [2024-11-02 14:51:55.294437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.460 qpair failed and we were unable to recover it. 00:36:03.460 [2024-11-02 14:51:55.294563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.460 [2024-11-02 14:51:55.294590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.460 qpair failed and we were unable to recover it. 00:36:03.460 [2024-11-02 14:51:55.294738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.460 [2024-11-02 14:51:55.294763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.460 qpair failed and we were unable to recover it. 00:36:03.460 [2024-11-02 14:51:55.294910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.461 [2024-11-02 14:51:55.294936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.461 qpair failed and we were unable to recover it. 
00:36:03.461 [2024-11-02 14:51:55.295060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.461 [2024-11-02 14:51:55.295086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.461 qpair failed and we were unable to recover it. 00:36:03.461 [2024-11-02 14:51:55.295237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.461 [2024-11-02 14:51:55.295267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.461 qpair failed and we were unable to recover it. 00:36:03.461 [2024-11-02 14:51:55.295443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.461 [2024-11-02 14:51:55.295469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.461 qpair failed and we were unable to recover it. 00:36:03.461 [2024-11-02 14:51:55.295581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.461 [2024-11-02 14:51:55.295607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.461 qpair failed and we were unable to recover it. 00:36:03.461 [2024-11-02 14:51:55.295754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.461 [2024-11-02 14:51:55.295780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.461 qpair failed and we were unable to recover it. 00:36:03.461 [2024-11-02 14:51:55.295898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.461 [2024-11-02 14:51:55.295924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.461 qpair failed and we were unable to recover it. 00:36:03.461 [2024-11-02 14:51:55.296068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.461 [2024-11-02 14:51:55.296094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.461 qpair failed and we were unable to recover it. 00:36:03.461 [2024-11-02 14:51:55.296243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.461 [2024-11-02 14:51:55.296282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.461 qpair failed and we were unable to recover it. 00:36:03.461 [2024-11-02 14:51:55.296416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.461 [2024-11-02 14:51:55.296443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.461 qpair failed and we were unable to recover it. 00:36:03.461 [2024-11-02 14:51:55.296599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.461 [2024-11-02 14:51:55.296627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.461 qpair failed and we were unable to recover it. 
00:36:03.461 [2024-11-02 14:51:55.296752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.461 [2024-11-02 14:51:55.296778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.461 qpair failed and we were unable to recover it. 00:36:03.461 [2024-11-02 14:51:55.296924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.461 [2024-11-02 14:51:55.296949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.461 qpair failed and we were unable to recover it. 00:36:03.461 [2024-11-02 14:51:55.297067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.461 [2024-11-02 14:51:55.297094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.461 qpair failed and we were unable to recover it. 00:36:03.461 [2024-11-02 14:51:55.297219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.461 [2024-11-02 14:51:55.297245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.461 qpair failed and we were unable to recover it. 00:36:03.461 [2024-11-02 14:51:55.297377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.461 [2024-11-02 14:51:55.297403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.461 qpair failed and we were unable to recover it. 00:36:03.461 [2024-11-02 14:51:55.297533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.461 [2024-11-02 14:51:55.297559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.461 qpair failed and we were unable to recover it. 00:36:03.461 [2024-11-02 14:51:55.297710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.461 [2024-11-02 14:51:55.297736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.461 qpair failed and we were unable to recover it. 00:36:03.461 [2024-11-02 14:51:55.297881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.461 [2024-11-02 14:51:55.297907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.461 qpair failed and we were unable to recover it. 00:36:03.461 [2024-11-02 14:51:55.298056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.461 [2024-11-02 14:51:55.298083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.461 qpair failed and we were unable to recover it. 00:36:03.461 [2024-11-02 14:51:55.298208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.461 [2024-11-02 14:51:55.298236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.461 qpair failed and we were unable to recover it. 
00:36:03.461 [2024-11-02 14:51:55.298371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.461 [2024-11-02 14:51:55.298398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.461 qpair failed and we were unable to recover it. 00:36:03.461 [2024-11-02 14:51:55.298559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.461 [2024-11-02 14:51:55.298585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.461 qpair failed and we were unable to recover it. 00:36:03.461 [2024-11-02 14:51:55.298763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.461 [2024-11-02 14:51:55.298789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.461 qpair failed and we were unable to recover it. 00:36:03.461 [2024-11-02 14:51:55.298908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.461 [2024-11-02 14:51:55.298936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.461 qpair failed and we were unable to recover it. 00:36:03.461 [2024-11-02 14:51:55.299056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.461 [2024-11-02 14:51:55.299083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.461 qpair failed and we were unable to recover it. 00:36:03.461 [2024-11-02 14:51:55.299230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.461 [2024-11-02 14:51:55.299262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.461 qpair failed and we were unable to recover it. 00:36:03.461 [2024-11-02 14:51:55.299413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.461 [2024-11-02 14:51:55.299439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.461 qpair failed and we were unable to recover it. 00:36:03.461 [2024-11-02 14:51:55.299564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.461 [2024-11-02 14:51:55.299590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.461 qpair failed and we were unable to recover it. 00:36:03.461 [2024-11-02 14:51:55.299717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.461 [2024-11-02 14:51:55.299743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.461 qpair failed and we were unable to recover it. 00:36:03.461 [2024-11-02 14:51:55.299865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.461 [2024-11-02 14:51:55.299891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.461 qpair failed and we were unable to recover it. 
00:36:03.461 [2024-11-02 14:51:55.300018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.461 [2024-11-02 14:51:55.300045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.461 qpair failed and we were unable to recover it. 00:36:03.461 [2024-11-02 14:51:55.300191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.461 [2024-11-02 14:51:55.300218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.461 qpair failed and we were unable to recover it. 00:36:03.461 [2024-11-02 14:51:55.300372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.461 [2024-11-02 14:51:55.300398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.461 qpair failed and we were unable to recover it. 00:36:03.461 [2024-11-02 14:51:55.300554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.461 [2024-11-02 14:51:55.300580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.461 qpair failed and we were unable to recover it. 00:36:03.461 [2024-11-02 14:51:55.300730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.461 [2024-11-02 14:51:55.300760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.461 qpair failed and we were unable to recover it. 00:36:03.461 [2024-11-02 14:51:55.300911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.461 [2024-11-02 14:51:55.300938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.461 qpair failed and we were unable to recover it. 00:36:03.461 [2024-11-02 14:51:55.301064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.461 [2024-11-02 14:51:55.301091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.461 qpair failed and we were unable to recover it. 00:36:03.461 [2024-11-02 14:51:55.301243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.461 [2024-11-02 14:51:55.301277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.461 qpair failed and we were unable to recover it. 00:36:03.462 [2024-11-02 14:51:55.301406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.462 [2024-11-02 14:51:55.301431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.462 qpair failed and we were unable to recover it. 00:36:03.462 [2024-11-02 14:51:55.301548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.462 [2024-11-02 14:51:55.301574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.462 qpair failed and we were unable to recover it. 
00:36:03.462 [2024-11-02 14:51:55.301695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.462 [2024-11-02 14:51:55.301723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.462 qpair failed and we were unable to recover it. 00:36:03.462 [2024-11-02 14:51:55.301851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.462 [2024-11-02 14:51:55.301877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.462 qpair failed and we were unable to recover it. 00:36:03.462 [2024-11-02 14:51:55.302021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.462 [2024-11-02 14:51:55.302047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.462 qpair failed and we were unable to recover it. 00:36:03.462 [2024-11-02 14:51:55.302176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.462 [2024-11-02 14:51:55.302202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.462 qpair failed and we were unable to recover it. 00:36:03.462 [2024-11-02 14:51:55.302355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.462 [2024-11-02 14:51:55.302382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.462 qpair failed and we were unable to recover it. 00:36:03.462 [2024-11-02 14:51:55.302504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.462 [2024-11-02 14:51:55.302532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.462 qpair failed and we were unable to recover it. 00:36:03.462 [2024-11-02 14:51:55.302680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.462 [2024-11-02 14:51:55.302706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.462 qpair failed and we were unable to recover it. 00:36:03.462 [2024-11-02 14:51:55.302827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.462 [2024-11-02 14:51:55.302854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.462 qpair failed and we were unable to recover it. 00:36:03.462 [2024-11-02 14:51:55.302997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.462 [2024-11-02 14:51:55.303024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.462 qpair failed and we were unable to recover it. 00:36:03.462 [2024-11-02 14:51:55.303142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.462 [2024-11-02 14:51:55.303169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.462 qpair failed and we were unable to recover it. 
00:36:03.462 [2024-11-02 14:51:55.303302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.462 [2024-11-02 14:51:55.303329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420
00:36:03.462 qpair failed and we were unable to recover it.
[... the same three-message sequence — posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats verbatim for every subsequent connection attempt, wall-clock timestamps 14:51:55.303478 through 14:51:55.340078, elapsed-time prefixes 00:36:03.462 through 00:36:03.467 ...]
00:36:03.467 [2024-11-02 14:51:55.340253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.467 [2024-11-02 14:51:55.340284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.467 qpair failed and we were unable to recover it. 00:36:03.467 [2024-11-02 14:51:55.340436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.467 [2024-11-02 14:51:55.340463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.467 qpair failed and we were unable to recover it. 00:36:03.467 [2024-11-02 14:51:55.340584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.467 [2024-11-02 14:51:55.340609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.467 qpair failed and we were unable to recover it. 00:36:03.467 [2024-11-02 14:51:55.340746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.468 [2024-11-02 14:51:55.340774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.468 qpair failed and we were unable to recover it. 00:36:03.468 [2024-11-02 14:51:55.340911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.468 [2024-11-02 14:51:55.340937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.468 qpair failed and we were unable to recover it. 00:36:03.468 [2024-11-02 14:51:55.341114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.468 [2024-11-02 14:51:55.341140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.468 qpair failed and we were unable to recover it. 00:36:03.468 [2024-11-02 14:51:55.341292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.468 [2024-11-02 14:51:55.341319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.468 qpair failed and we were unable to recover it. 00:36:03.468 [2024-11-02 14:51:55.341489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.468 [2024-11-02 14:51:55.341537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.468 qpair failed and we were unable to recover it. 00:36:03.468 [2024-11-02 14:51:55.341689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.468 [2024-11-02 14:51:55.341715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.468 qpair failed and we were unable to recover it. 00:36:03.468 [2024-11-02 14:51:55.341865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.468 [2024-11-02 14:51:55.341891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.468 qpair failed and we were unable to recover it. 
00:36:03.468 [2024-11-02 14:51:55.342015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.468 [2024-11-02 14:51:55.342041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.468 qpair failed and we were unable to recover it. 00:36:03.468 [2024-11-02 14:51:55.342214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.468 [2024-11-02 14:51:55.342240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.468 qpair failed and we were unable to recover it. 00:36:03.468 [2024-11-02 14:51:55.342411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.468 [2024-11-02 14:51:55.342439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.468 qpair failed and we were unable to recover it. 00:36:03.468 [2024-11-02 14:51:55.342588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.468 [2024-11-02 14:51:55.342615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.468 qpair failed and we were unable to recover it. 00:36:03.468 [2024-11-02 14:51:55.342760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.468 [2024-11-02 14:51:55.342809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.468 qpair failed and we were unable to recover it. 00:36:03.468 [2024-11-02 14:51:55.343038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.468 [2024-11-02 14:51:55.343064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.468 qpair failed and we were unable to recover it. 00:36:03.468 [2024-11-02 14:51:55.343210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.468 [2024-11-02 14:51:55.343237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.468 qpair failed and we were unable to recover it. 00:36:03.468 [2024-11-02 14:51:55.343418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.468 [2024-11-02 14:51:55.343462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.468 qpair failed and we were unable to recover it. 00:36:03.468 [2024-11-02 14:51:55.343611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.468 [2024-11-02 14:51:55.343655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.468 qpair failed and we were unable to recover it. 00:36:03.468 [2024-11-02 14:51:55.343793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.468 [2024-11-02 14:51:55.343835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.468 qpair failed and we were unable to recover it. 
00:36:03.468 [2024-11-02 14:51:55.344061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.468 [2024-11-02 14:51:55.344086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.468 qpair failed and we were unable to recover it. 00:36:03.468 [2024-11-02 14:51:55.344243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.468 [2024-11-02 14:51:55.344287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.468 qpair failed and we were unable to recover it. 00:36:03.468 [2024-11-02 14:51:55.344480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.468 [2024-11-02 14:51:55.344506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.468 qpair failed and we were unable to recover it. 00:36:03.468 [2024-11-02 14:51:55.344732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.468 [2024-11-02 14:51:55.344758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.468 qpair failed and we were unable to recover it. 00:36:03.468 [2024-11-02 14:51:55.344929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.468 [2024-11-02 14:51:55.344973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.468 qpair failed and we were unable to recover it. 00:36:03.468 [2024-11-02 14:51:55.345118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.468 [2024-11-02 14:51:55.345144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.468 qpair failed and we were unable to recover it. 00:36:03.468 [2024-11-02 14:51:55.345270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.468 [2024-11-02 14:51:55.345297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.468 qpair failed and we were unable to recover it. 00:36:03.468 [2024-11-02 14:51:55.345468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.468 [2024-11-02 14:51:55.345511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.468 qpair failed and we were unable to recover it. 00:36:03.468 [2024-11-02 14:51:55.345718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.468 [2024-11-02 14:51:55.345761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.468 qpair failed and we were unable to recover it. 00:36:03.468 [2024-11-02 14:51:55.345940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.468 [2024-11-02 14:51:55.345983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.468 qpair failed and we were unable to recover it. 
00:36:03.468 [2024-11-02 14:51:55.346108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.468 [2024-11-02 14:51:55.346134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.468 qpair failed and we were unable to recover it. 00:36:03.468 [2024-11-02 14:51:55.346280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.468 [2024-11-02 14:51:55.346307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.468 qpair failed and we were unable to recover it. 00:36:03.468 [2024-11-02 14:51:55.346441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.468 [2024-11-02 14:51:55.346485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.468 qpair failed and we were unable to recover it. 00:36:03.468 [2024-11-02 14:51:55.346683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.468 [2024-11-02 14:51:55.346711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.468 qpair failed and we were unable to recover it. 00:36:03.468 [2024-11-02 14:51:55.346861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.468 [2024-11-02 14:51:55.346904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.468 qpair failed and we were unable to recover it. 00:36:03.468 [2024-11-02 14:51:55.347028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.468 [2024-11-02 14:51:55.347054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.468 qpair failed and we were unable to recover it. 00:36:03.468 [2024-11-02 14:51:55.347201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.468 [2024-11-02 14:51:55.347226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.468 qpair failed and we were unable to recover it. 00:36:03.468 [2024-11-02 14:51:55.347401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.469 [2024-11-02 14:51:55.347449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.469 qpair failed and we were unable to recover it. 00:36:03.469 [2024-11-02 14:51:55.347626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.469 [2024-11-02 14:51:55.347674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.469 qpair failed and we were unable to recover it. 00:36:03.469 [2024-11-02 14:51:55.347842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.469 [2024-11-02 14:51:55.347884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.469 qpair failed and we were unable to recover it. 
00:36:03.469 [2024-11-02 14:51:55.348018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.469 [2024-11-02 14:51:55.348045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.469 qpair failed and we were unable to recover it. 00:36:03.469 [2024-11-02 14:51:55.348196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.469 [2024-11-02 14:51:55.348222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.469 qpair failed and we were unable to recover it. 00:36:03.469 [2024-11-02 14:51:55.348372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.469 [2024-11-02 14:51:55.348417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.469 qpair failed and we were unable to recover it. 00:36:03.469 [2024-11-02 14:51:55.348590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.469 [2024-11-02 14:51:55.348615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.469 qpair failed and we were unable to recover it. 00:36:03.469 [2024-11-02 14:51:55.348773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.469 [2024-11-02 14:51:55.348815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.469 qpair failed and we were unable to recover it. 00:36:03.469 [2024-11-02 14:51:55.349021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.469 [2024-11-02 14:51:55.349048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.469 qpair failed and we were unable to recover it. 00:36:03.469 [2024-11-02 14:51:55.349199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.469 [2024-11-02 14:51:55.349224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.469 qpair failed and we were unable to recover it. 00:36:03.469 [2024-11-02 14:51:55.349373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.469 [2024-11-02 14:51:55.349417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.469 qpair failed and we were unable to recover it. 00:36:03.469 [2024-11-02 14:51:55.349584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.469 [2024-11-02 14:51:55.349626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.469 qpair failed and we were unable to recover it. 00:36:03.469 [2024-11-02 14:51:55.349800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.469 [2024-11-02 14:51:55.349842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.469 qpair failed and we were unable to recover it. 
00:36:03.469 [2024-11-02 14:51:55.349969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.469 [2024-11-02 14:51:55.349994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.469 qpair failed and we were unable to recover it. 00:36:03.469 [2024-11-02 14:51:55.350140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.469 [2024-11-02 14:51:55.350166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.469 qpair failed and we were unable to recover it. 00:36:03.469 [2024-11-02 14:51:55.350332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.469 [2024-11-02 14:51:55.350377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.469 qpair failed and we were unable to recover it. 00:36:03.469 [2024-11-02 14:51:55.350547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.469 [2024-11-02 14:51:55.350590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.469 qpair failed and we were unable to recover it. 00:36:03.469 [2024-11-02 14:51:55.350777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.469 [2024-11-02 14:51:55.350808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.469 qpair failed and we were unable to recover it. 00:36:03.469 [2024-11-02 14:51:55.350951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.469 [2024-11-02 14:51:55.350978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.469 qpair failed and we were unable to recover it. 00:36:03.469 [2024-11-02 14:51:55.351096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.469 [2024-11-02 14:51:55.351121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.469 qpair failed and we were unable to recover it. 00:36:03.469 [2024-11-02 14:51:55.351278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.469 [2024-11-02 14:51:55.351305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.469 qpair failed and we were unable to recover it. 00:36:03.469 [2024-11-02 14:51:55.351469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.469 [2024-11-02 14:51:55.351512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.469 qpair failed and we were unable to recover it. 00:36:03.469 [2024-11-02 14:51:55.351680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.469 [2024-11-02 14:51:55.351723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.469 qpair failed and we were unable to recover it. 
00:36:03.469 [2024-11-02 14:51:55.351875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.469 [2024-11-02 14:51:55.351902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.469 qpair failed and we were unable to recover it. 00:36:03.469 [2024-11-02 14:51:55.352074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.469 [2024-11-02 14:51:55.352100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.469 qpair failed and we were unable to recover it. 00:36:03.469 [2024-11-02 14:51:55.352249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.469 [2024-11-02 14:51:55.352290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.469 qpair failed and we were unable to recover it. 00:36:03.469 [2024-11-02 14:51:55.352469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.469 [2024-11-02 14:51:55.352516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.469 qpair failed and we were unable to recover it. 00:36:03.469 [2024-11-02 14:51:55.352654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.469 [2024-11-02 14:51:55.352697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.469 qpair failed and we were unable to recover it. 00:36:03.469 [2024-11-02 14:51:55.352874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.469 [2024-11-02 14:51:55.352917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.469 qpair failed and we were unable to recover it. 00:36:03.469 [2024-11-02 14:51:55.353048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.469 [2024-11-02 14:51:55.353074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.469 qpair failed and we were unable to recover it. 00:36:03.469 [2024-11-02 14:51:55.353224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.469 [2024-11-02 14:51:55.353250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.469 qpair failed and we were unable to recover it. 00:36:03.469 [2024-11-02 14:51:55.353411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.469 [2024-11-02 14:51:55.353454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.469 qpair failed and we were unable to recover it. 00:36:03.469 [2024-11-02 14:51:55.353616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.469 [2024-11-02 14:51:55.353658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.469 qpair failed and we were unable to recover it. 
00:36:03.469 [2024-11-02 14:51:55.353837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.469 [2024-11-02 14:51:55.353880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.469 qpair failed and we were unable to recover it. 00:36:03.469 [2024-11-02 14:51:55.354030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.469 [2024-11-02 14:51:55.354055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.469 qpair failed and we were unable to recover it. 00:36:03.469 [2024-11-02 14:51:55.354201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.469 [2024-11-02 14:51:55.354226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.469 qpair failed and we were unable to recover it. 00:36:03.469 [2024-11-02 14:51:55.354403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.469 [2024-11-02 14:51:55.354448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.469 qpair failed and we were unable to recover it. 00:36:03.469 [2024-11-02 14:51:55.354610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.469 [2024-11-02 14:51:55.354653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.469 qpair failed and we were unable to recover it. 00:36:03.469 [2024-11-02 14:51:55.354807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.469 [2024-11-02 14:51:55.354850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.470 qpair failed and we were unable to recover it. 00:36:03.470 [2024-11-02 14:51:55.354992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.470 [2024-11-02 14:51:55.355018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.470 qpair failed and we were unable to recover it. 00:36:03.470 [2024-11-02 14:51:55.355169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.470 [2024-11-02 14:51:55.355194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.470 qpair failed and we were unable to recover it. 00:36:03.470 [2024-11-02 14:51:55.355362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.470 [2024-11-02 14:51:55.355406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.470 qpair failed and we were unable to recover it. 00:36:03.470 [2024-11-02 14:51:55.355619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.470 [2024-11-02 14:51:55.355644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.470 qpair failed and we were unable to recover it. 
00:36:03.470 [2024-11-02 14:51:55.355788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.470 [2024-11-02 14:51:55.355815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.470 qpair failed and we were unable to recover it. 00:36:03.470 [2024-11-02 14:51:55.355999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.470 [2024-11-02 14:51:55.356043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.470 qpair failed and we were unable to recover it. 00:36:03.470 [2024-11-02 14:51:55.356169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.470 [2024-11-02 14:51:55.356195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.470 qpair failed and we were unable to recover it. 00:36:03.470 [2024-11-02 14:51:55.356387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.470 [2024-11-02 14:51:55.356432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.470 qpair failed and we were unable to recover it. 00:36:03.470 [2024-11-02 14:51:55.356670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.470 [2024-11-02 14:51:55.356713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.470 qpair failed and we were unable to recover it. 00:36:03.470 [2024-11-02 14:51:55.356919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.470 [2024-11-02 14:51:55.356962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.470 qpair failed and we were unable to recover it. 00:36:03.470 [2024-11-02 14:51:55.357112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.470 [2024-11-02 14:51:55.357138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.470 qpair failed and we were unable to recover it. 00:36:03.470 [2024-11-02 14:51:55.357303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.470 [2024-11-02 14:51:55.357332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.470 qpair failed and we were unable to recover it. 00:36:03.470 [2024-11-02 14:51:55.357533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.470 [2024-11-02 14:51:55.357560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.470 qpair failed and we were unable to recover it. 00:36:03.470 [2024-11-02 14:51:55.357748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.470 [2024-11-02 14:51:55.357777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.470 qpair failed and we were unable to recover it. 
00:36:03.470 [2024-11-02 14:51:55.357969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.470 [2024-11-02 14:51:55.357995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.470 qpair failed and we were unable to recover it. 00:36:03.470 [2024-11-02 14:51:55.358171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.470 [2024-11-02 14:51:55.358196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.470 qpair failed and we were unable to recover it. 00:36:03.470 [2024-11-02 14:51:55.358337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.470 [2024-11-02 14:51:55.358387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.470 qpair failed and we were unable to recover it. 00:36:03.470 [2024-11-02 14:51:55.358547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.470 [2024-11-02 14:51:55.358591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.470 qpair failed and we were unable to recover it. 00:36:03.470 [2024-11-02 14:51:55.358759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.470 [2024-11-02 14:51:55.358806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.470 qpair failed and we were unable to recover it. 00:36:03.470 [2024-11-02 14:51:55.358961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.470 [2024-11-02 14:51:55.358987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.470 qpair failed and we were unable to recover it. 00:36:03.470 [2024-11-02 14:51:55.359132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.470 [2024-11-02 14:51:55.359157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.470 qpair failed and we were unable to recover it. 00:36:03.470 [2024-11-02 14:51:55.359301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.470 [2024-11-02 14:51:55.359330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.470 qpair failed and we were unable to recover it. 00:36:03.470 [2024-11-02 14:51:55.359516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.470 [2024-11-02 14:51:55.359562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.470 qpair failed and we were unable to recover it. 00:36:03.470 [2024-11-02 14:51:55.359757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.470 [2024-11-02 14:51:55.359800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.470 qpair failed and we were unable to recover it. 
00:36:03.470 [2024-11-02 14:51:55.359949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.470 [2024-11-02 14:51:55.359974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.470 qpair failed and we were unable to recover it. 00:36:03.470 [2024-11-02 14:51:55.360122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.470 [2024-11-02 14:51:55.360148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.470 qpair failed and we were unable to recover it. 00:36:03.470 [2024-11-02 14:51:55.360306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.470 [2024-11-02 14:51:55.360334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.470 qpair failed and we were unable to recover it. 00:36:03.470 [2024-11-02 14:51:55.360570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.470 [2024-11-02 14:51:55.360613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.470 qpair failed and we were unable to recover it. 00:36:03.470 [2024-11-02 14:51:55.360781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.470 [2024-11-02 14:51:55.360824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.470 qpair failed and we were unable to recover it. 00:36:03.470 [2024-11-02 14:51:55.360974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.470 [2024-11-02 14:51:55.361000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.470 qpair failed and we were unable to recover it. 00:36:03.470 [2024-11-02 14:51:55.361109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.470 [2024-11-02 14:51:55.361135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.470 qpair failed and we were unable to recover it. 00:36:03.470 [2024-11-02 14:51:55.361267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.470 [2024-11-02 14:51:55.361293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.470 qpair failed and we were unable to recover it. 00:36:03.470 [2024-11-02 14:51:55.361441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.470 [2024-11-02 14:51:55.361471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.470 qpair failed and we were unable to recover it. 00:36:03.470 [2024-11-02 14:51:55.361634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.470 [2024-11-02 14:51:55.361660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.470 qpair failed and we were unable to recover it. 
00:36:03.470 [2024-11-02 14:51:55.361825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.470 [2024-11-02 14:51:55.361854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.470 qpair failed and we were unable to recover it. 00:36:03.470 [2024-11-02 14:51:55.362017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.470 [2024-11-02 14:51:55.362043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.470 qpair failed and we were unable to recover it. 00:36:03.470 [2024-11-02 14:51:55.362164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.470 [2024-11-02 14:51:55.362190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.470 qpair failed and we were unable to recover it. 00:36:03.470 [2024-11-02 14:51:55.362339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.470 [2024-11-02 14:51:55.362383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.470 qpair failed and we were unable to recover it. 00:36:03.471 [2024-11-02 14:51:55.362554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.471 [2024-11-02 14:51:55.362597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.471 qpair failed and we were unable to recover it. 00:36:03.471 [2024-11-02 14:51:55.362799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.471 [2024-11-02 14:51:55.362843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.471 qpair failed and we were unable to recover it. 00:36:03.471 [2024-11-02 14:51:55.362987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.471 [2024-11-02 14:51:55.363013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.471 qpair failed and we were unable to recover it. 00:36:03.471 [2024-11-02 14:51:55.363165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.471 [2024-11-02 14:51:55.363191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.471 qpair failed and we were unable to recover it. 00:36:03.471 [2024-11-02 14:51:55.363370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.471 [2024-11-02 14:51:55.363412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.471 qpair failed and we were unable to recover it. 00:36:03.471 [2024-11-02 14:51:55.363591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.471 [2024-11-02 14:51:55.363633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.471 qpair failed and we were unable to recover it. 
00:36:03.471 [2024-11-02 14:51:55.363784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.471 [2024-11-02 14:51:55.363826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.471 qpair failed and we were unable to recover it. 00:36:03.471 [2024-11-02 14:51:55.364003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.471 [2024-11-02 14:51:55.364029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.471 qpair failed and we were unable to recover it. 00:36:03.471 [2024-11-02 14:51:55.364207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.471 [2024-11-02 14:51:55.364232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.471 qpair failed and we were unable to recover it. 00:36:03.471 [2024-11-02 14:51:55.364419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.471 [2024-11-02 14:51:55.364463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.471 qpair failed and we were unable to recover it. 00:36:03.471 [2024-11-02 14:51:55.364670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.471 [2024-11-02 14:51:55.364696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.471 qpair failed and we were unable to recover it. 00:36:03.471 [2024-11-02 14:51:55.364843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.471 [2024-11-02 14:51:55.364868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.471 qpair failed and we were unable to recover it. 00:36:03.471 [2024-11-02 14:51:55.364986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.471 [2024-11-02 14:51:55.365012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.471 qpair failed and we were unable to recover it. 00:36:03.471 [2024-11-02 14:51:55.365160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.471 [2024-11-02 14:51:55.365185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.471 qpair failed and we were unable to recover it. 00:36:03.471 [2024-11-02 14:51:55.365334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.471 [2024-11-02 14:51:55.365360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.471 qpair failed and we were unable to recover it. 00:36:03.471 [2024-11-02 14:51:55.365534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.471 [2024-11-02 14:51:55.365576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.471 qpair failed and we were unable to recover it. 
00:36:03.471 [2024-11-02 14:51:55.365733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.471 [2024-11-02 14:51:55.365775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.471 qpair failed and we were unable to recover it. 00:36:03.471 [2024-11-02 14:51:55.365922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.471 [2024-11-02 14:51:55.365948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.471 qpair failed and we were unable to recover it. 00:36:03.471 [2024-11-02 14:51:55.366175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.471 [2024-11-02 14:51:55.366201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.471 qpair failed and we were unable to recover it. 00:36:03.471 [2024-11-02 14:51:55.366398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.471 [2024-11-02 14:51:55.366441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.471 qpair failed and we were unable to recover it. 00:36:03.471 [2024-11-02 14:51:55.366614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.471 [2024-11-02 14:51:55.366660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.471 qpair failed and we were unable to recover it. 00:36:03.471 [2024-11-02 14:51:55.366835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.471 [2024-11-02 14:51:55.366878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.471 qpair failed and we were unable to recover it. 00:36:03.471 [2024-11-02 14:51:55.367022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.471 [2024-11-02 14:51:55.367048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.471 qpair failed and we were unable to recover it. 00:36:03.471 [2024-11-02 14:51:55.367168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.471 [2024-11-02 14:51:55.367194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.471 qpair failed and we were unable to recover it. 00:36:03.471 [2024-11-02 14:51:55.367362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.471 [2024-11-02 14:51:55.367406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.471 qpair failed and we were unable to recover it. 00:36:03.471 [2024-11-02 14:51:55.367605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.471 [2024-11-02 14:51:55.367634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.471 qpair failed and we were unable to recover it. 
00:36:03.471 [2024-11-02 14:51:55.367782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.471 [2024-11-02 14:51:55.367825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.471 qpair failed and we were unable to recover it. 00:36:03.471 [2024-11-02 14:51:55.367997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.471 [2024-11-02 14:51:55.368023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.471 qpair failed and we were unable to recover it. 00:36:03.471 [2024-11-02 14:51:55.368251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.471 [2024-11-02 14:51:55.368290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.471 qpair failed and we were unable to recover it. 00:36:03.471 [2024-11-02 14:51:55.368431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.471 [2024-11-02 14:51:55.368476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.471 qpair failed and we were unable to recover it. 00:36:03.471 [2024-11-02 14:51:55.368669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.471 [2024-11-02 14:51:55.368695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.471 qpair failed and we were unable to recover it. 00:36:03.471 [2024-11-02 14:51:55.368814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.471 [2024-11-02 14:51:55.368840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.471 qpair failed and we were unable to recover it. 00:36:03.471 [2024-11-02 14:51:55.368993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.471 [2024-11-02 14:51:55.369019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.471 qpair failed and we were unable to recover it. 00:36:03.471 [2024-11-02 14:51:55.369165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.471 [2024-11-02 14:51:55.369190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.471 qpair failed and we were unable to recover it. 00:36:03.471 [2024-11-02 14:51:55.369333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.471 [2024-11-02 14:51:55.369377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.471 qpair failed and we were unable to recover it. 00:36:03.471 [2024-11-02 14:51:55.369514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.471 [2024-11-02 14:51:55.369557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.471 qpair failed and we were unable to recover it. 
00:36:03.471 [2024-11-02 14:51:55.369702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.471 [2024-11-02 14:51:55.369744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.471 qpair failed and we were unable to recover it. 00:36:03.471 [2024-11-02 14:51:55.369929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.471 [2024-11-02 14:51:55.369955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.471 qpair failed and we were unable to recover it. 00:36:03.471 [2024-11-02 14:51:55.370066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.472 [2024-11-02 14:51:55.370092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.472 qpair failed and we were unable to recover it. 00:36:03.472 [2024-11-02 14:51:55.370238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.472 [2024-11-02 14:51:55.370271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.472 qpair failed and we were unable to recover it. 00:36:03.472 [2024-11-02 14:51:55.370423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.472 [2024-11-02 14:51:55.370465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.472 qpair failed and we were unable to recover it. 00:36:03.472 [2024-11-02 14:51:55.370661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.472 [2024-11-02 14:51:55.370690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.472 qpair failed and we were unable to recover it. 00:36:03.472 [2024-11-02 14:51:55.370849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.472 [2024-11-02 14:51:55.370876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.472 qpair failed and we were unable to recover it. 00:36:03.472 [2024-11-02 14:51:55.371029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.472 [2024-11-02 14:51:55.371055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.472 qpair failed and we were unable to recover it. 00:36:03.472 [2024-11-02 14:51:55.371179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.472 [2024-11-02 14:51:55.371204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.472 qpair failed and we were unable to recover it. 00:36:03.472 [2024-11-02 14:51:55.371374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.472 [2024-11-02 14:51:55.371404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.472 qpair failed and we were unable to recover it. 
00:36:03.472 [2024-11-02 14:51:55.371579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.472 [2024-11-02 14:51:55.371621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.472 qpair failed and we were unable to recover it. 00:36:03.472 [2024-11-02 14:51:55.371830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.472 [2024-11-02 14:51:55.371873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.472 qpair failed and we were unable to recover it. 00:36:03.472 [2024-11-02 14:51:55.371998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.472 [2024-11-02 14:51:55.372025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.472 qpair failed and we were unable to recover it. 00:36:03.472 [2024-11-02 14:51:55.372204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.472 [2024-11-02 14:51:55.372230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.472 qpair failed and we were unable to recover it. 00:36:03.472 [2024-11-02 14:51:55.372403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.472 [2024-11-02 14:51:55.372448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.472 qpair failed and we were unable to recover it. 00:36:03.472 [2024-11-02 14:51:55.372621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.472 [2024-11-02 14:51:55.372667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.472 qpair failed and we were unable to recover it. 00:36:03.472 [2024-11-02 14:51:55.372867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.472 [2024-11-02 14:51:55.372910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.472 qpair failed and we were unable to recover it. 00:36:03.472 [2024-11-02 14:51:55.373030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.472 [2024-11-02 14:51:55.373056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.472 qpair failed and we were unable to recover it. 00:36:03.472 [2024-11-02 14:51:55.373206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.472 [2024-11-02 14:51:55.373231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.472 qpair failed and we were unable to recover it. 00:36:03.472 [2024-11-02 14:51:55.373446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.472 [2024-11-02 14:51:55.373473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.472 qpair failed and we were unable to recover it. 
00:36:03.472 [2024-11-02 14:51:55.373619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.472 [2024-11-02 14:51:55.373644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.472 qpair failed and we were unable to recover it. 00:36:03.472 [2024-11-02 14:51:55.373786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.472 [2024-11-02 14:51:55.373829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.472 qpair failed and we were unable to recover it. 00:36:03.472 [2024-11-02 14:51:55.374055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.472 [2024-11-02 14:51:55.374081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.472 qpair failed and we were unable to recover it. 00:36:03.472 [2024-11-02 14:51:55.374260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.472 [2024-11-02 14:51:55.374286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.472 qpair failed and we were unable to recover it. 00:36:03.472 [2024-11-02 14:51:55.374432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.472 [2024-11-02 14:51:55.374462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.472 qpair failed and we were unable to recover it. 00:36:03.472 [2024-11-02 14:51:55.374658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.472 [2024-11-02 14:51:55.374701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.472 qpair failed and we were unable to recover it. 00:36:03.472 [2024-11-02 14:51:55.374872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.472 [2024-11-02 14:51:55.374914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.472 qpair failed and we were unable to recover it. 00:36:03.472 [2024-11-02 14:51:55.375041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.472 [2024-11-02 14:51:55.375067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.472 qpair failed and we were unable to recover it. 00:36:03.472 [2024-11-02 14:51:55.375212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.472 [2024-11-02 14:51:55.375238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.472 qpair failed and we were unable to recover it. 00:36:03.472 [2024-11-02 14:51:55.375404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.472 [2024-11-02 14:51:55.375430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.472 qpair failed and we were unable to recover it. 
00:36:03.472 [2024-11-02 14:51:55.375577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.472 [2024-11-02 14:51:55.375604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.472 qpair failed and we were unable to recover it. 00:36:03.472 [2024-11-02 14:51:55.375752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.472 [2024-11-02 14:51:55.375796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.472 qpair failed and we were unable to recover it. 00:36:03.472 [2024-11-02 14:51:55.375922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.472 [2024-11-02 14:51:55.375948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.472 qpair failed and we were unable to recover it. 00:36:03.472 [2024-11-02 14:51:55.376123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.472 [2024-11-02 14:51:55.376149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.472 qpair failed and we were unable to recover it. 00:36:03.472 [2024-11-02 14:51:55.376297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.472 [2024-11-02 14:51:55.376323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.472 qpair failed and we were unable to recover it. 00:36:03.472 [2024-11-02 14:51:55.376479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.472 [2024-11-02 14:51:55.376506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.472 qpair failed and we were unable to recover it. 00:36:03.472 [2024-11-02 14:51:55.376652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.472 [2024-11-02 14:51:55.376680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.473 qpair failed and we were unable to recover it. 00:36:03.473 [2024-11-02 14:51:55.376868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.473 [2024-11-02 14:51:55.376893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.473 qpair failed and we were unable to recover it. 00:36:03.473 [2024-11-02 14:51:55.377048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.473 [2024-11-02 14:51:55.377074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.473 qpair failed and we were unable to recover it. 00:36:03.473 [2024-11-02 14:51:55.377222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.473 [2024-11-02 14:51:55.377248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.473 qpair failed and we were unable to recover it. 
00:36:03.473 [2024-11-02 14:51:55.377397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.473 [2024-11-02 14:51:55.377441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.473 qpair failed and we were unable to recover it. 00:36:03.473 [2024-11-02 14:51:55.377671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.473 [2024-11-02 14:51:55.377713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.473 qpair failed and we were unable to recover it. 00:36:03.473 [2024-11-02 14:51:55.377919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.473 [2024-11-02 14:51:55.377962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.473 qpair failed and we were unable to recover it. 00:36:03.473 [2024-11-02 14:51:55.378114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.473 [2024-11-02 14:51:55.378139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.473 qpair failed and we were unable to recover it. 00:36:03.473 [2024-11-02 14:51:55.378298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.473 [2024-11-02 14:51:55.378324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.473 qpair failed and we were unable to recover it. 00:36:03.473 [2024-11-02 14:51:55.378442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.473 [2024-11-02 14:51:55.378468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.473 qpair failed and we were unable to recover it. 00:36:03.473 [2024-11-02 14:51:55.378587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.473 [2024-11-02 14:51:55.378613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.473 qpair failed and we were unable to recover it. 00:36:03.473 [2024-11-02 14:51:55.378731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.473 [2024-11-02 14:51:55.378757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.473 qpair failed and we were unable to recover it. 00:36:03.473 [2024-11-02 14:51:55.378909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.473 [2024-11-02 14:51:55.378934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.473 qpair failed and we were unable to recover it. 00:36:03.473 [2024-11-02 14:51:55.379054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.473 [2024-11-02 14:51:55.379081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.473 qpair failed and we were unable to recover it. 
00:36:03.473 [2024-11-02 14:51:55.379230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.473 [2024-11-02 14:51:55.379266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.473 qpair failed and we were unable to recover it. 00:36:03.473 [2024-11-02 14:51:55.379421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.473 [2024-11-02 14:51:55.379465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.473 qpair failed and we were unable to recover it. 00:36:03.473 [2024-11-02 14:51:55.379634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.473 [2024-11-02 14:51:55.379678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.473 qpair failed and we were unable to recover it. 00:36:03.473 [2024-11-02 14:51:55.379880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.473 [2024-11-02 14:51:55.379923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.473 qpair failed and we were unable to recover it. 00:36:03.473 [2024-11-02 14:51:55.380075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.473 [2024-11-02 14:51:55.380100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.473 qpair failed and we were unable to recover it. 00:36:03.473 [2024-11-02 14:51:55.380244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.473 [2024-11-02 14:51:55.380278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.473 qpair failed and we were unable to recover it. 00:36:03.473 [2024-11-02 14:51:55.380457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.473 [2024-11-02 14:51:55.380483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.473 qpair failed and we were unable to recover it. 00:36:03.473 [2024-11-02 14:51:55.380602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.473 [2024-11-02 14:51:55.380627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.473 qpair failed and we were unable to recover it. 00:36:03.473 [2024-11-02 14:51:55.380827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.473 [2024-11-02 14:51:55.380870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.473 qpair failed and we were unable to recover it. 00:36:03.473 [2024-11-02 14:51:55.380985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.473 [2024-11-02 14:51:55.381012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.473 qpair failed and we were unable to recover it. 
00:36:03.473 [2024-11-02 14:51:55.381165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.473 [2024-11-02 14:51:55.381191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.473 qpair failed and we were unable to recover it. 00:36:03.473 [2024-11-02 14:51:55.381387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.473 [2024-11-02 14:51:55.381432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.473 qpair failed and we were unable to recover it. 00:36:03.473 [2024-11-02 14:51:55.381666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.473 [2024-11-02 14:51:55.381695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.473 qpair failed and we were unable to recover it. 00:36:03.473 [2024-11-02 14:51:55.381888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.473 [2024-11-02 14:51:55.381930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.473 qpair failed and we were unable to recover it. 00:36:03.473 [2024-11-02 14:51:55.382077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.473 [2024-11-02 14:51:55.382103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.473 qpair failed and we were unable to recover it. 00:36:03.473 [2024-11-02 14:51:55.382295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.473 [2024-11-02 14:51:55.382321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.473 qpair failed and we were unable to recover it. 00:36:03.473 [2024-11-02 14:51:55.382474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.473 [2024-11-02 14:51:55.382499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.473 qpair failed and we were unable to recover it. 00:36:03.473 [2024-11-02 14:51:55.382693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.473 [2024-11-02 14:51:55.382719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.473 qpair failed and we were unable to recover it. 00:36:03.473 [2024-11-02 14:51:55.382907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.473 [2024-11-02 14:51:55.382936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.473 qpair failed and we were unable to recover it. 00:36:03.473 [2024-11-02 14:51:55.383066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.473 [2024-11-02 14:51:55.383091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.473 qpair failed and we were unable to recover it. 
00:36:03.473 [2024-11-02 14:51:55.383246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.473 [2024-11-02 14:51:55.383279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.473 qpair failed and we were unable to recover it. 00:36:03.473 [2024-11-02 14:51:55.383482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.473 [2024-11-02 14:51:55.383511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.473 qpair failed and we were unable to recover it. 00:36:03.473 [2024-11-02 14:51:55.383693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.473 [2024-11-02 14:51:55.383738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.473 qpair failed and we were unable to recover it. 00:36:03.473 [2024-11-02 14:51:55.383929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.473 [2024-11-02 14:51:55.383958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.473 qpair failed and we were unable to recover it. 00:36:03.473 [2024-11-02 14:51:55.384089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.474 [2024-11-02 14:51:55.384119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.474 qpair failed and we were unable to recover it. 00:36:03.474 [2024-11-02 14:51:55.384290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.474 [2024-11-02 14:51:55.384317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.474 qpair failed and we were unable to recover it. 00:36:03.474 [2024-11-02 14:51:55.384484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.474 [2024-11-02 14:51:55.384528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.474 qpair failed and we were unable to recover it. 00:36:03.474 [2024-11-02 14:51:55.384668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.474 [2024-11-02 14:51:55.384696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.474 qpair failed and we were unable to recover it. 00:36:03.474 [2024-11-02 14:51:55.384900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.474 [2024-11-02 14:51:55.384927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.474 qpair failed and we were unable to recover it. 00:36:03.474 [2024-11-02 14:51:55.385098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.474 [2024-11-02 14:51:55.385124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.474 qpair failed and we were unable to recover it. 
00:36:03.474 [2024-11-02 14:51:55.385274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.474 [2024-11-02 14:51:55.385301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.474 qpair failed and we were unable to recover it. 00:36:03.474 [2024-11-02 14:51:55.385476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.474 [2024-11-02 14:51:55.385519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.474 qpair failed and we were unable to recover it. 00:36:03.474 [2024-11-02 14:51:55.385682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.474 [2024-11-02 14:51:55.385711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.474 qpair failed and we were unable to recover it. 00:36:03.474 [2024-11-02 14:51:55.385874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.474 [2024-11-02 14:51:55.385900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.474 qpair failed and we were unable to recover it. 00:36:03.474 [2024-11-02 14:51:55.386073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.474 [2024-11-02 14:51:55.386098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.474 qpair failed and we were unable to recover it. 00:36:03.474 [2024-11-02 14:51:55.386268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.474 [2024-11-02 14:51:55.386313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.474 qpair failed and we were unable to recover it. 00:36:03.474 [2024-11-02 14:51:55.386514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.474 [2024-11-02 14:51:55.386557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.474 qpair failed and we were unable to recover it. 00:36:03.474 [2024-11-02 14:51:55.386756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.474 [2024-11-02 14:51:55.386784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.474 qpair failed and we were unable to recover it. 00:36:03.474 [2024-11-02 14:51:55.386950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.474 [2024-11-02 14:51:55.386976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.474 qpair failed and we were unable to recover it. 00:36:03.474 [2024-11-02 14:51:55.387127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.474 [2024-11-02 14:51:55.387152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.474 qpair failed and we were unable to recover it. 
00:36:03.474 [2024-11-02 14:51:55.387386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.474 [2024-11-02 14:51:55.387428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.474 qpair failed and we were unable to recover it. 00:36:03.474 [2024-11-02 14:51:55.387602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.474 [2024-11-02 14:51:55.387632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.474 qpair failed and we were unable to recover it. 00:36:03.474 [2024-11-02 14:51:55.387781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.474 [2024-11-02 14:51:55.387807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.474 qpair failed and we were unable to recover it. 00:36:03.474 [2024-11-02 14:51:55.387956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.474 [2024-11-02 14:51:55.387982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.474 qpair failed and we were unable to recover it. 00:36:03.474 [2024-11-02 14:51:55.388134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.474 [2024-11-02 14:51:55.388160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.474 qpair failed and we were unable to recover it. 00:36:03.474 [2024-11-02 14:51:55.388281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.474 [2024-11-02 14:51:55.388307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.474 qpair failed and we were unable to recover it. 00:36:03.474 [2024-11-02 14:51:55.388451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.474 [2024-11-02 14:51:55.388495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.474 qpair failed and we were unable to recover it. 00:36:03.474 [2024-11-02 14:51:55.388669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.474 [2024-11-02 14:51:55.388712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.474 qpair failed and we were unable to recover it. 00:36:03.474 [2024-11-02 14:51:55.388860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.474 [2024-11-02 14:51:55.388886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.474 qpair failed and we were unable to recover it. 00:36:03.474 [2024-11-02 14:51:55.389016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.474 [2024-11-02 14:51:55.389042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.474 qpair failed and we were unable to recover it. 
00:36:03.474 [2024-11-02 14:51:55.389186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.474 [2024-11-02 14:51:55.389211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.474 qpair failed and we were unable to recover it. 00:36:03.474 [2024-11-02 14:51:55.389364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.474 [2024-11-02 14:51:55.389391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.474 qpair failed and we were unable to recover it. 00:36:03.474 [2024-11-02 14:51:55.389541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.474 [2024-11-02 14:51:55.389567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.474 qpair failed and we were unable to recover it. 00:36:03.474 [2024-11-02 14:51:55.389716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.474 [2024-11-02 14:51:55.389742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.474 qpair failed and we were unable to recover it. 00:36:03.474 [2024-11-02 14:51:55.389890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.474 [2024-11-02 14:51:55.389915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.474 qpair failed and we were unable to recover it. 00:36:03.474 [2024-11-02 14:51:55.390067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.474 [2024-11-02 14:51:55.390094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.474 qpair failed and we were unable to recover it. 00:36:03.474 [2024-11-02 14:51:55.390241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.474 [2024-11-02 14:51:55.390274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.474 qpair failed and we were unable to recover it. 00:36:03.474 [2024-11-02 14:51:55.390447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.474 [2024-11-02 14:51:55.390490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.474 qpair failed and we were unable to recover it. 00:36:03.474 [2024-11-02 14:51:55.390659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.474 [2024-11-02 14:51:55.390703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.474 qpair failed and we were unable to recover it. 00:36:03.474 [2024-11-02 14:51:55.390858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.474 [2024-11-02 14:51:55.390884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.474 qpair failed and we were unable to recover it. 
00:36:03.474 [2024-11-02 14:51:55.391011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.474 [2024-11-02 14:51:55.391038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.474 qpair failed and we were unable to recover it. 00:36:03.474 [2024-11-02 14:51:55.391185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.474 [2024-11-02 14:51:55.391211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.474 qpair failed and we were unable to recover it. 00:36:03.474 [2024-11-02 14:51:55.391382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.474 [2024-11-02 14:51:55.391425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.475 qpair failed and we were unable to recover it. 00:36:03.475 [2024-11-02 14:51:55.391631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.475 [2024-11-02 14:51:55.391674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.475 qpair failed and we were unable to recover it. 00:36:03.475 [2024-11-02 14:51:55.391932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.475 [2024-11-02 14:51:55.391957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.475 qpair failed and we were unable to recover it. 00:36:03.475 [2024-11-02 14:51:55.392109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.475 [2024-11-02 14:51:55.392135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.475 qpair failed and we were unable to recover it. 00:36:03.475 [2024-11-02 14:51:55.392266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.475 [2024-11-02 14:51:55.392293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.475 qpair failed and we were unable to recover it. 00:36:03.475 [2024-11-02 14:51:55.392491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.475 [2024-11-02 14:51:55.392537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.475 qpair failed and we were unable to recover it. 00:36:03.475 [2024-11-02 14:51:55.392723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.475 [2024-11-02 14:51:55.392786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.475 qpair failed and we were unable to recover it. 00:36:03.475 [2024-11-02 14:51:55.392956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.475 [2024-11-02 14:51:55.392982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.475 qpair failed and we were unable to recover it. 
00:36:03.475 [2024-11-02 14:51:55.393128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.475 [2024-11-02 14:51:55.393153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.475 qpair failed and we were unable to recover it. 00:36:03.475 [2024-11-02 14:51:55.393306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.475 [2024-11-02 14:51:55.393332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.475 qpair failed and we were unable to recover it. 00:36:03.475 [2024-11-02 14:51:55.393482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.475 [2024-11-02 14:51:55.393508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.475 qpair failed and we were unable to recover it. 00:36:03.475 [2024-11-02 14:51:55.393684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.475 [2024-11-02 14:51:55.393727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.475 qpair failed and we were unable to recover it. 00:36:03.475 [2024-11-02 14:51:55.393849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.475 [2024-11-02 14:51:55.393874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.475 qpair failed and we were unable to recover it. 00:36:03.475 [2024-11-02 14:51:55.393993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.475 [2024-11-02 14:51:55.394018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.475 qpair failed and we were unable to recover it. 00:36:03.475 [2024-11-02 14:51:55.394176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.475 [2024-11-02 14:51:55.394201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.475 qpair failed and we were unable to recover it. 00:36:03.475 [2024-11-02 14:51:55.394343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.475 [2024-11-02 14:51:55.394386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.475 qpair failed and we were unable to recover it. 00:36:03.475 [2024-11-02 14:51:55.394558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.475 [2024-11-02 14:51:55.394600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.475 qpair failed and we were unable to recover it. 00:36:03.475 [2024-11-02 14:51:55.394768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.475 [2024-11-02 14:51:55.394811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.475 qpair failed and we were unable to recover it. 
00:36:03.475 [2024-11-02 14:51:55.394973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.475 [2024-11-02 14:51:55.394999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.475 qpair failed and we were unable to recover it. 00:36:03.475 [2024-11-02 14:51:55.395146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.475 [2024-11-02 14:51:55.395176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.475 qpair failed and we were unable to recover it. 00:36:03.475 [2024-11-02 14:51:55.395371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.475 [2024-11-02 14:51:55.395415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.475 qpair failed and we were unable to recover it. 00:36:03.475 [2024-11-02 14:51:55.395583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.475 [2024-11-02 14:51:55.395627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.475 qpair failed and we were unable to recover it. 00:36:03.475 [2024-11-02 14:51:55.395795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.475 [2024-11-02 14:51:55.395839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.475 qpair failed and we were unable to recover it. 00:36:03.475 [2024-11-02 14:51:55.395992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.475 [2024-11-02 14:51:55.396019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.475 qpair failed and we were unable to recover it. 00:36:03.475 [2024-11-02 14:51:55.396194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.475 [2024-11-02 14:51:55.396220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.475 qpair failed and we were unable to recover it. 00:36:03.475 [2024-11-02 14:51:55.396410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.475 [2024-11-02 14:51:55.396453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.475 qpair failed and we were unable to recover it. 00:36:03.475 [2024-11-02 14:51:55.396593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.475 [2024-11-02 14:51:55.396636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.475 qpair failed and we were unable to recover it. 00:36:03.475 [2024-11-02 14:51:55.396830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.475 [2024-11-02 14:51:55.396856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.475 qpair failed and we were unable to recover it. 
00:36:03.475 [2024-11-02 14:51:55.397029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.475 [2024-11-02 14:51:55.397054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.475 qpair failed and we were unable to recover it. 00:36:03.475 [2024-11-02 14:51:55.397171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.475 [2024-11-02 14:51:55.397197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.475 qpair failed and we were unable to recover it. 00:36:03.475 [2024-11-02 14:51:55.397358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.475 [2024-11-02 14:51:55.397384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.475 qpair failed and we were unable to recover it. 00:36:03.475 [2024-11-02 14:51:55.397537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.475 [2024-11-02 14:51:55.397580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.475 qpair failed and we were unable to recover it. 00:36:03.475 [2024-11-02 14:51:55.397781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.475 [2024-11-02 14:51:55.397824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.475 qpair failed and we were unable to recover it. 00:36:03.475 [2024-11-02 14:51:55.398006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.475 [2024-11-02 14:51:55.398032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.475 qpair failed and we were unable to recover it. 00:36:03.475 [2024-11-02 14:51:55.398182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.475 [2024-11-02 14:51:55.398208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.475 qpair failed and we were unable to recover it. 00:36:03.475 [2024-11-02 14:51:55.398355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.475 [2024-11-02 14:51:55.398398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.475 qpair failed and we were unable to recover it. 00:36:03.475 [2024-11-02 14:51:55.398553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.475 [2024-11-02 14:51:55.398579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.475 qpair failed and we were unable to recover it. 00:36:03.475 [2024-11-02 14:51:55.398766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.475 [2024-11-02 14:51:55.398792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.475 qpair failed and we were unable to recover it. 
00:36:03.475 [2024-11-02 14:51:55.398955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.475 [2024-11-02 14:51:55.398981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.475 qpair failed and we were unable to recover it. 00:36:03.475 [2024-11-02 14:51:55.399129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.475 [2024-11-02 14:51:55.399155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.475 qpair failed and we were unable to recover it. 00:36:03.475 [2024-11-02 14:51:55.399299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.475 [2024-11-02 14:51:55.399328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.475 qpair failed and we were unable to recover it. 00:36:03.476 [2024-11-02 14:51:55.399543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.476 [2024-11-02 14:51:55.399586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.476 qpair failed and we were unable to recover it. 00:36:03.476 [2024-11-02 14:51:55.399754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.476 [2024-11-02 14:51:55.399796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.476 qpair failed and we were unable to recover it. 00:36:03.476 [2024-11-02 14:51:55.399957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.476 [2024-11-02 14:51:55.399983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.476 qpair failed and we were unable to recover it. 00:36:03.476 [2024-11-02 14:51:55.400142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.476 [2024-11-02 14:51:55.400169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.476 qpair failed and we were unable to recover it. 00:36:03.476 [2024-11-02 14:51:55.400366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.476 [2024-11-02 14:51:55.400409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.476 qpair failed and we were unable to recover it. 00:36:03.476 [2024-11-02 14:51:55.400628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.476 [2024-11-02 14:51:55.400671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.476 qpair failed and we were unable to recover it. 00:36:03.476 [2024-11-02 14:51:55.400812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.476 [2024-11-02 14:51:55.400841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.476 qpair failed and we were unable to recover it. 
00:36:03.481 [2024-11-02 14:51:55.439108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.481 [2024-11-02 14:51:55.439134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.481 qpair failed and we were unable to recover it. 00:36:03.481 [2024-11-02 14:51:55.439298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.481 [2024-11-02 14:51:55.439327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.481 qpair failed and we were unable to recover it. 00:36:03.481 [2024-11-02 14:51:55.439509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.481 [2024-11-02 14:51:55.439538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.481 qpair failed and we were unable to recover it. 00:36:03.481 [2024-11-02 14:51:55.439687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.481 [2024-11-02 14:51:55.439729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.481 qpair failed and we were unable to recover it. 00:36:03.481 [2024-11-02 14:51:55.439878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.481 [2024-11-02 14:51:55.439903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.481 qpair failed and we were unable to recover it. 00:36:03.481 [2024-11-02 14:51:55.440048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.481 [2024-11-02 14:51:55.440073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.481 qpair failed and we were unable to recover it. 00:36:03.481 [2024-11-02 14:51:55.440224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.481 [2024-11-02 14:51:55.440250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.481 qpair failed and we were unable to recover it. 00:36:03.481 [2024-11-02 14:51:55.440414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.481 [2024-11-02 14:51:55.440458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.481 qpair failed and we were unable to recover it. 00:36:03.481 [2024-11-02 14:51:55.440625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.481 [2024-11-02 14:51:55.440667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.481 qpair failed and we were unable to recover it. 00:36:03.481 [2024-11-02 14:51:55.440863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.481 [2024-11-02 14:51:55.440906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.481 qpair failed and we were unable to recover it. 
00:36:03.481 [2024-11-02 14:51:55.441081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.481 [2024-11-02 14:51:55.441107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.481 qpair failed and we were unable to recover it. 00:36:03.481 [2024-11-02 14:51:55.441265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.481 [2024-11-02 14:51:55.441297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.481 qpair failed and we were unable to recover it. 00:36:03.481 [2024-11-02 14:51:55.441477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.481 [2024-11-02 14:51:55.441523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.481 qpair failed and we were unable to recover it. 00:36:03.481 [2024-11-02 14:51:55.441663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.481 [2024-11-02 14:51:55.441707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.481 qpair failed and we were unable to recover it. 00:36:03.481 [2024-11-02 14:51:55.441873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.481 [2024-11-02 14:51:55.441916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.481 qpair failed and we were unable to recover it. 00:36:03.481 [2024-11-02 14:51:55.442063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.481 [2024-11-02 14:51:55.442090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.481 qpair failed and we were unable to recover it. 00:36:03.481 [2024-11-02 14:51:55.442269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.481 [2024-11-02 14:51:55.442296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.481 qpair failed and we were unable to recover it. 00:36:03.481 [2024-11-02 14:51:55.442474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.481 [2024-11-02 14:51:55.442516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.481 qpair failed and we were unable to recover it. 00:36:03.481 [2024-11-02 14:51:55.442672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.481 [2024-11-02 14:51:55.442714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.481 qpair failed and we were unable to recover it. 00:36:03.481 [2024-11-02 14:51:55.442885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.481 [2024-11-02 14:51:55.442928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.481 qpair failed and we were unable to recover it. 
00:36:03.481 [2024-11-02 14:51:55.443079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.481 [2024-11-02 14:51:55.443104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.481 qpair failed and we were unable to recover it. 00:36:03.481 [2024-11-02 14:51:55.443273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.481 [2024-11-02 14:51:55.443299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.481 qpair failed and we were unable to recover it. 00:36:03.481 [2024-11-02 14:51:55.443440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.481 [2024-11-02 14:51:55.443469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.481 qpair failed and we were unable to recover it. 00:36:03.481 [2024-11-02 14:51:55.443645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.481 [2024-11-02 14:51:55.443687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.481 qpair failed and we were unable to recover it. 00:36:03.481 [2024-11-02 14:51:55.443830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.481 [2024-11-02 14:51:55.443874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.481 qpair failed and we were unable to recover it. 00:36:03.481 [2024-11-02 14:51:55.444055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.481 [2024-11-02 14:51:55.444098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.481 qpair failed and we were unable to recover it. 00:36:03.481 [2024-11-02 14:51:55.444251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.481 [2024-11-02 14:51:55.444283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.481 qpair failed and we were unable to recover it. 00:36:03.481 [2024-11-02 14:51:55.444453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.481 [2024-11-02 14:51:55.444495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.481 qpair failed and we were unable to recover it. 00:36:03.481 [2024-11-02 14:51:55.444673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.481 [2024-11-02 14:51:55.444719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.481 qpair failed and we were unable to recover it. 00:36:03.481 [2024-11-02 14:51:55.444919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.481 [2024-11-02 14:51:55.444961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.481 qpair failed and we were unable to recover it. 
00:36:03.481 [2024-11-02 14:51:55.445088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.481 [2024-11-02 14:51:55.445113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.481 qpair failed and we were unable to recover it. 00:36:03.481 [2024-11-02 14:51:55.445232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.481 [2024-11-02 14:51:55.445264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.481 qpair failed and we were unable to recover it. 00:36:03.481 [2024-11-02 14:51:55.445383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.481 [2024-11-02 14:51:55.445409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.481 qpair failed and we were unable to recover it. 00:36:03.481 [2024-11-02 14:51:55.445531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.481 [2024-11-02 14:51:55.445557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.481 qpair failed and we were unable to recover it. 00:36:03.481 [2024-11-02 14:51:55.445707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.481 [2024-11-02 14:51:55.445733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.481 qpair failed and we were unable to recover it. 00:36:03.481 [2024-11-02 14:51:55.445881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.481 [2024-11-02 14:51:55.445908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.481 qpair failed and we were unable to recover it. 00:36:03.481 [2024-11-02 14:51:55.446037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.481 [2024-11-02 14:51:55.446062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.481 qpair failed and we were unable to recover it. 00:36:03.481 [2024-11-02 14:51:55.446215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.481 [2024-11-02 14:51:55.446241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.481 qpair failed and we were unable to recover it. 00:36:03.482 [2024-11-02 14:51:55.446431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.482 [2024-11-02 14:51:55.446475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.482 qpair failed and we were unable to recover it. 00:36:03.482 [2024-11-02 14:51:55.446652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.482 [2024-11-02 14:51:55.446700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.482 qpair failed and we were unable to recover it. 
00:36:03.482 [2024-11-02 14:51:55.446875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.482 [2024-11-02 14:51:55.446918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.482 qpair failed and we were unable to recover it. 00:36:03.482 [2024-11-02 14:51:55.447068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.482 [2024-11-02 14:51:55.447095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.482 qpair failed and we were unable to recover it. 00:36:03.482 [2024-11-02 14:51:55.447251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.482 [2024-11-02 14:51:55.447287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.482 qpair failed and we were unable to recover it. 00:36:03.482 [2024-11-02 14:51:55.447440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.482 [2024-11-02 14:51:55.447482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.482 qpair failed and we were unable to recover it. 00:36:03.482 [2024-11-02 14:51:55.447648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.482 [2024-11-02 14:51:55.447690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.482 qpair failed and we were unable to recover it. 00:36:03.482 [2024-11-02 14:51:55.447900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.482 [2024-11-02 14:51:55.447943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.482 qpair failed and we were unable to recover it. 00:36:03.482 [2024-11-02 14:51:55.448094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.482 [2024-11-02 14:51:55.448121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.482 qpair failed and we were unable to recover it. 00:36:03.482 [2024-11-02 14:51:55.448246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.482 [2024-11-02 14:51:55.448289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.482 qpair failed and we were unable to recover it. 00:36:03.482 [2024-11-02 14:51:55.448465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.482 [2024-11-02 14:51:55.448508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.482 qpair failed and we were unable to recover it. 00:36:03.482 [2024-11-02 14:51:55.448684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.482 [2024-11-02 14:51:55.448727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.482 qpair failed and we were unable to recover it. 
00:36:03.482 [2024-11-02 14:51:55.448881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.482 [2024-11-02 14:51:55.448924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.482 qpair failed and we were unable to recover it. 00:36:03.482 [2024-11-02 14:51:55.449098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.482 [2024-11-02 14:51:55.449128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.482 qpair failed and we were unable to recover it. 00:36:03.482 [2024-11-02 14:51:55.449320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.482 [2024-11-02 14:51:55.449350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.482 qpair failed and we were unable to recover it. 00:36:03.482 [2024-11-02 14:51:55.449565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.482 [2024-11-02 14:51:55.449608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.482 qpair failed and we were unable to recover it. 00:36:03.482 [2024-11-02 14:51:55.449802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.482 [2024-11-02 14:51:55.449845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.482 qpair failed and we were unable to recover it. 00:36:03.482 [2024-11-02 14:51:55.449962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.482 [2024-11-02 14:51:55.449989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.482 qpair failed and we were unable to recover it. 00:36:03.482 [2024-11-02 14:51:55.450111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.482 [2024-11-02 14:51:55.450138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.482 qpair failed and we were unable to recover it. 00:36:03.482 [2024-11-02 14:51:55.450306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.482 [2024-11-02 14:51:55.450337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.482 qpair failed and we were unable to recover it. 00:36:03.482 [2024-11-02 14:51:55.450529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.482 [2024-11-02 14:51:55.450574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.482 qpair failed and we were unable to recover it. 00:36:03.482 [2024-11-02 14:51:55.450748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.482 [2024-11-02 14:51:55.450790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.482 qpair failed and we were unable to recover it. 
00:36:03.482 [2024-11-02 14:51:55.450939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.482 [2024-11-02 14:51:55.450966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.482 qpair failed and we were unable to recover it. 00:36:03.482 [2024-11-02 14:51:55.451140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.482 [2024-11-02 14:51:55.451166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.482 qpair failed and we were unable to recover it. 00:36:03.482 [2024-11-02 14:51:55.451282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.482 [2024-11-02 14:51:55.451308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.482 qpair failed and we were unable to recover it. 00:36:03.482 [2024-11-02 14:51:55.451479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.482 [2024-11-02 14:51:55.451523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.482 qpair failed and we were unable to recover it. 00:36:03.482 [2024-11-02 14:51:55.451701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.482 [2024-11-02 14:51:55.451744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.482 qpair failed and we were unable to recover it. 00:36:03.482 [2024-11-02 14:51:55.451924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.482 [2024-11-02 14:51:55.451950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.482 qpair failed and we were unable to recover it. 00:36:03.482 [2024-11-02 14:51:55.452074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.482 [2024-11-02 14:51:55.452099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.482 qpair failed and we were unable to recover it. 00:36:03.482 [2024-11-02 14:51:55.452273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.482 [2024-11-02 14:51:55.452299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.482 qpair failed and we were unable to recover it. 00:36:03.482 [2024-11-02 14:51:55.452465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.482 [2024-11-02 14:51:55.452507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.482 qpair failed and we were unable to recover it. 00:36:03.482 [2024-11-02 14:51:55.452679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.482 [2024-11-02 14:51:55.452721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.482 qpair failed and we were unable to recover it. 
00:36:03.482 [2024-11-02 14:51:55.452917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.483 [2024-11-02 14:51:55.452960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.483 qpair failed and we were unable to recover it. 00:36:03.483 [2024-11-02 14:51:55.453076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.483 [2024-11-02 14:51:55.453102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.483 qpair failed and we were unable to recover it. 00:36:03.483 [2024-11-02 14:51:55.453249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.483 [2024-11-02 14:51:55.453285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.483 qpair failed and we were unable to recover it. 00:36:03.483 [2024-11-02 14:51:55.453466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.483 [2024-11-02 14:51:55.453509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.483 qpair failed and we were unable to recover it. 00:36:03.483 [2024-11-02 14:51:55.453684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.483 [2024-11-02 14:51:55.453727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.483 qpair failed and we were unable to recover it. 00:36:03.483 [2024-11-02 14:51:55.453913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.483 [2024-11-02 14:51:55.453941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.483 qpair failed and we were unable to recover it. 00:36:03.483 [2024-11-02 14:51:55.454114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.483 [2024-11-02 14:51:55.454140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.483 qpair failed and we were unable to recover it. 00:36:03.483 [2024-11-02 14:51:55.454289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.483 [2024-11-02 14:51:55.454326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.483 qpair failed and we were unable to recover it. 00:36:03.483 [2024-11-02 14:51:55.454526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.483 [2024-11-02 14:51:55.454571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.483 qpair failed and we were unable to recover it. 00:36:03.483 [2024-11-02 14:51:55.454746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.483 [2024-11-02 14:51:55.454791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.483 qpair failed and we were unable to recover it. 
00:36:03.483 [2024-11-02 14:51:55.454944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.483 [2024-11-02 14:51:55.454971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.483 qpair failed and we were unable to recover it. 00:36:03.483 [2024-11-02 14:51:55.455099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.483 [2024-11-02 14:51:55.455125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.483 qpair failed and we were unable to recover it. 00:36:03.483 [2024-11-02 14:51:55.455277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.483 [2024-11-02 14:51:55.455303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.483 qpair failed and we were unable to recover it. 00:36:03.483 [2024-11-02 14:51:55.455440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.483 [2024-11-02 14:51:55.455483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.483 qpair failed and we were unable to recover it. 00:36:03.483 [2024-11-02 14:51:55.455648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.483 [2024-11-02 14:51:55.455691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.483 qpair failed and we were unable to recover it. 00:36:03.483 [2024-11-02 14:51:55.455858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.483 [2024-11-02 14:51:55.455902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.483 qpair failed and we were unable to recover it. 00:36:03.483 [2024-11-02 14:51:55.456028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.483 [2024-11-02 14:51:55.456055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.483 qpair failed and we were unable to recover it. 00:36:03.483 [2024-11-02 14:51:55.456200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.483 [2024-11-02 14:51:55.456226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.483 qpair failed and we were unable to recover it. 00:36:03.483 [2024-11-02 14:51:55.456432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.483 [2024-11-02 14:51:55.456476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.483 qpair failed and we were unable to recover it. 00:36:03.483 [2024-11-02 14:51:55.456680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.483 [2024-11-02 14:51:55.456722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.483 qpair failed and we were unable to recover it. 
00:36:03.483 [2024-11-02 14:51:55.457003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.483 [2024-11-02 14:51:55.457063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.483 qpair failed and we were unable to recover it. 00:36:03.483 [2024-11-02 14:51:55.457216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.483 [2024-11-02 14:51:55.457246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.483 qpair failed and we were unable to recover it. 00:36:03.483 [2024-11-02 14:51:55.457410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.483 [2024-11-02 14:51:55.457454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.483 qpair failed and we were unable to recover it. 00:36:03.483 [2024-11-02 14:51:55.457600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.483 [2024-11-02 14:51:55.457629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.483 qpair failed and we were unable to recover it. 00:36:03.483 [2024-11-02 14:51:55.457796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.483 [2024-11-02 14:51:55.457840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.483 qpair failed and we were unable to recover it. 00:36:03.483 [2024-11-02 14:51:55.457967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.483 [2024-11-02 14:51:55.457994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.483 qpair failed and we were unable to recover it. 00:36:03.483 [2024-11-02 14:51:55.458163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.483 [2024-11-02 14:51:55.458189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.483 qpair failed and we were unable to recover it. 00:36:03.483 [2024-11-02 14:51:55.458330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.483 [2024-11-02 14:51:55.458360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.483 qpair failed and we were unable to recover it. 00:36:03.483 [2024-11-02 14:51:55.458547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.483 [2024-11-02 14:51:55.458589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.483 qpair failed and we were unable to recover it. 00:36:03.483 [2024-11-02 14:51:55.458758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.483 [2024-11-02 14:51:55.458801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.483 qpair failed and we were unable to recover it. 
00:36:03.483 [2024-11-02 14:51:55.458959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.483 [2024-11-02 14:51:55.458986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.483 qpair failed and we were unable to recover it. 00:36:03.483 [2024-11-02 14:51:55.459134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.483 [2024-11-02 14:51:55.459160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.483 qpair failed and we were unable to recover it. 00:36:03.484 [2024-11-02 14:51:55.459282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.484 [2024-11-02 14:51:55.459311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.484 qpair failed and we were unable to recover it. 00:36:03.769 [2024-11-02 14:51:55.459457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.769 [2024-11-02 14:51:55.459502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.769 qpair failed and we were unable to recover it. 00:36:03.769 [2024-11-02 14:51:55.459667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.769 [2024-11-02 14:51:55.459710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.769 qpair failed and we were unable to recover it. 00:36:03.769 [2024-11-02 14:51:55.459959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.769 [2024-11-02 14:51:55.459985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.769 qpair failed and we were unable to recover it. 00:36:03.769 [2024-11-02 14:51:55.460132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.769 [2024-11-02 14:51:55.460158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.769 qpair failed and we were unable to recover it. 00:36:03.769 [2024-11-02 14:51:55.460303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.769 [2024-11-02 14:51:55.460333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.769 qpair failed and we were unable to recover it. 00:36:03.769 [2024-11-02 14:51:55.460495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.769 [2024-11-02 14:51:55.460541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.769 qpair failed and we were unable to recover it. 00:36:03.769 [2024-11-02 14:51:55.460686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.769 [2024-11-02 14:51:55.460729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.769 qpair failed and we were unable to recover it. 
00:36:03.769 [2024-11-02 14:51:55.460879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.769 [2024-11-02 14:51:55.460905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.769 qpair failed and we were unable to recover it. 00:36:03.769 [2024-11-02 14:51:55.461047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.769 [2024-11-02 14:51:55.461072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.769 qpair failed and we were unable to recover it. 00:36:03.769 [2024-11-02 14:51:55.461225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.769 [2024-11-02 14:51:55.461251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.769 qpair failed and we were unable to recover it. 00:36:03.769 [2024-11-02 14:51:55.461457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.769 [2024-11-02 14:51:55.461502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.769 qpair failed and we were unable to recover it. 00:36:03.769 [2024-11-02 14:51:55.461645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.769 [2024-11-02 14:51:55.461688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.769 qpair failed and we were unable to recover it. 00:36:03.769 [2024-11-02 14:51:55.461851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.769 [2024-11-02 14:51:55.461894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.769 qpair failed and we were unable to recover it. 00:36:03.769 [2024-11-02 14:51:55.462016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.769 [2024-11-02 14:51:55.462042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.769 qpair failed and we were unable to recover it. 00:36:03.769 [2024-11-02 14:51:55.462166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.769 [2024-11-02 14:51:55.462193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.769 qpair failed and we were unable to recover it. 00:36:03.769 [2024-11-02 14:51:55.462364] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656260 is same with the state(6) to be set 00:36:03.769 [2024-11-02 14:51:55.462585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.769 [2024-11-02 14:51:55.462629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.769 qpair failed and we were unable to recover it. 
00:36:03.769 [2024-11-02 14:51:55.462801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.769 [2024-11-02 14:51:55.462840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.769 qpair failed and we were unable to recover it. 00:36:03.769 [2024-11-02 14:51:55.462996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.769 [2024-11-02 14:51:55.463024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.769 qpair failed and we were unable to recover it. 00:36:03.769 [2024-11-02 14:51:55.463187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.769 [2024-11-02 14:51:55.463212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.769 qpair failed and we were unable to recover it. 00:36:03.769 [2024-11-02 14:51:55.463362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.769 [2024-11-02 14:51:55.463390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.769 qpair failed and we were unable to recover it. 00:36:03.769 [2024-11-02 14:51:55.463556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.769 [2024-11-02 14:51:55.463594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.769 qpair failed and we were unable to recover it. 00:36:03.769 [2024-11-02 14:51:55.463756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.769 [2024-11-02 14:51:55.463784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.769 qpair failed and we were unable to recover it. 00:36:03.769 [2024-11-02 14:51:55.463928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.769 [2024-11-02 14:51:55.463980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.769 qpair failed and we were unable to recover it. 00:36:03.769 [2024-11-02 14:51:55.464171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.769 [2024-11-02 14:51:55.464199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.769 qpair failed and we were unable to recover it. 00:36:03.769 [2024-11-02 14:51:55.464354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.769 [2024-11-02 14:51:55.464391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.769 qpair failed and we were unable to recover it. 00:36:03.769 [2024-11-02 14:51:55.464516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.769 [2024-11-02 14:51:55.464560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.769 qpair failed and we were unable to recover it. 
00:36:03.769 [2024-11-02 14:51:55.464716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.769 [2024-11-02 14:51:55.464744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.769 qpair failed and we were unable to recover it. 00:36:03.769 [2024-11-02 14:51:55.464930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.769 [2024-11-02 14:51:55.464960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.769 qpair failed and we were unable to recover it. 00:36:03.769 [2024-11-02 14:51:55.465147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.769 [2024-11-02 14:51:55.465182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.769 qpair failed and we were unable to recover it. 00:36:03.769 [2024-11-02 14:51:55.465334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.769 [2024-11-02 14:51:55.465359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.769 qpair failed and we were unable to recover it. 00:36:03.769 [2024-11-02 14:51:55.465506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.769 [2024-11-02 14:51:55.465532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.769 qpair failed and we were unable to recover it. 00:36:03.769 [2024-11-02 14:51:55.465677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.769 [2024-11-02 14:51:55.465707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.769 qpair failed and we were unable to recover it. 00:36:03.770 [2024-11-02 14:51:55.465867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.770 [2024-11-02 14:51:55.465896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.770 qpair failed and we were unable to recover it. 00:36:03.770 [2024-11-02 14:51:55.466101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.770 [2024-11-02 14:51:55.466157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.770 qpair failed and we were unable to recover it. 00:36:03.770 [2024-11-02 14:51:55.466293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.770 [2024-11-02 14:51:55.466323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.770 qpair failed and we were unable to recover it. 00:36:03.770 [2024-11-02 14:51:55.466500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.770 [2024-11-02 14:51:55.466545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.770 qpair failed and we were unable to recover it. 
00:36:03.770 [2024-11-02 14:51:55.466716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.770 [2024-11-02 14:51:55.466760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.770 qpair failed and we were unable to recover it. 00:36:03.770 [2024-11-02 14:51:55.466958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.770 [2024-11-02 14:51:55.467001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.770 qpair failed and we were unable to recover it. 00:36:03.770 [2024-11-02 14:51:55.467150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.770 [2024-11-02 14:51:55.467176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.770 qpair failed and we were unable to recover it. 00:36:03.770 [2024-11-02 14:51:55.467330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.770 [2024-11-02 14:51:55.467357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.770 qpair failed and we were unable to recover it. 00:36:03.770 [2024-11-02 14:51:55.467500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.770 [2024-11-02 14:51:55.467543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.770 qpair failed and we were unable to recover it. 00:36:03.770 [2024-11-02 14:51:55.467720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.770 [2024-11-02 14:51:55.467769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.770 qpair failed and we were unable to recover it. 00:36:03.770 [2024-11-02 14:51:55.467948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.770 [2024-11-02 14:51:55.467975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.770 qpair failed and we were unable to recover it. 00:36:03.770 [2024-11-02 14:51:55.468126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.770 [2024-11-02 14:51:55.468152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.770 qpair failed and we were unable to recover it. 00:36:03.770 [2024-11-02 14:51:55.468282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.770 [2024-11-02 14:51:55.468308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.770 qpair failed and we were unable to recover it. 00:36:03.770 [2024-11-02 14:51:55.468456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.770 [2024-11-02 14:51:55.468481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.770 qpair failed and we were unable to recover it. 
00:36:03.770 [2024-11-02 14:51:55.468687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.770 [2024-11-02 14:51:55.468731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.770 qpair failed and we were unable to recover it. 00:36:03.770 [2024-11-02 14:51:55.468905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.770 [2024-11-02 14:51:55.468952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.770 qpair failed and we were unable to recover it. 00:36:03.770 [2024-11-02 14:51:55.469076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.770 [2024-11-02 14:51:55.469102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.770 qpair failed and we were unable to recover it. 00:36:03.770 [2024-11-02 14:51:55.469221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.770 [2024-11-02 14:51:55.469247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.770 qpair failed and we were unable to recover it. 00:36:03.770 [2024-11-02 14:51:55.469392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.770 [2024-11-02 14:51:55.469436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.770 qpair failed and we were unable to recover it. 00:36:03.770 [2024-11-02 14:51:55.469634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.770 [2024-11-02 14:51:55.469678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.770 qpair failed and we were unable to recover it. 00:36:03.770 [2024-11-02 14:51:55.469850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.770 [2024-11-02 14:51:55.469895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.770 qpair failed and we were unable to recover it. 00:36:03.770 [2024-11-02 14:51:55.470045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.770 [2024-11-02 14:51:55.470070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.770 qpair failed and we were unable to recover it. 00:36:03.770 [2024-11-02 14:51:55.470250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.770 [2024-11-02 14:51:55.470296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.770 qpair failed and we were unable to recover it. 00:36:03.770 [2024-11-02 14:51:55.470440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.770 [2024-11-02 14:51:55.470484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.770 qpair failed and we were unable to recover it. 
00:36:03.770 [2024-11-02 14:51:55.470655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.770 [2024-11-02 14:51:55.470698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.770 qpair failed and we were unable to recover it. 00:36:03.770 [2024-11-02 14:51:55.470867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.770 [2024-11-02 14:51:55.470909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.770 qpair failed and we were unable to recover it. 00:36:03.770 [2024-11-02 14:51:55.471036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.770 [2024-11-02 14:51:55.471062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.770 qpair failed and we were unable to recover it. 00:36:03.770 [2024-11-02 14:51:55.471233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.771 [2024-11-02 14:51:55.471269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.771 qpair failed and we were unable to recover it. 00:36:03.771 [2024-11-02 14:51:55.471413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.771 [2024-11-02 14:51:55.471457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.771 qpair failed and we were unable to recover it. 00:36:03.771 [2024-11-02 14:51:55.471624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.771 [2024-11-02 14:51:55.471667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.771 qpair failed and we were unable to recover it. 00:36:03.771 [2024-11-02 14:51:55.471837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.771 [2024-11-02 14:51:55.471866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.771 qpair failed and we were unable to recover it. 00:36:03.771 [2024-11-02 14:51:55.472031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.771 [2024-11-02 14:51:55.472057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.771 qpair failed and we were unable to recover it. 00:36:03.771 [2024-11-02 14:51:55.472208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.771 [2024-11-02 14:51:55.472233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.771 qpair failed and we were unable to recover it. 00:36:03.771 [2024-11-02 14:51:55.472438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.771 [2024-11-02 14:51:55.472484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.771 qpair failed and we were unable to recover it. 
00:36:03.771 [2024-11-02 14:51:55.472679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.771 [2024-11-02 14:51:55.472723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.771 qpair failed and we were unable to recover it. 00:36:03.771 [2024-11-02 14:51:55.472898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.771 [2024-11-02 14:51:55.472929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.771 qpair failed and we were unable to recover it. 00:36:03.771 [2024-11-02 14:51:55.473055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.771 [2024-11-02 14:51:55.473091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.771 qpair failed and we were unable to recover it. 00:36:03.771 [2024-11-02 14:51:55.473284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.771 [2024-11-02 14:51:55.473313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.771 qpair failed and we were unable to recover it. 00:36:03.771 [2024-11-02 14:51:55.473482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.771 [2024-11-02 14:51:55.473511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.771 qpair failed and we were unable to recover it. 00:36:03.771 [2024-11-02 14:51:55.473678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.771 [2024-11-02 14:51:55.473706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.771 qpair failed and we were unable to recover it. 00:36:03.771 [2024-11-02 14:51:55.473899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.771 [2024-11-02 14:51:55.473946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.771 qpair failed and we were unable to recover it. 00:36:03.771 [2024-11-02 14:51:55.474141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.771 [2024-11-02 14:51:55.474170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.771 qpair failed and we were unable to recover it. 00:36:03.771 [2024-11-02 14:51:55.474322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.771 [2024-11-02 14:51:55.474350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.771 qpair failed and we were unable to recover it. 00:36:03.771 [2024-11-02 14:51:55.474472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.771 [2024-11-02 14:51:55.474498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.771 qpair failed and we were unable to recover it. 
00:36:03.771 [2024-11-02 14:51:55.474641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.771 [2024-11-02 14:51:55.474685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.771 qpair failed and we were unable to recover it. 00:36:03.771 [2024-11-02 14:51:55.474853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.771 [2024-11-02 14:51:55.474897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.771 qpair failed and we were unable to recover it. 00:36:03.771 [2024-11-02 14:51:55.475114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.771 [2024-11-02 14:51:55.475174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.771 qpair failed and we were unable to recover it. 00:36:03.771 [2024-11-02 14:51:55.475323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.771 [2024-11-02 14:51:55.475350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.771 qpair failed and we were unable to recover it. 00:36:03.771 [2024-11-02 14:51:55.475479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.771 [2024-11-02 14:51:55.475506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.771 qpair failed and we were unable to recover it. 00:36:03.771 [2024-11-02 14:51:55.475671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.771 [2024-11-02 14:51:55.475715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.771 qpair failed and we were unable to recover it. 00:36:03.771 [2024-11-02 14:51:55.475891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.771 [2024-11-02 14:51:55.475935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.771 qpair failed and we were unable to recover it. 00:36:03.771 [2024-11-02 14:51:55.476085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.771 [2024-11-02 14:51:55.476111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.771 qpair failed and we were unable to recover it. 00:36:03.771 [2024-11-02 14:51:55.476282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.771 [2024-11-02 14:51:55.476326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.771 qpair failed and we were unable to recover it. 00:36:03.771 [2024-11-02 14:51:55.476497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.771 [2024-11-02 14:51:55.476540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.771 qpair failed and we were unable to recover it. 
00:36:03.771 [2024-11-02 14:51:55.476711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.771 [2024-11-02 14:51:55.476755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.771 qpair failed and we were unable to recover it. 00:36:03.771 [2024-11-02 14:51:55.476930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.771 [2024-11-02 14:51:55.476974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.771 qpair failed and we were unable to recover it. 00:36:03.771 [2024-11-02 14:51:55.477127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.771 [2024-11-02 14:51:55.477154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.771 qpair failed and we were unable to recover it. 00:36:03.771 [2024-11-02 14:51:55.477354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.771 [2024-11-02 14:51:55.477401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.771 qpair failed and we were unable to recover it. 00:36:03.771 [2024-11-02 14:51:55.477607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.771 [2024-11-02 14:51:55.477650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.771 qpair failed and we were unable to recover it. 00:36:03.771 [2024-11-02 14:51:55.477852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.771 [2024-11-02 14:51:55.477881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.771 qpair failed and we were unable to recover it. 00:36:03.771 [2024-11-02 14:51:55.478074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.771 [2024-11-02 14:51:55.478100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.771 qpair failed and we were unable to recover it. 00:36:03.771 [2024-11-02 14:51:55.478248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.771 [2024-11-02 14:51:55.478286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.771 qpair failed and we were unable to recover it. 00:36:03.771 [2024-11-02 14:51:55.478460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.771 [2024-11-02 14:51:55.478504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.771 qpair failed and we were unable to recover it. 00:36:03.771 [2024-11-02 14:51:55.478644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.771 [2024-11-02 14:51:55.478687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.771 qpair failed and we were unable to recover it. 
00:36:03.771 [2024-11-02 14:51:55.478885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.771 [2024-11-02 14:51:55.478914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.771 qpair failed and we were unable to recover it. 00:36:03.771 [2024-11-02 14:51:55.479081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.772 [2024-11-02 14:51:55.479108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.772 qpair failed and we were unable to recover it. 00:36:03.772 [2024-11-02 14:51:55.479371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.772 [2024-11-02 14:51:55.479414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.772 qpair failed and we were unable to recover it. 00:36:03.772 [2024-11-02 14:51:55.479575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.772 [2024-11-02 14:51:55.479617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.772 qpair failed and we were unable to recover it. 00:36:03.772 [2024-11-02 14:51:55.479788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.772 [2024-11-02 14:51:55.479819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.772 qpair failed and we were unable to recover it. 00:36:03.772 [2024-11-02 14:51:55.480037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.772 [2024-11-02 14:51:55.480091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.772 qpair failed and we were unable to recover it. 00:36:03.772 [2024-11-02 14:51:55.480266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.772 [2024-11-02 14:51:55.480294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.772 qpair failed and we were unable to recover it. 00:36:03.772 [2024-11-02 14:51:55.480439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.772 [2024-11-02 14:51:55.480464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.772 qpair failed and we were unable to recover it. 00:36:03.772 [2024-11-02 14:51:55.480654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.772 [2024-11-02 14:51:55.480683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.772 qpair failed and we were unable to recover it. 00:36:03.772 [2024-11-02 14:51:55.480874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.772 [2024-11-02 14:51:55.480924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.772 qpair failed and we were unable to recover it. 
00:36:03.772 [2024-11-02 14:51:55.481068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.772 [2024-11-02 14:51:55.481112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.772 qpair failed and we were unable to recover it. 00:36:03.772 [2024-11-02 14:51:55.481277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.772 [2024-11-02 14:51:55.481319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.772 qpair failed and we were unable to recover it. 00:36:03.772 [2024-11-02 14:51:55.481460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.772 [2024-11-02 14:51:55.481488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.772 qpair failed and we were unable to recover it. 00:36:03.772 [2024-11-02 14:51:55.481665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.772 [2024-11-02 14:51:55.481694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.772 qpair failed and we were unable to recover it. 00:36:03.772 [2024-11-02 14:51:55.481834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.772 [2024-11-02 14:51:55.481864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.772 qpair failed and we were unable to recover it. 00:36:03.772 [2024-11-02 14:51:55.482093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.772 [2024-11-02 14:51:55.482147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.772 qpair failed and we were unable to recover it. 00:36:03.772 [2024-11-02 14:51:55.482304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.772 [2024-11-02 14:51:55.482332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.772 qpair failed and we were unable to recover it. 00:36:03.772 [2024-11-02 14:51:55.482480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.772 [2024-11-02 14:51:55.482526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.772 qpair failed and we were unable to recover it. 00:36:03.772 [2024-11-02 14:51:55.482754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.772 [2024-11-02 14:51:55.482805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.772 qpair failed and we were unable to recover it. 00:36:03.772 [2024-11-02 14:51:55.482999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.772 [2024-11-02 14:51:55.483041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.772 qpair failed and we were unable to recover it. 
00:36:03.772 [2024-11-02 14:51:55.483195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.772 [2024-11-02 14:51:55.483222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.772 qpair failed and we were unable to recover it. 00:36:03.772 [2024-11-02 14:51:55.483407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.772 [2024-11-02 14:51:55.483451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.772 qpair failed and we were unable to recover it. 00:36:03.772 [2024-11-02 14:51:55.483592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.772 [2024-11-02 14:51:55.483634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.772 qpair failed and we were unable to recover it. 00:36:03.772 [2024-11-02 14:51:55.483813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.772 [2024-11-02 14:51:55.483859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.772 qpair failed and we were unable to recover it. 00:36:03.772 [2024-11-02 14:51:55.483990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.772 [2024-11-02 14:51:55.484016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.772 qpair failed and we were unable to recover it. 00:36:03.772 [2024-11-02 14:51:55.484164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.772 [2024-11-02 14:51:55.484189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.772 qpair failed and we were unable to recover it. 00:36:03.772 [2024-11-02 14:51:55.484396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.772 [2024-11-02 14:51:55.484440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.772 qpair failed and we were unable to recover it. 00:36:03.772 [2024-11-02 14:51:55.484692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.772 [2024-11-02 14:51:55.484745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.772 qpair failed and we were unable to recover it. 00:36:03.772 [2024-11-02 14:51:55.484912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.772 [2024-11-02 14:51:55.484941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.772 qpair failed and we were unable to recover it. 00:36:03.772 [2024-11-02 14:51:55.485146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.772 [2024-11-02 14:51:55.485196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.772 qpair failed and we were unable to recover it. 
00:36:03.772 [2024-11-02 14:51:55.485405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.772 [2024-11-02 14:51:55.485434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.772 qpair failed and we were unable to recover it. 00:36:03.772 [2024-11-02 14:51:55.485601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.772 [2024-11-02 14:51:55.485630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.772 qpair failed and we were unable to recover it. 00:36:03.772 [2024-11-02 14:51:55.485795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.772 [2024-11-02 14:51:55.485822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.772 qpair failed and we were unable to recover it. 00:36:03.772 [2024-11-02 14:51:55.485989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.772 [2024-11-02 14:51:55.486017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.772 qpair failed and we were unable to recover it. 00:36:03.772 [2024-11-02 14:51:55.486285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.772 [2024-11-02 14:51:55.486327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.772 qpair failed and we were unable to recover it. 00:36:03.772 [2024-11-02 14:51:55.486512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.772 [2024-11-02 14:51:55.486554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.772 qpair failed and we were unable to recover it. 00:36:03.772 [2024-11-02 14:51:55.486692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.772 [2024-11-02 14:51:55.486736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.772 qpair failed and we were unable to recover it. 00:36:03.772 [2024-11-02 14:51:55.486927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.772 [2024-11-02 14:51:55.486954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.772 qpair failed and we were unable to recover it. 00:36:03.772 [2024-11-02 14:51:55.487115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.772 [2024-11-02 14:51:55.487144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.772 qpair failed and we were unable to recover it. 00:36:03.772 [2024-11-02 14:51:55.487300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.772 [2024-11-02 14:51:55.487346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.773 qpair failed and we were unable to recover it. 
00:36:03.773 [2024-11-02 14:51:55.487500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.773 [2024-11-02 14:51:55.487527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.773 qpair failed and we were unable to recover it. 00:36:03.773 [2024-11-02 14:51:55.487681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.773 [2024-11-02 14:51:55.487724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.773 qpair failed and we were unable to recover it. 00:36:03.773 [2024-11-02 14:51:55.487983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.773 [2024-11-02 14:51:55.488032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.773 qpair failed and we were unable to recover it. 00:36:03.773 [2024-11-02 14:51:55.488205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.773 [2024-11-02 14:51:55.488231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.773 qpair failed and we were unable to recover it. 00:36:03.773 [2024-11-02 14:51:55.488363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.773 [2024-11-02 14:51:55.488390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.773 qpair failed and we were unable to recover it. 00:36:03.773 [2024-11-02 14:51:55.488582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.773 [2024-11-02 14:51:55.488630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.773 qpair failed and we were unable to recover it. 00:36:03.773 [2024-11-02 14:51:55.488843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.773 [2024-11-02 14:51:55.488886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.773 qpair failed and we were unable to recover it. 00:36:03.773 [2024-11-02 14:51:55.489020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.773 [2024-11-02 14:51:55.489063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.773 qpair failed and we were unable to recover it. 00:36:03.773 [2024-11-02 14:51:55.489240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.773 [2024-11-02 14:51:55.489276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.773 qpair failed and we were unable to recover it. 00:36:03.773 [2024-11-02 14:51:55.489448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.773 [2024-11-02 14:51:55.489492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.773 qpair failed and we were unable to recover it. 
00:36:03.773 [2024-11-02 14:51:55.489666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.773 [2024-11-02 14:51:55.489710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.773 qpair failed and we were unable to recover it. 00:36:03.773 [2024-11-02 14:51:55.489866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.773 [2024-11-02 14:51:55.489919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.773 qpair failed and we were unable to recover it. 00:36:03.773 [2024-11-02 14:51:55.490070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.773 [2024-11-02 14:51:55.490096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.773 qpair failed and we were unable to recover it. 00:36:03.773 [2024-11-02 14:51:55.490252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.773 [2024-11-02 14:51:55.490296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.773 qpair failed and we were unable to recover it. 00:36:03.773 [2024-11-02 14:51:55.490469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.773 [2024-11-02 14:51:55.490495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.773 qpair failed and we were unable to recover it. 00:36:03.773 [2024-11-02 14:51:55.490669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.773 [2024-11-02 14:51:55.490713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.773 qpair failed and we were unable to recover it. 00:36:03.773 [2024-11-02 14:51:55.490832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.773 [2024-11-02 14:51:55.490859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.773 qpair failed and we were unable to recover it. 00:36:03.773 [2024-11-02 14:51:55.491038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.773 [2024-11-02 14:51:55.491069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.773 qpair failed and we were unable to recover it. 00:36:03.773 [2024-11-02 14:51:55.491242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.773 [2024-11-02 14:51:55.491275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.773 qpair failed and we were unable to recover it. 00:36:03.773 [2024-11-02 14:51:55.491433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.773 [2024-11-02 14:51:55.491459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.773 qpair failed and we were unable to recover it. 
00:36:03.773 [2024-11-02 14:51:55.491602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.773 [2024-11-02 14:51:55.491630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.773 qpair failed and we were unable to recover it. 00:36:03.773 [2024-11-02 14:51:55.491801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.773 [2024-11-02 14:51:55.491831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.773 qpair failed and we were unable to recover it. 00:36:03.773 [2024-11-02 14:51:55.491967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.773 [2024-11-02 14:51:55.491995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.773 qpair failed and we were unable to recover it. 00:36:03.773 [2024-11-02 14:51:55.492188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.773 [2024-11-02 14:51:55.492213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.773 qpair failed and we were unable to recover it. 00:36:03.773 [2024-11-02 14:51:55.492376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.773 [2024-11-02 14:51:55.492401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.773 qpair failed and we were unable to recover it. 00:36:03.773 [2024-11-02 14:51:55.492566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.773 [2024-11-02 14:51:55.492595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.773 qpair failed and we were unable to recover it. 00:36:03.773 [2024-11-02 14:51:55.492777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.773 [2024-11-02 14:51:55.492810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.773 qpair failed and we were unable to recover it. 00:36:03.773 [2024-11-02 14:51:55.492986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.773 [2024-11-02 14:51:55.493015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.773 qpair failed and we were unable to recover it. 00:36:03.773 [2024-11-02 14:51:55.493179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.773 [2024-11-02 14:51:55.493206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.773 qpair failed and we were unable to recover it. 00:36:03.773 [2024-11-02 14:51:55.493386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.773 [2024-11-02 14:51:55.493411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.773 qpair failed and we were unable to recover it. 
00:36:03.773 [2024-11-02 14:51:55.493578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.773 [2024-11-02 14:51:55.493606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.773 qpair failed and we were unable to recover it. 00:36:03.773 [2024-11-02 14:51:55.493801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.773 [2024-11-02 14:51:55.493829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.773 qpair failed and we were unable to recover it. 00:36:03.773 [2024-11-02 14:51:55.493990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.773 [2024-11-02 14:51:55.494017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.773 qpair failed and we were unable to recover it. 00:36:03.774 [2024-11-02 14:51:55.494180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.774 [2024-11-02 14:51:55.494207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.774 qpair failed and we were unable to recover it. 00:36:03.774 [2024-11-02 14:51:55.494416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.774 [2024-11-02 14:51:55.494442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.774 qpair failed and we were unable to recover it. 00:36:03.774 [2024-11-02 14:51:55.494588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.774 [2024-11-02 14:51:55.494613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.774 qpair failed and we were unable to recover it. 00:36:03.774 [2024-11-02 14:51:55.494768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.774 [2024-11-02 14:51:55.494793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.774 qpair failed and we were unable to recover it. 00:36:03.774 [2024-11-02 14:51:55.494972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.774 [2024-11-02 14:51:55.495000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.774 qpair failed and we were unable to recover it. 00:36:03.774 [2024-11-02 14:51:55.495254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.774 [2024-11-02 14:51:55.495308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.774 qpair failed and we were unable to recover it. 00:36:03.774 [2024-11-02 14:51:55.495428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.774 [2024-11-02 14:51:55.495457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.774 qpair failed and we were unable to recover it. 
00:36:03.774 [2024-11-02 14:51:55.495590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.774 [2024-11-02 14:51:55.495616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.774 qpair failed and we were unable to recover it. 00:36:03.774 [2024-11-02 14:51:55.495789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.774 [2024-11-02 14:51:55.495816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.774 qpair failed and we were unable to recover it. 00:36:03.774 [2024-11-02 14:51:55.495962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.774 [2024-11-02 14:51:55.496005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.774 qpair failed and we were unable to recover it. 00:36:03.774 [2024-11-02 14:51:55.496151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.774 [2024-11-02 14:51:55.496177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.774 qpair failed and we were unable to recover it. 00:36:03.774 [2024-11-02 14:51:55.496308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.774 [2024-11-02 14:51:55.496333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.774 qpair failed and we were unable to recover it. 00:36:03.774 [2024-11-02 14:51:55.496506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.774 [2024-11-02 14:51:55.496531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.774 qpair failed and we were unable to recover it. 00:36:03.774 [2024-11-02 14:51:55.496711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.774 [2024-11-02 14:51:55.496739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.774 qpair failed and we were unable to recover it. 00:36:03.774 [2024-11-02 14:51:55.496939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.774 [2024-11-02 14:51:55.496968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.774 qpair failed and we were unable to recover it. 00:36:03.774 [2024-11-02 14:51:55.497175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.774 [2024-11-02 14:51:55.497203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.774 qpair failed and we were unable to recover it. 00:36:03.774 [2024-11-02 14:51:55.497350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.774 [2024-11-02 14:51:55.497375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.774 qpair failed and we were unable to recover it. 
00:36:03.774 [2024-11-02 14:51:55.497545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.774 [2024-11-02 14:51:55.497569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.774 qpair failed and we were unable to recover it. 00:36:03.774 [2024-11-02 14:51:55.497747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.774 [2024-11-02 14:51:55.497775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.774 qpair failed and we were unable to recover it. 00:36:03.774 [2024-11-02 14:51:55.497944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.774 [2024-11-02 14:51:55.497972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.774 qpair failed and we were unable to recover it. 00:36:03.774 [2024-11-02 14:51:55.498169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.774 [2024-11-02 14:51:55.498197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.774 qpair failed and we were unable to recover it. 00:36:03.774 [2024-11-02 14:51:55.498368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.774 [2024-11-02 14:51:55.498394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.774 qpair failed and we were unable to recover it. 00:36:03.774 [2024-11-02 14:51:55.498533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.774 [2024-11-02 14:51:55.498576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.774 qpair failed and we were unable to recover it. 00:36:03.774 [2024-11-02 14:51:55.498721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.774 [2024-11-02 14:51:55.498746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.774 qpair failed and we were unable to recover it. 00:36:03.774 [2024-11-02 14:51:55.498922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.774 [2024-11-02 14:51:55.498947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.774 qpair failed and we were unable to recover it. 00:36:03.774 [2024-11-02 14:51:55.499138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.774 [2024-11-02 14:51:55.499165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.774 qpair failed and we were unable to recover it. 00:36:03.774 [2024-11-02 14:51:55.499334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.774 [2024-11-02 14:51:55.499359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.774 qpair failed and we were unable to recover it. 
00:36:03.774 [2024-11-02 14:51:55.499488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.774 [2024-11-02 14:51:55.499530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.774 qpair failed and we were unable to recover it. 00:36:03.774 [2024-11-02 14:51:55.499727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.774 [2024-11-02 14:51:55.499753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.774 qpair failed and we were unable to recover it. 00:36:03.774 [2024-11-02 14:51:55.499902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.774 [2024-11-02 14:51:55.499927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.774 qpair failed and we were unable to recover it. 00:36:03.774 [2024-11-02 14:51:55.500129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.774 [2024-11-02 14:51:55.500158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.774 qpair failed and we were unable to recover it. 00:36:03.774 [2024-11-02 14:51:55.500320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.774 [2024-11-02 14:51:55.500349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.774 qpair failed and we were unable to recover it. 00:36:03.774 [2024-11-02 14:51:55.500551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.774 [2024-11-02 14:51:55.500576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.774 qpair failed and we were unable to recover it. 00:36:03.774 [2024-11-02 14:51:55.500750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.774 [2024-11-02 14:51:55.500779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.774 qpair failed and we were unable to recover it. 00:36:03.774 [2024-11-02 14:51:55.500937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.774 [2024-11-02 14:51:55.500966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.774 qpair failed and we were unable to recover it. 00:36:03.774 [2024-11-02 14:51:55.501153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.774 [2024-11-02 14:51:55.501178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.774 qpair failed and we were unable to recover it. 00:36:03.774 [2024-11-02 14:51:55.501308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.774 [2024-11-02 14:51:55.501344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.774 qpair failed and we were unable to recover it. 
00:36:03.774 [2024-11-02 14:51:55.501552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.775 [2024-11-02 14:51:55.501578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.775 qpair failed and we were unable to recover it. 00:36:03.775 [2024-11-02 14:51:55.501727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.775 [2024-11-02 14:51:55.501751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.775 qpair failed and we were unable to recover it. 00:36:03.775 [2024-11-02 14:51:55.501901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.775 [2024-11-02 14:51:55.501926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.775 qpair failed and we were unable to recover it. 00:36:03.775 [2024-11-02 14:51:55.502049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.775 [2024-11-02 14:51:55.502075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.775 qpair failed and we were unable to recover it. 00:36:03.775 [2024-11-02 14:51:55.502242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.775 [2024-11-02 14:51:55.502286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.775 qpair failed and we were unable to recover it. 00:36:03.775 [2024-11-02 14:51:55.502429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.775 [2024-11-02 14:51:55.502453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.775 qpair failed and we were unable to recover it. 00:36:03.775 [2024-11-02 14:51:55.502619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.775 [2024-11-02 14:51:55.502660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.775 qpair failed and we were unable to recover it. 00:36:03.775 [2024-11-02 14:51:55.502857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.775 [2024-11-02 14:51:55.502882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.775 qpair failed and we were unable to recover it. 00:36:03.775 [2024-11-02 14:51:55.503040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.775 [2024-11-02 14:51:55.503064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.775 qpair failed and we were unable to recover it. 00:36:03.775 [2024-11-02 14:51:55.503185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.775 [2024-11-02 14:51:55.503217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.775 qpair failed and we were unable to recover it. 
00:36:03.775 [2024-11-02 14:51:55.503376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.775 [2024-11-02 14:51:55.503402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.775 qpair failed and we were unable to recover it. 00:36:03.775 [2024-11-02 14:51:55.503522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.775 [2024-11-02 14:51:55.503548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.775 qpair failed and we were unable to recover it. 00:36:03.775 [2024-11-02 14:51:55.503707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.775 [2024-11-02 14:51:55.503733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.775 qpair failed and we were unable to recover it. 00:36:03.775 [2024-11-02 14:51:55.503883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.775 [2024-11-02 14:51:55.503909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.775 qpair failed and we were unable to recover it. 00:36:03.775 [2024-11-02 14:51:55.504106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.775 [2024-11-02 14:51:55.504134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.775 qpair failed and we were unable to recover it. 00:36:03.775 [2024-11-02 14:51:55.504320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.775 [2024-11-02 14:51:55.504353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.775 qpair failed and we were unable to recover it. 00:36:03.775 [2024-11-02 14:51:55.504526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.775 [2024-11-02 14:51:55.504550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.775 qpair failed and we were unable to recover it. 00:36:03.775 [2024-11-02 14:51:55.504710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.775 [2024-11-02 14:51:55.504738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.775 qpair failed and we were unable to recover it. 00:36:03.775 [2024-11-02 14:51:55.504931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.775 [2024-11-02 14:51:55.504960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.775 qpair failed and we were unable to recover it. 00:36:03.775 [2024-11-02 14:51:55.505128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.775 [2024-11-02 14:51:55.505153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.775 qpair failed and we were unable to recover it. 
00:36:03.775 [2024-11-02 14:51:55.505328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.775 [2024-11-02 14:51:55.505371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.775 qpair failed and we were unable to recover it. 00:36:03.775 [2024-11-02 14:51:55.505545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.775 [2024-11-02 14:51:55.505569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.775 qpair failed and we were unable to recover it. 00:36:03.775 [2024-11-02 14:51:55.505699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.775 [2024-11-02 14:51:55.505724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.775 qpair failed and we were unable to recover it. 00:36:03.775 [2024-11-02 14:51:55.505876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.775 [2024-11-02 14:51:55.505900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.775 qpair failed and we were unable to recover it. 00:36:03.775 [2024-11-02 14:51:55.506102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.775 [2024-11-02 14:51:55.506129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.775 qpair failed and we were unable to recover it. 00:36:03.775 [2024-11-02 14:51:55.506320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.775 [2024-11-02 14:51:55.506346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.775 qpair failed and we were unable to recover it. 00:36:03.775 [2024-11-02 14:51:55.506499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.775 [2024-11-02 14:51:55.506525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.775 qpair failed and we were unable to recover it. 00:36:03.775 [2024-11-02 14:51:55.506671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.775 [2024-11-02 14:51:55.506696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.775 qpair failed and we were unable to recover it. 00:36:03.775 [2024-11-02 14:51:55.506869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.775 [2024-11-02 14:51:55.506894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.775 qpair failed and we were unable to recover it. 00:36:03.775 [2024-11-02 14:51:55.507039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.775 [2024-11-02 14:51:55.507065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.775 qpair failed and we were unable to recover it. 
00:36:03.775 [2024-11-02 14:51:55.507251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.775 [2024-11-02 14:51:55.507287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.775 qpair failed and we were unable to recover it. 00:36:03.775 [2024-11-02 14:51:55.507462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.775 [2024-11-02 14:51:55.507489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.775 qpair failed and we were unable to recover it. 00:36:03.775 [2024-11-02 14:51:55.507683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.775 [2024-11-02 14:51:55.507712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.775 qpair failed and we were unable to recover it. 00:36:03.775 [2024-11-02 14:51:55.507853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.775 [2024-11-02 14:51:55.507881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.775 qpair failed and we were unable to recover it. 00:36:03.775 [2024-11-02 14:51:55.508058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.775 [2024-11-02 14:51:55.508084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.775 qpair failed and we were unable to recover it. 00:36:03.775 [2024-11-02 14:51:55.508198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.775 [2024-11-02 14:51:55.508241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.775 qpair failed and we were unable to recover it. 00:36:03.775 [2024-11-02 14:51:55.508423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.775 [2024-11-02 14:51:55.508450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.775 qpair failed and we were unable to recover it. 00:36:03.775 [2024-11-02 14:51:55.508602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.775 [2024-11-02 14:51:55.508628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.775 qpair failed and we were unable to recover it. 00:36:03.775 [2024-11-02 14:51:55.508798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.776 [2024-11-02 14:51:55.508826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.776 qpair failed and we were unable to recover it. 00:36:03.776 [2024-11-02 14:51:55.508987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.776 [2024-11-02 14:51:55.509015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.776 qpair failed and we were unable to recover it. 
00:36:03.776 [2024-11-02 14:51:55.509201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.776 [2024-11-02 14:51:55.509226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.776 qpair failed and we were unable to recover it. 00:36:03.776 [2024-11-02 14:51:55.509378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.776 [2024-11-02 14:51:55.509405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.776 qpair failed and we were unable to recover it. 00:36:03.776 [2024-11-02 14:51:55.509525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.776 [2024-11-02 14:51:55.509551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.776 qpair failed and we were unable to recover it. 00:36:03.776 [2024-11-02 14:51:55.509768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.776 [2024-11-02 14:51:55.509794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.776 qpair failed and we were unable to recover it. 00:36:03.776 [2024-11-02 14:51:55.509959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.776 [2024-11-02 14:51:55.509988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.776 qpair failed and we were unable to recover it. 00:36:03.776 [2024-11-02 14:51:55.510177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.776 [2024-11-02 14:51:55.510206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.776 qpair failed and we were unable to recover it. 00:36:03.776 [2024-11-02 14:51:55.510390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.776 [2024-11-02 14:51:55.510416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.776 qpair failed and we were unable to recover it. 00:36:03.776 [2024-11-02 14:51:55.510557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.776 [2024-11-02 14:51:55.510587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.776 qpair failed and we were unable to recover it. 00:36:03.776 [2024-11-02 14:51:55.510762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.776 [2024-11-02 14:51:55.510788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.776 qpair failed and we were unable to recover it. 00:36:03.776 [2024-11-02 14:51:55.510937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.776 [2024-11-02 14:51:55.510966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.776 qpair failed and we were unable to recover it. 
00:36:03.776 [2024-11-02 14:51:55.511108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.776 [2024-11-02 14:51:55.511137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.776 qpair failed and we were unable to recover it. 00:36:03.776 [2024-11-02 14:51:55.511325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.776 [2024-11-02 14:51:55.511354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.776 qpair failed and we were unable to recover it. 00:36:03.776 [2024-11-02 14:51:55.511494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.776 [2024-11-02 14:51:55.511519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.776 qpair failed and we were unable to recover it. 00:36:03.776 [2024-11-02 14:51:55.511636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.776 [2024-11-02 14:51:55.511661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.776 qpair failed and we were unable to recover it. 00:36:03.776 [2024-11-02 14:51:55.511810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.776 [2024-11-02 14:51:55.511835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.776 qpair failed and we were unable to recover it. 00:36:03.776 [2024-11-02 14:51:55.511995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.776 [2024-11-02 14:51:55.512019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.776 qpair failed and we were unable to recover it. 00:36:03.776 [2024-11-02 14:51:55.512193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.776 [2024-11-02 14:51:55.512218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.776 qpair failed and we were unable to recover it. 00:36:03.776 [2024-11-02 14:51:55.512407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.776 [2024-11-02 14:51:55.512436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.776 qpair failed and we were unable to recover it. 00:36:03.776 [2024-11-02 14:51:55.512602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.776 [2024-11-02 14:51:55.512627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.776 qpair failed and we were unable to recover it. 00:36:03.776 [2024-11-02 14:51:55.512778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.776 [2024-11-02 14:51:55.512803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.776 qpair failed and we were unable to recover it. 
00:36:03.776 [2024-11-02 14:51:55.512923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.776 [2024-11-02 14:51:55.512950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.776 qpair failed and we were unable to recover it. 00:36:03.776 [2024-11-02 14:51:55.513130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.776 [2024-11-02 14:51:55.513155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.776 qpair failed and we were unable to recover it. 00:36:03.776 [2024-11-02 14:51:55.513330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.776 [2024-11-02 14:51:55.513359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.776 qpair failed and we were unable to recover it. 00:36:03.776 [2024-11-02 14:51:55.513542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.776 [2024-11-02 14:51:55.513568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.776 qpair failed and we were unable to recover it. 00:36:03.776 [2024-11-02 14:51:55.513690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.776 [2024-11-02 14:51:55.513715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.776 qpair failed and we were unable to recover it. 00:36:03.776 [2024-11-02 14:51:55.513865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.776 [2024-11-02 14:51:55.513890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.776 qpair failed and we were unable to recover it. 00:36:03.776 [2024-11-02 14:51:55.514096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.776 [2024-11-02 14:51:55.514139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.776 qpair failed and we were unable to recover it. 00:36:03.776 [2024-11-02 14:51:55.514285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.776 [2024-11-02 14:51:55.514315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.776 qpair failed and we were unable to recover it. 00:36:03.776 [2024-11-02 14:51:55.514479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.776 [2024-11-02 14:51:55.514505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.776 qpair failed and we were unable to recover it. 00:36:03.776 [2024-11-02 14:51:55.514653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.776 [2024-11-02 14:51:55.514679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.776 qpair failed and we were unable to recover it. 
00:36:03.776 [2024-11-02 14:51:55.514823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.776 [2024-11-02 14:51:55.514849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.776 qpair failed and we were unable to recover it. 00:36:03.776 [2024-11-02 14:51:55.514969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.776 [2024-11-02 14:51:55.514995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.776 qpair failed and we were unable to recover it. 00:36:03.776 [2024-11-02 14:51:55.515172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.776 [2024-11-02 14:51:55.515216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.776 qpair failed and we were unable to recover it. 00:36:03.776 [2024-11-02 14:51:55.515371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.776 [2024-11-02 14:51:55.515397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.776 qpair failed and we were unable to recover it. 00:36:03.776 [2024-11-02 14:51:55.515549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.776 [2024-11-02 14:51:55.515591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.776 qpair failed and we were unable to recover it. 00:36:03.776 [2024-11-02 14:51:55.515804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.776 [2024-11-02 14:51:55.515830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.776 qpair failed and we were unable to recover it. 00:36:03.776 [2024-11-02 14:51:55.516007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.776 [2024-11-02 14:51:55.516033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.776 qpair failed and we were unable to recover it. 00:36:03.777 [2024-11-02 14:51:55.516157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.777 [2024-11-02 14:51:55.516182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.777 qpair failed and we were unable to recover it. 00:36:03.777 [2024-11-02 14:51:55.516300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.777 [2024-11-02 14:51:55.516326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.777 qpair failed and we were unable to recover it. 00:36:03.777 [2024-11-02 14:51:55.516458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.777 [2024-11-02 14:51:55.516483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.777 qpair failed and we were unable to recover it. 
00:36:03.777 [2024-11-02 14:51:55.516635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.777 [2024-11-02 14:51:55.516660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.777 qpair failed and we were unable to recover it. 00:36:03.777 [2024-11-02 14:51:55.516791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.777 [2024-11-02 14:51:55.516817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.777 qpair failed and we were unable to recover it. 00:36:03.777 [2024-11-02 14:51:55.516966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.777 [2024-11-02 14:51:55.516992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.777 qpair failed and we were unable to recover it. 00:36:03.777 [2024-11-02 14:51:55.517152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.777 [2024-11-02 14:51:55.517180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.777 qpair failed and we were unable to recover it. 00:36:03.777 [2024-11-02 14:51:55.517357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.777 [2024-11-02 14:51:55.517383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.777 qpair failed and we were unable to recover it. 00:36:03.777 [2024-11-02 14:51:55.517538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.777 [2024-11-02 14:51:55.517564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.777 qpair failed and we were unable to recover it. 00:36:03.777 [2024-11-02 14:51:55.517683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.777 [2024-11-02 14:51:55.517727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.777 qpair failed and we were unable to recover it. 00:36:03.777 [2024-11-02 14:51:55.517863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.777 [2024-11-02 14:51:55.517890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.777 qpair failed and we were unable to recover it. 00:36:03.777 [2024-11-02 14:51:55.518086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.777 [2024-11-02 14:51:55.518112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.777 qpair failed and we were unable to recover it. 00:36:03.777 [2024-11-02 14:51:55.518263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.777 [2024-11-02 14:51:55.518289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.777 qpair failed and we were unable to recover it. 
00:36:03.777 [2024-11-02 14:51:55.518432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.777 [2024-11-02 14:51:55.518458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.777 qpair failed and we were unable to recover it. 00:36:03.777 [2024-11-02 14:51:55.518584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.777 [2024-11-02 14:51:55.518610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.777 qpair failed and we were unable to recover it. 00:36:03.777 [2024-11-02 14:51:55.518726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.777 [2024-11-02 14:51:55.518752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.777 qpair failed and we were unable to recover it. 00:36:03.777 [2024-11-02 14:51:55.518900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.777 [2024-11-02 14:51:55.518926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.777 qpair failed and we were unable to recover it. 00:36:03.777 [2024-11-02 14:51:55.519062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.777 [2024-11-02 14:51:55.519090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.777 qpair failed and we were unable to recover it. 00:36:03.777 [2024-11-02 14:51:55.519243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.777 [2024-11-02 14:51:55.519279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.777 qpair failed and we were unable to recover it. 00:36:03.777 [2024-11-02 14:51:55.519429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.777 [2024-11-02 14:51:55.519454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.777 qpair failed and we were unable to recover it. 00:36:03.777 [2024-11-02 14:51:55.519600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.777 [2024-11-02 14:51:55.519625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.777 qpair failed and we were unable to recover it. 00:36:03.777 [2024-11-02 14:51:55.519770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.777 [2024-11-02 14:51:55.519798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.777 qpair failed and we were unable to recover it. 00:36:03.777 [2024-11-02 14:51:55.519986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.777 [2024-11-02 14:51:55.520014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.777 qpair failed and we were unable to recover it. 
00:36:03.777 [2024-11-02 14:51:55.520187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.777 [2024-11-02 14:51:55.520212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.777 qpair failed and we were unable to recover it. 00:36:03.777 [2024-11-02 14:51:55.520368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.777 [2024-11-02 14:51:55.520394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.777 qpair failed and we were unable to recover it. 00:36:03.777 [2024-11-02 14:51:55.520521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.777 [2024-11-02 14:51:55.520564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.777 qpair failed and we were unable to recover it. 00:36:03.777 [2024-11-02 14:51:55.520736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.777 [2024-11-02 14:51:55.520767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.777 qpair failed and we were unable to recover it. 00:36:03.777 [2024-11-02 14:51:55.520923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.777 [2024-11-02 14:51:55.520948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.777 qpair failed and we were unable to recover it. 00:36:03.777 [2024-11-02 14:51:55.521155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.777 [2024-11-02 14:51:55.521180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.777 qpair failed and we were unable to recover it. 00:36:03.777 [2024-11-02 14:51:55.521326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.777 [2024-11-02 14:51:55.521352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.777 qpair failed and we were unable to recover it. 00:36:03.777 [2024-11-02 14:51:55.521547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.777 [2024-11-02 14:51:55.521574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.777 qpair failed and we were unable to recover it. 00:36:03.777 [2024-11-02 14:51:55.521828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.777 [2024-11-02 14:51:55.521883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.777 qpair failed and we were unable to recover it. 00:36:03.777 [2024-11-02 14:51:55.522050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.777 [2024-11-02 14:51:55.522075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.777 qpair failed and we were unable to recover it. 
00:36:03.777 [2024-11-02 14:51:55.522220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.777 [2024-11-02 14:51:55.522246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.777 qpair failed and we were unable to recover it. 00:36:03.777 [2024-11-02 14:51:55.522415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.777 [2024-11-02 14:51:55.522440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.777 qpair failed and we were unable to recover it. 00:36:03.777 [2024-11-02 14:51:55.522562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.777 [2024-11-02 14:51:55.522587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.777 qpair failed and we were unable to recover it. 00:36:03.777 [2024-11-02 14:51:55.522716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.777 [2024-11-02 14:51:55.522741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.777 qpair failed and we were unable to recover it. 00:36:03.777 [2024-11-02 14:51:55.522901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.777 [2024-11-02 14:51:55.522926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.777 qpair failed and we were unable to recover it. 00:36:03.778 [2024-11-02 14:51:55.523095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.778 [2024-11-02 14:51:55.523122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.778 qpair failed and we were unable to recover it. 00:36:03.778 [2024-11-02 14:51:55.523299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.778 [2024-11-02 14:51:55.523325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.778 qpair failed and we were unable to recover it. 00:36:03.778 [2024-11-02 14:51:55.523490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.778 [2024-11-02 14:51:55.523516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.778 qpair failed and we were unable to recover it. 00:36:03.778 [2024-11-02 14:51:55.523663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.778 [2024-11-02 14:51:55.523687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.778 qpair failed and we were unable to recover it. 00:36:03.778 [2024-11-02 14:51:55.523835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.778 [2024-11-02 14:51:55.523860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.778 qpair failed and we were unable to recover it. 
00:36:03.778 [2024-11-02 14:51:55.523981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.778 [2024-11-02 14:51:55.524006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.778 qpair failed and we were unable to recover it. 00:36:03.778 [2024-11-02 14:51:55.524130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.778 [2024-11-02 14:51:55.524155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.778 qpair failed and we were unable to recover it. 00:36:03.778 [2024-11-02 14:51:55.524332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.778 [2024-11-02 14:51:55.524376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.778 qpair failed and we were unable to recover it. 00:36:03.778 [2024-11-02 14:51:55.524516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.778 [2024-11-02 14:51:55.524546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.778 qpair failed and we were unable to recover it. 00:36:03.778 [2024-11-02 14:51:55.524719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.778 [2024-11-02 14:51:55.524744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.778 qpair failed and we were unable to recover it. 00:36:03.778 [2024-11-02 14:51:55.524896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.778 [2024-11-02 14:51:55.524922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.778 qpair failed and we were unable to recover it. 00:36:03.778 [2024-11-02 14:51:55.525062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.778 [2024-11-02 14:51:55.525105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.778 qpair failed and we were unable to recover it. 00:36:03.778 [2024-11-02 14:51:55.525309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.778 [2024-11-02 14:51:55.525334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.778 qpair failed and we were unable to recover it. 00:36:03.778 [2024-11-02 14:51:55.525472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.778 [2024-11-02 14:51:55.525500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.778 qpair failed and we were unable to recover it. 00:36:03.778 [2024-11-02 14:51:55.525644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.778 [2024-11-02 14:51:55.525672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.778 qpair failed and we were unable to recover it. 
00:36:03.778 [2024-11-02 14:51:55.525831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.778 [2024-11-02 14:51:55.525856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.778 qpair failed and we were unable to recover it. 00:36:03.778 [2024-11-02 14:51:55.525973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.778 [2024-11-02 14:51:55.526015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.778 qpair failed and we were unable to recover it. 00:36:03.778 [2024-11-02 14:51:55.526178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.778 [2024-11-02 14:51:55.526206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.778 qpair failed and we were unable to recover it. 00:36:03.778 [2024-11-02 14:51:55.526365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.778 [2024-11-02 14:51:55.526392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.778 qpair failed and we were unable to recover it. 00:36:03.778 [2024-11-02 14:51:55.526541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.778 [2024-11-02 14:51:55.526583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.778 qpair failed and we were unable to recover it. 00:36:03.778 [2024-11-02 14:51:55.526756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.778 [2024-11-02 14:51:55.526781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.778 qpair failed and we were unable to recover it. 00:36:03.778 [2024-11-02 14:51:55.526957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.778 [2024-11-02 14:51:55.526983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.778 qpair failed and we were unable to recover it. 00:36:03.778 [2024-11-02 14:51:55.527148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.778 [2024-11-02 14:51:55.527176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.778 qpair failed and we were unable to recover it. 00:36:03.778 [2024-11-02 14:51:55.527302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.778 [2024-11-02 14:51:55.527346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.778 qpair failed and we were unable to recover it. 00:36:03.778 [2024-11-02 14:51:55.527497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.778 [2024-11-02 14:51:55.527522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.778 qpair failed and we were unable to recover it. 
00:36:03.778 [2024-11-02 14:51:55.527669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.778 [2024-11-02 14:51:55.527694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.778 qpair failed and we were unable to recover it. 00:36:03.778 [2024-11-02 14:51:55.527846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.778 [2024-11-02 14:51:55.527888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.778 qpair failed and we were unable to recover it. 00:36:03.778 [2024-11-02 14:51:55.528111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.778 [2024-11-02 14:51:55.528139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.778 qpair failed and we were unable to recover it. 00:36:03.778 [2024-11-02 14:51:55.528290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.778 [2024-11-02 14:51:55.528321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.778 qpair failed and we were unable to recover it. 00:36:03.778 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1535789 Killed "${NVMF_APP[@]}" "$@" 00:36:03.778 [2024-11-02 14:51:55.528501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.778 [2024-11-02 14:51:55.528537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.778 qpair failed and we were unable to recover it. 00:36:03.778 [2024-11-02 14:51:55.528665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.778 [2024-11-02 14:51:55.528692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.778 qpair failed and we were unable to recover it. 00:36:03.778 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:36:03.778 [2024-11-02 14:51:55.528857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.778 [2024-11-02 14:51:55.528886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.778 qpair failed and we were unable to recover it. 00:36:03.778 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:36:03.779 [2024-11-02 14:51:55.529025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.779 [2024-11-02 14:51:55.529054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.779 qpair failed and we were unable to recover it. 
00:36:03.779 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:36:03.779 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:03.779 [2024-11-02 14:51:55.529217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.779 [2024-11-02 14:51:55.529244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.779 qpair failed and we were unable to recover it. 00:36:03.779 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:03.779 [2024-11-02 14:51:55.529374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.779 [2024-11-02 14:51:55.529401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.779 qpair failed and we were unable to recover it. 00:36:03.779 [2024-11-02 14:51:55.529521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.779 [2024-11-02 14:51:55.529548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.779 qpair failed and we were unable to recover it. 00:36:03.779 [2024-11-02 14:51:55.529696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.779 [2024-11-02 14:51:55.529722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.779 qpair failed and we were unable to recover it. 00:36:03.779 [2024-11-02 14:51:55.529871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.779 [2024-11-02 14:51:55.529897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.779 qpair failed and we were unable to recover it. 00:36:03.779 [2024-11-02 14:51:55.530024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.779 [2024-11-02 14:51:55.530050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.779 qpair failed and we were unable to recover it. 00:36:03.779 [2024-11-02 14:51:55.530208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.779 [2024-11-02 14:51:55.530234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.779 qpair failed and we were unable to recover it. 00:36:03.779 [2024-11-02 14:51:55.530395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.779 [2024-11-02 14:51:55.530421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.779 qpair failed and we were unable to recover it. 00:36:03.779 [2024-11-02 14:51:55.530575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.779 [2024-11-02 14:51:55.530618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.779 qpair failed and we were unable to recover it. 
00:36:03.779 [2024-11-02 14:51:55.530789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.779 [2024-11-02 14:51:55.530815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.779 qpair failed and we were unable to recover it. 00:36:03.779 [2024-11-02 14:51:55.530931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.779 [2024-11-02 14:51:55.530957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.779 qpair failed and we were unable to recover it. 00:36:03.779 [2024-11-02 14:51:55.531106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.779 [2024-11-02 14:51:55.531131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.779 qpair failed and we were unable to recover it. 00:36:03.779 [2024-11-02 14:51:55.531340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.779 [2024-11-02 14:51:55.531366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.779 qpair failed and we were unable to recover it. 00:36:03.779 [2024-11-02 14:51:55.531516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.779 [2024-11-02 14:51:55.531542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.779 qpair failed and we were unable to recover it. 00:36:03.779 [2024-11-02 14:51:55.531687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.779 [2024-11-02 14:51:55.531716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.779 qpair failed and we were unable to recover it. 00:36:03.779 [2024-11-02 14:51:55.531888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.779 [2024-11-02 14:51:55.531913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.779 qpair failed and we were unable to recover it. 00:36:03.779 [2024-11-02 14:51:55.532079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.779 [2024-11-02 14:51:55.532107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.779 qpair failed and we were unable to recover it. 00:36:03.779 [2024-11-02 14:51:55.532272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.779 [2024-11-02 14:51:55.532314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.779 qpair failed and we were unable to recover it. 00:36:03.779 [2024-11-02 14:51:55.532471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.779 [2024-11-02 14:51:55.532496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.779 qpair failed and we were unable to recover it. 
00:36:03.779 [2024-11-02 14:51:55.532696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.779 [2024-11-02 14:51:55.532724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.779 qpair failed and we were unable to recover it. 00:36:03.779 [2024-11-02 14:51:55.532914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.779 [2024-11-02 14:51:55.532943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.779 qpair failed and we were unable to recover it. 00:36:03.779 [2024-11-02 14:51:55.533086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.779 [2024-11-02 14:51:55.533111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.779 qpair failed and we were unable to recover it. 00:36:03.779 [2024-11-02 14:51:55.533243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.779 [2024-11-02 14:51:55.533275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.779 qpair failed and we were unable to recover it. 00:36:03.779 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # nvmfpid=1536319 00:36:03.779 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:36:03.779 [2024-11-02 14:51:55.533408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.779 [2024-11-02 14:51:55.533435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.779 qpair failed and we were unable to recover it. 00:36:03.779 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # waitforlisten 1536319 00:36:03.779 [2024-11-02 14:51:55.533561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.779 [2024-11-02 14:51:55.533594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.779 qpair failed and we were unable to recover it. 00:36:03.779 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1536319 ']' 00:36:03.779 [2024-11-02 14:51:55.533749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.779 [2024-11-02 14:51:55.533793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.779 qpair failed and we were unable to recover it. 00:36:03.779 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:36:03.779 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100
00:36:03.779 [2024-11-02 14:51:55.533965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.779 [2024-11-02 14:51:55.533994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420
00:36:03.779 qpair failed and we were unable to recover it.
00:36:03.779 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:36:03.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:36:03.779 [2024-11-02 14:51:55.534164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.779 [2024-11-02 14:51:55.534189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420
00:36:03.779 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable
00:36:03.779 qpair failed and we were unable to recover it.
00:36:03.779 [2024-11-02 14:51:55.534313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.779 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:03.779 [2024-11-02 14:51:55.534340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420
00:36:03.779 qpair failed and we were unable to recover it.
00:36:03.779 [2024-11-02 14:51:55.534497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.779 [2024-11-02 14:51:55.534528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420
00:36:03.779 qpair failed and we were unable to recover it.
00:36:03.779 [2024-11-02 14:51:55.534676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.779 [2024-11-02 14:51:55.534702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420
00:36:03.779 qpair failed and we were unable to recover it.
00:36:03.779 [2024-11-02 14:51:55.534849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.779 [2024-11-02 14:51:55.534874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420
00:36:03.779 qpair failed and we were unable to recover it.
00:36:03.779 [2024-11-02 14:51:55.535026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.780 [2024-11-02 14:51:55.535052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420
00:36:03.780 qpair failed and we were unable to recover it.
00:36:03.780 [2024-11-02 14:51:55.535203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.780 [2024-11-02 14:51:55.535228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420
00:36:03.780 qpair failed and we were unable to recover it.
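The shell trace interleaved above shows the harness starting a fresh target in the cvl_0_0_ns_spdk namespace (nvmf_tgt -i 0 -e 0xFFFF -m 0xF0) and then calling waitforlisten 1536319, which, per the traced autotest_common.sh lines, waits with max_retries=100 for the new process to listen on the UNIX domain socket /var/tmp/spdk.sock. The sketch below only illustrates that wait pattern in C; it is not SPDK's actual waitforlisten helper (which is a shell function), and the socket path and retry count are simply taken from the log.

    /* wait_for_unix_listener.c - poll until a UNIX domain socket accepts
     * connections; illustrative only, not the SPDK shell helper itself. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    static int wait_for_listener(const char *path, int max_retries)
    {
        for (int i = 0; i < max_retries; i++) {
            int fd = socket(AF_UNIX, SOCK_STREAM, 0);
            if (fd < 0)
                return -1;

            struct sockaddr_un sa = { 0 };
            sa.sun_family = AF_UNIX;
            strncpy(sa.sun_path, path, sizeof(sa.sun_path) - 1);

            if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) == 0) {
                close(fd);
                return 0;               /* target is up and listening */
            }
            close(fd);
            usleep(100 * 1000);         /* retry every 100 ms */
        }
        return -1;                      /* gave up after max_retries attempts */
    }

    int main(void)
    {
        /* Path and retry budget mirror the log: /var/tmp/spdk.sock, max_retries=100 */
        if (wait_for_listener("/var/tmp/spdk.sock", 100) == 0)
            puts("listener is ready");
        else
            puts("timed out waiting for listener");
        return 0;
    }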
00:36:03.780 [2024-11-02 14:51:55.535363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.780 [2024-11-02 14:51:55.535392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.780 qpair failed and we were unable to recover it. 00:36:03.780 [2024-11-02 14:51:55.535562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.780 [2024-11-02 14:51:55.535593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.780 qpair failed and we were unable to recover it. 00:36:03.780 [2024-11-02 14:51:55.535761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.780 [2024-11-02 14:51:55.535787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.780 qpair failed and we were unable to recover it. 00:36:03.780 [2024-11-02 14:51:55.535951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.780 [2024-11-02 14:51:55.535976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.780 qpair failed and we were unable to recover it. 00:36:03.780 [2024-11-02 14:51:55.536178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.780 [2024-11-02 14:51:55.536206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.780 qpair failed and we were unable to recover it. 00:36:03.780 [2024-11-02 14:51:55.536385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.780 [2024-11-02 14:51:55.536411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.780 qpair failed and we were unable to recover it. 00:36:03.780 [2024-11-02 14:51:55.536529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.780 [2024-11-02 14:51:55.536554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.780 qpair failed and we were unable to recover it. 00:36:03.780 [2024-11-02 14:51:55.536728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.780 [2024-11-02 14:51:55.536771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.780 qpair failed and we were unable to recover it. 00:36:03.780 [2024-11-02 14:51:55.536922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.780 [2024-11-02 14:51:55.536948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.780 qpair failed and we were unable to recover it. 00:36:03.780 [2024-11-02 14:51:55.537099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.780 [2024-11-02 14:51:55.537125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.780 qpair failed and we were unable to recover it. 
00:36:03.780 [2024-11-02 14:51:55.537328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.780 [2024-11-02 14:51:55.537358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.780 qpair failed and we were unable to recover it. 00:36:03.780 [2024-11-02 14:51:55.537530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.780 [2024-11-02 14:51:55.537555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.780 qpair failed and we were unable to recover it. 00:36:03.780 [2024-11-02 14:51:55.537682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.780 [2024-11-02 14:51:55.537723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.780 qpair failed and we were unable to recover it. 00:36:03.780 [2024-11-02 14:51:55.537922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.780 [2024-11-02 14:51:55.537947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.780 qpair failed and we were unable to recover it. 00:36:03.780 [2024-11-02 14:51:55.538110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.780 [2024-11-02 14:51:55.538136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.780 qpair failed and we were unable to recover it. 00:36:03.780 [2024-11-02 14:51:55.538305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.780 [2024-11-02 14:51:55.538334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.780 qpair failed and we were unable to recover it. 00:36:03.780 [2024-11-02 14:51:55.538496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.780 [2024-11-02 14:51:55.538523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.780 qpair failed and we were unable to recover it. 00:36:03.780 [2024-11-02 14:51:55.538692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.780 [2024-11-02 14:51:55.538717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.780 qpair failed and we were unable to recover it. 00:36:03.780 [2024-11-02 14:51:55.538846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.780 [2024-11-02 14:51:55.538891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.780 qpair failed and we were unable to recover it. 00:36:03.780 [2024-11-02 14:51:55.539054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.780 [2024-11-02 14:51:55.539082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.780 qpair failed and we were unable to recover it. 
00:36:03.780 [2024-11-02 14:51:55.539250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.780 [2024-11-02 14:51:55.539282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.780 qpair failed and we were unable to recover it. 00:36:03.780 [2024-11-02 14:51:55.539443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.780 [2024-11-02 14:51:55.539472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.780 qpair failed and we were unable to recover it. 00:36:03.780 [2024-11-02 14:51:55.539602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.780 [2024-11-02 14:51:55.539631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.780 qpair failed and we were unable to recover it. 00:36:03.780 [2024-11-02 14:51:55.539807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.780 [2024-11-02 14:51:55.539838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.780 qpair failed and we were unable to recover it. 00:36:03.780 [2024-11-02 14:51:55.540024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.780 [2024-11-02 14:51:55.540053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.780 qpair failed and we were unable to recover it. 00:36:03.780 [2024-11-02 14:51:55.540194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.780 [2024-11-02 14:51:55.540222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.780 qpair failed and we were unable to recover it. 00:36:03.780 [2024-11-02 14:51:55.540362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.780 [2024-11-02 14:51:55.540389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.780 qpair failed and we were unable to recover it. 00:36:03.780 [2024-11-02 14:51:55.540540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.780 [2024-11-02 14:51:55.540585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.780 qpair failed and we were unable to recover it. 00:36:03.780 [2024-11-02 14:51:55.540753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.780 [2024-11-02 14:51:55.540790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.780 qpair failed and we were unable to recover it. 00:36:03.780 [2024-11-02 14:51:55.540963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.780 [2024-11-02 14:51:55.540989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.780 qpair failed and we were unable to recover it. 
00:36:03.780 [2024-11-02 14:51:55.541131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.780 [2024-11-02 14:51:55.541160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.780 qpair failed and we were unable to recover it. 00:36:03.780 [2024-11-02 14:51:55.541311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.780 [2024-11-02 14:51:55.541341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.780 qpair failed and we were unable to recover it. 00:36:03.780 [2024-11-02 14:51:55.541486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.780 [2024-11-02 14:51:55.541513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.780 qpair failed and we were unable to recover it. 00:36:03.780 [2024-11-02 14:51:55.541666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.780 [2024-11-02 14:51:55.541708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.780 qpair failed and we were unable to recover it. 00:36:03.780 [2024-11-02 14:51:55.541881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.780 [2024-11-02 14:51:55.541906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.780 qpair failed and we were unable to recover it. 00:36:03.780 [2024-11-02 14:51:55.542072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.780 [2024-11-02 14:51:55.542097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.780 qpair failed and we were unable to recover it. 00:36:03.780 [2024-11-02 14:51:55.542290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.780 [2024-11-02 14:51:55.542325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.780 qpair failed and we were unable to recover it. 00:36:03.780 [2024-11-02 14:51:55.542500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.781 [2024-11-02 14:51:55.542529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.781 qpair failed and we were unable to recover it. 00:36:03.781 [2024-11-02 14:51:55.542690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.781 [2024-11-02 14:51:55.542715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.781 qpair failed and we were unable to recover it. 00:36:03.781 [2024-11-02 14:51:55.542841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.781 [2024-11-02 14:51:55.542866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.781 qpair failed and we were unable to recover it. 
00:36:03.781 [2024-11-02 14:51:55.543019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.781 [2024-11-02 14:51:55.543044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.781 qpair failed and we were unable to recover it. 00:36:03.781 [2024-11-02 14:51:55.543183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.781 [2024-11-02 14:51:55.543212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.781 qpair failed and we were unable to recover it. 00:36:03.781 [2024-11-02 14:51:55.543358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.781 [2024-11-02 14:51:55.543384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.781 qpair failed and we were unable to recover it. 00:36:03.781 [2024-11-02 14:51:55.543516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.781 [2024-11-02 14:51:55.543541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.781 qpair failed and we were unable to recover it. 00:36:03.781 [2024-11-02 14:51:55.543728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.781 [2024-11-02 14:51:55.543754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.781 qpair failed and we were unable to recover it. 00:36:03.781 [2024-11-02 14:51:55.543947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.781 [2024-11-02 14:51:55.543975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.781 qpair failed and we were unable to recover it. 00:36:03.781 [2024-11-02 14:51:55.544117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.781 [2024-11-02 14:51:55.544145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.781 qpair failed and we were unable to recover it. 00:36:03.781 [2024-11-02 14:51:55.544297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.781 [2024-11-02 14:51:55.544324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.781 qpair failed and we were unable to recover it. 00:36:03.781 [2024-11-02 14:51:55.544508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.781 [2024-11-02 14:51:55.544537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.781 qpair failed and we were unable to recover it. 00:36:03.781 [2024-11-02 14:51:55.544685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.781 [2024-11-02 14:51:55.544712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.781 qpair failed and we were unable to recover it. 
00:36:03.781 [2024-11-02 14:51:55.544870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.781 [2024-11-02 14:51:55.544896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.781 qpair failed and we were unable to recover it. 00:36:03.781 [2024-11-02 14:51:55.545082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.781 [2024-11-02 14:51:55.545110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.781 qpair failed and we were unable to recover it. 00:36:03.781 [2024-11-02 14:51:55.545252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.781 [2024-11-02 14:51:55.545290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.781 qpair failed and we were unable to recover it. 00:36:03.781 [2024-11-02 14:51:55.545464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.781 [2024-11-02 14:51:55.545490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.781 qpair failed and we were unable to recover it. 00:36:03.781 [2024-11-02 14:51:55.545617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.781 [2024-11-02 14:51:55.545659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.781 qpair failed and we were unable to recover it. 00:36:03.781 [2024-11-02 14:51:55.545820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.781 [2024-11-02 14:51:55.545848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.781 qpair failed and we were unable to recover it. 00:36:03.781 [2024-11-02 14:51:55.546020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.781 [2024-11-02 14:51:55.546046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.781 qpair failed and we were unable to recover it. 00:36:03.781 [2024-11-02 14:51:55.546211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.781 [2024-11-02 14:51:55.546239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.781 qpair failed and we were unable to recover it. 00:36:03.781 [2024-11-02 14:51:55.546389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.781 [2024-11-02 14:51:55.546418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.781 qpair failed and we were unable to recover it. 00:36:03.781 [2024-11-02 14:51:55.546560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.781 [2024-11-02 14:51:55.546586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.781 qpair failed and we were unable to recover it. 
00:36:03.781 [2024-11-02 14:51:55.546735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.781 [2024-11-02 14:51:55.546776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.781 qpair failed and we were unable to recover it. 00:36:03.781 [2024-11-02 14:51:55.546935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.781 [2024-11-02 14:51:55.546965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.781 qpair failed and we were unable to recover it. 00:36:03.781 [2024-11-02 14:51:55.547201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.781 [2024-11-02 14:51:55.547227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.781 qpair failed and we were unable to recover it. 00:36:03.781 [2024-11-02 14:51:55.547402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.781 [2024-11-02 14:51:55.547436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.781 qpair failed and we were unable to recover it. 00:36:03.781 [2024-11-02 14:51:55.547625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.781 [2024-11-02 14:51:55.547651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.781 qpair failed and we were unable to recover it. 00:36:03.781 [2024-11-02 14:51:55.547778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.781 [2024-11-02 14:51:55.547805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.781 qpair failed and we were unable to recover it. 00:36:03.781 [2024-11-02 14:51:55.547954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.781 [2024-11-02 14:51:55.547999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.781 qpair failed and we were unable to recover it. 00:36:03.781 [2024-11-02 14:51:55.548170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.781 [2024-11-02 14:51:55.548196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.781 qpair failed and we were unable to recover it. 00:36:03.781 [2024-11-02 14:51:55.548352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.781 [2024-11-02 14:51:55.548378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.781 qpair failed and we were unable to recover it. 00:36:03.781 [2024-11-02 14:51:55.548549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.781 [2024-11-02 14:51:55.548577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.781 qpair failed and we were unable to recover it. 
00:36:03.781 [2024-11-02 14:51:55.548768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.781 [2024-11-02 14:51:55.548797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.781 qpair failed and we were unable to recover it. 00:36:03.781 [2024-11-02 14:51:55.548968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.781 [2024-11-02 14:51:55.548993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.781 qpair failed and we were unable to recover it. 00:36:03.781 [2024-11-02 14:51:55.549116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.781 [2024-11-02 14:51:55.549156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.781 qpair failed and we were unable to recover it. 00:36:03.781 [2024-11-02 14:51:55.549347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.781 [2024-11-02 14:51:55.549373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.781 qpair failed and we were unable to recover it. 00:36:03.781 [2024-11-02 14:51:55.549525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.781 [2024-11-02 14:51:55.549550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.781 qpair failed and we were unable to recover it. 00:36:03.781 [2024-11-02 14:51:55.549711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.781 [2024-11-02 14:51:55.549740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.781 qpair failed and we were unable to recover it. 00:36:03.782 [2024-11-02 14:51:55.549938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.782 [2024-11-02 14:51:55.549964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.782 qpair failed and we were unable to recover it. 00:36:03.782 [2024-11-02 14:51:55.550110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.782 [2024-11-02 14:51:55.550138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.782 qpair failed and we were unable to recover it. 00:36:03.782 [2024-11-02 14:51:55.550268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.782 [2024-11-02 14:51:55.550297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.782 qpair failed and we were unable to recover it. 00:36:03.782 [2024-11-02 14:51:55.550437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.782 [2024-11-02 14:51:55.550463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.782 qpair failed and we were unable to recover it. 
00:36:03.782 [2024-11-02 14:51:55.550693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.782 [2024-11-02 14:51:55.550718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.782 qpair failed and we were unable to recover it. 00:36:03.782 [2024-11-02 14:51:55.550890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.782 [2024-11-02 14:51:55.550916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.782 qpair failed and we were unable to recover it. 00:36:03.782 [2024-11-02 14:51:55.551042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.782 [2024-11-02 14:51:55.551068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.782 qpair failed and we were unable to recover it. 00:36:03.782 [2024-11-02 14:51:55.551192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.782 [2024-11-02 14:51:55.551217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.782 qpair failed and we were unable to recover it. 00:36:03.782 [2024-11-02 14:51:55.551368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.782 [2024-11-02 14:51:55.551394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.782 qpair failed and we were unable to recover it. 00:36:03.782 [2024-11-02 14:51:55.551587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.782 [2024-11-02 14:51:55.551614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.782 qpair failed and we were unable to recover it. 00:36:03.782 [2024-11-02 14:51:55.551786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.782 [2024-11-02 14:51:55.551811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.782 qpair failed and we were unable to recover it. 00:36:03.782 [2024-11-02 14:51:55.551964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.782 [2024-11-02 14:51:55.551990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.782 qpair failed and we were unable to recover it. 00:36:03.782 [2024-11-02 14:51:55.552143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.782 [2024-11-02 14:51:55.552169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.782 qpair failed and we were unable to recover it. 00:36:03.782 [2024-11-02 14:51:55.552292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.782 [2024-11-02 14:51:55.552318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.782 qpair failed and we were unable to recover it. 
00:36:03.782 [2024-11-02 14:51:55.552450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.782 [2024-11-02 14:51:55.552491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.782 qpair failed and we were unable to recover it. 00:36:03.782 [2024-11-02 14:51:55.552664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.782 [2024-11-02 14:51:55.552689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.782 qpair failed and we were unable to recover it. 00:36:03.782 [2024-11-02 14:51:55.552835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.782 [2024-11-02 14:51:55.552862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.782 qpair failed and we were unable to recover it. 00:36:03.782 [2024-11-02 14:51:55.552990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.782 [2024-11-02 14:51:55.553016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.782 qpair failed and we were unable to recover it. 00:36:03.782 [2024-11-02 14:51:55.553162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.782 [2024-11-02 14:51:55.553187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.782 qpair failed and we were unable to recover it. 00:36:03.782 [2024-11-02 14:51:55.553330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.782 [2024-11-02 14:51:55.553357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.782 qpair failed and we were unable to recover it. 00:36:03.782 [2024-11-02 14:51:55.553477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.782 [2024-11-02 14:51:55.553503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.782 qpair failed and we were unable to recover it. 00:36:03.782 [2024-11-02 14:51:55.553632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.782 [2024-11-02 14:51:55.553658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.782 qpair failed and we were unable to recover it. 00:36:03.782 [2024-11-02 14:51:55.553878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.782 [2024-11-02 14:51:55.553903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.782 qpair failed and we were unable to recover it. 00:36:03.782 [2024-11-02 14:51:55.554068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.782 [2024-11-02 14:51:55.554095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.782 qpair failed and we were unable to recover it. 
00:36:03.782 [2024-11-02 14:51:55.554281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.782 [2024-11-02 14:51:55.554310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.782 qpair failed and we were unable to recover it. 00:36:03.782 [2024-11-02 14:51:55.554476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.782 [2024-11-02 14:51:55.554501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.782 qpair failed and we were unable to recover it. 00:36:03.782 [2024-11-02 14:51:55.554631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.782 [2024-11-02 14:51:55.554657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.782 qpair failed and we were unable to recover it. 00:36:03.782 [2024-11-02 14:51:55.554889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.782 [2024-11-02 14:51:55.554931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.782 qpair failed and we were unable to recover it. 00:36:03.782 [2024-11-02 14:51:55.555128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.782 [2024-11-02 14:51:55.555157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.782 qpair failed and we were unable to recover it. 00:36:03.782 [2024-11-02 14:51:55.555342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.782 [2024-11-02 14:51:55.555370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.782 qpair failed and we were unable to recover it. 00:36:03.782 [2024-11-02 14:51:55.555608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.782 [2024-11-02 14:51:55.555635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.782 qpair failed and we were unable to recover it. 00:36:03.782 [2024-11-02 14:51:55.555826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.782 [2024-11-02 14:51:55.555852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.782 qpair failed and we were unable to recover it. 00:36:03.782 [2024-11-02 14:51:55.555972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.782 [2024-11-02 14:51:55.555999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.782 qpair failed and we were unable to recover it. 00:36:03.782 [2024-11-02 14:51:55.556151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.782 [2024-11-02 14:51:55.556177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.782 qpair failed and we were unable to recover it. 
00:36:03.782 [2024-11-02 14:51:55.556360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.782 [2024-11-02 14:51:55.556386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.782 qpair failed and we were unable to recover it. 00:36:03.782 [2024-11-02 14:51:55.556555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.782 [2024-11-02 14:51:55.556582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.782 qpair failed and we were unable to recover it. 00:36:03.782 [2024-11-02 14:51:55.556768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.782 [2024-11-02 14:51:55.556793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.782 qpair failed and we were unable to recover it. 00:36:03.782 [2024-11-02 14:51:55.556932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.782 [2024-11-02 14:51:55.556957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.782 qpair failed and we were unable to recover it. 00:36:03.782 [2024-11-02 14:51:55.557097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.782 [2024-11-02 14:51:55.557123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.782 qpair failed and we were unable to recover it. 00:36:03.783 [2024-11-02 14:51:55.557318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.783 [2024-11-02 14:51:55.557345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.783 qpair failed and we were unable to recover it. 00:36:03.783 [2024-11-02 14:51:55.557481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.783 [2024-11-02 14:51:55.557507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.783 qpair failed and we were unable to recover it. 00:36:03.783 [2024-11-02 14:51:55.557649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.783 [2024-11-02 14:51:55.557674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.783 qpair failed and we were unable to recover it. 00:36:03.783 [2024-11-02 14:51:55.557847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.783 [2024-11-02 14:51:55.557873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.783 qpair failed and we were unable to recover it. 00:36:03.783 [2024-11-02 14:51:55.558016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.783 [2024-11-02 14:51:55.558041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.783 qpair failed and we were unable to recover it. 
00:36:03.783 [2024-11-02 14:51:55.558173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.783 [2024-11-02 14:51:55.558199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.783 qpair failed and we were unable to recover it. 00:36:03.783 [2024-11-02 14:51:55.558346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.783 [2024-11-02 14:51:55.558371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.783 qpair failed and we were unable to recover it. 00:36:03.783 [2024-11-02 14:51:55.558546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.783 [2024-11-02 14:51:55.558571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.783 qpair failed and we were unable to recover it. 00:36:03.783 [2024-11-02 14:51:55.558701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.783 [2024-11-02 14:51:55.558727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.783 qpair failed and we were unable to recover it. 00:36:03.783 [2024-11-02 14:51:55.558892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.783 [2024-11-02 14:51:55.558917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.783 qpair failed and we were unable to recover it. 00:36:03.783 [2024-11-02 14:51:55.559046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.783 [2024-11-02 14:51:55.559071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.783 qpair failed and we were unable to recover it. 00:36:03.783 [2024-11-02 14:51:55.559194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.783 [2024-11-02 14:51:55.559220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.783 qpair failed and we were unable to recover it. 00:36:03.783 [2024-11-02 14:51:55.559404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.783 [2024-11-02 14:51:55.559430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.783 qpair failed and we were unable to recover it. 00:36:03.783 [2024-11-02 14:51:55.559557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.783 [2024-11-02 14:51:55.559582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.783 qpair failed and we were unable to recover it. 00:36:03.783 [2024-11-02 14:51:55.559710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.783 [2024-11-02 14:51:55.559737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.783 qpair failed and we were unable to recover it. 
00:36:03.783 [2024-11-02 14:51:55.559883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.783 [2024-11-02 14:51:55.559908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.783 qpair failed and we were unable to recover it. 00:36:03.783 [2024-11-02 14:51:55.560064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.783 [2024-11-02 14:51:55.560089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.783 qpair failed and we were unable to recover it. 00:36:03.783 [2024-11-02 14:51:55.560238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.783 [2024-11-02 14:51:55.560273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.783 qpair failed and we were unable to recover it. 00:36:03.783 [2024-11-02 14:51:55.560424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.783 [2024-11-02 14:51:55.560449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.783 qpair failed and we were unable to recover it. 00:36:03.783 [2024-11-02 14:51:55.560622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.783 [2024-11-02 14:51:55.560647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.783 qpair failed and we were unable to recover it. 00:36:03.783 [2024-11-02 14:51:55.560757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.783 [2024-11-02 14:51:55.560782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.783 qpair failed and we were unable to recover it. 00:36:03.783 [2024-11-02 14:51:55.560931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.783 [2024-11-02 14:51:55.560957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.783 qpair failed and we were unable to recover it. 00:36:03.783 [2024-11-02 14:51:55.561122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.783 [2024-11-02 14:51:55.561147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.783 qpair failed and we were unable to recover it. 00:36:03.783 [2024-11-02 14:51:55.561278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.783 [2024-11-02 14:51:55.561304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.783 qpair failed and we were unable to recover it. 00:36:03.783 [2024-11-02 14:51:55.561443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.783 [2024-11-02 14:51:55.561469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.783 qpair failed and we were unable to recover it. 
00:36:03.783 [2024-11-02 14:51:55.561621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.783 [2024-11-02 14:51:55.561646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.783 qpair failed and we were unable to recover it. 00:36:03.783 [2024-11-02 14:51:55.561825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.783 [2024-11-02 14:51:55.561850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.783 qpair failed and we were unable to recover it. 00:36:03.783 [2024-11-02 14:51:55.561966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.783 [2024-11-02 14:51:55.561992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.783 qpair failed and we were unable to recover it. 00:36:03.783 [2024-11-02 14:51:55.562164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.783 [2024-11-02 14:51:55.562190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.783 qpair failed and we were unable to recover it. 00:36:03.783 [2024-11-02 14:51:55.562340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.783 [2024-11-02 14:51:55.562368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.783 qpair failed and we were unable to recover it. 00:36:03.783 [2024-11-02 14:51:55.562531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.783 [2024-11-02 14:51:55.562570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.783 qpair failed and we were unable to recover it. 00:36:03.783 [2024-11-02 14:51:55.562700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.783 [2024-11-02 14:51:55.562729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.783 qpair failed and we were unable to recover it. 00:36:03.783 [2024-11-02 14:51:55.562876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.783 [2024-11-02 14:51:55.562902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.783 qpair failed and we were unable to recover it. 00:36:03.783 [2024-11-02 14:51:55.563023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.784 [2024-11-02 14:51:55.563050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.784 qpair failed and we were unable to recover it. 00:36:03.784 [2024-11-02 14:51:55.563173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.784 [2024-11-02 14:51:55.563200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.784 qpair failed and we were unable to recover it. 
00:36:03.784 [2024-11-02 14:51:55.563357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.784 [2024-11-02 14:51:55.563385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.784 qpair failed and we were unable to recover it. 00:36:03.784 [2024-11-02 14:51:55.563506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.784 [2024-11-02 14:51:55.563533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.784 qpair failed and we were unable to recover it. 00:36:03.784 [2024-11-02 14:51:55.563713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.784 [2024-11-02 14:51:55.563739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.784 qpair failed and we were unable to recover it. 00:36:03.784 [2024-11-02 14:51:55.563871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.784 [2024-11-02 14:51:55.563897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.784 qpair failed and we were unable to recover it. 00:36:03.784 [2024-11-02 14:51:55.564019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.784 [2024-11-02 14:51:55.564047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.784 qpair failed and we were unable to recover it. 00:36:03.784 [2024-11-02 14:51:55.564196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.784 [2024-11-02 14:51:55.564222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.784 qpair failed and we were unable to recover it. 00:36:03.784 [2024-11-02 14:51:55.564351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.784 [2024-11-02 14:51:55.564378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.784 qpair failed and we were unable to recover it. 00:36:03.784 [2024-11-02 14:51:55.564525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.784 [2024-11-02 14:51:55.564551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.784 qpair failed and we were unable to recover it. 00:36:03.784 [2024-11-02 14:51:55.564700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.784 [2024-11-02 14:51:55.564725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.784 qpair failed and we were unable to recover it. 00:36:03.784 [2024-11-02 14:51:55.564899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.784 [2024-11-02 14:51:55.564925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.784 qpair failed and we were unable to recover it. 
00:36:03.784 [2024-11-02 14:51:55.565095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.784 [2024-11-02 14:51:55.565121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.784 qpair failed and we were unable to recover it. 00:36:03.784 [2024-11-02 14:51:55.565247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.784 [2024-11-02 14:51:55.565281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.784 qpair failed and we were unable to recover it. 00:36:03.784 [2024-11-02 14:51:55.565430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.784 [2024-11-02 14:51:55.565456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.784 qpair failed and we were unable to recover it. 00:36:03.784 [2024-11-02 14:51:55.565611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.784 [2024-11-02 14:51:55.565637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.784 qpair failed and we were unable to recover it. 00:36:03.784 [2024-11-02 14:51:55.565789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.784 [2024-11-02 14:51:55.565815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.784 qpair failed and we were unable to recover it. 00:36:03.784 [2024-11-02 14:51:55.565962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.784 [2024-11-02 14:51:55.565988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.784 qpair failed and we were unable to recover it. 00:36:03.784 [2024-11-02 14:51:55.566128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.784 [2024-11-02 14:51:55.566154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.784 qpair failed and we were unable to recover it. 00:36:03.784 [2024-11-02 14:51:55.566282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.784 [2024-11-02 14:51:55.566312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.784 qpair failed and we were unable to recover it. 00:36:03.784 [2024-11-02 14:51:55.566439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.784 [2024-11-02 14:51:55.566465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.784 qpair failed and we were unable to recover it. 00:36:03.784 [2024-11-02 14:51:55.566643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.784 [2024-11-02 14:51:55.566669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.784 qpair failed and we were unable to recover it. 
00:36:03.784 [2024-11-02 14:51:55.566790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.784 [2024-11-02 14:51:55.566815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.784 qpair failed and we were unable to recover it. 00:36:03.784 [2024-11-02 14:51:55.566986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.784 [2024-11-02 14:51:55.567011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.784 qpair failed and we were unable to recover it. 00:36:03.784 [2024-11-02 14:51:55.567194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.784 [2024-11-02 14:51:55.567224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.784 qpair failed and we were unable to recover it. 00:36:03.784 [2024-11-02 14:51:55.567375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.784 [2024-11-02 14:51:55.567414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.784 qpair failed and we were unable to recover it. 00:36:03.784 [2024-11-02 14:51:55.567571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.784 [2024-11-02 14:51:55.567599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.784 qpair failed and we were unable to recover it. 00:36:03.784 [2024-11-02 14:51:55.567778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.784 [2024-11-02 14:51:55.567804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.784 qpair failed and we were unable to recover it. 00:36:03.784 [2024-11-02 14:51:55.567955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.784 [2024-11-02 14:51:55.567980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.784 qpair failed and we were unable to recover it. 00:36:03.784 [2024-11-02 14:51:55.568136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.784 [2024-11-02 14:51:55.568162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.784 qpair failed and we were unable to recover it. 00:36:03.784 [2024-11-02 14:51:55.568346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.784 [2024-11-02 14:51:55.568374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.784 qpair failed and we were unable to recover it. 00:36:03.784 [2024-11-02 14:51:55.568488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.784 [2024-11-02 14:51:55.568513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.784 qpair failed and we were unable to recover it. 
00:36:03.784 [2024-11-02 14:51:55.568641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.784 [2024-11-02 14:51:55.568667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.784 qpair failed and we were unable to recover it. 00:36:03.784 [2024-11-02 14:51:55.568835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.784 [2024-11-02 14:51:55.568861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.784 qpair failed and we were unable to recover it. 00:36:03.784 [2024-11-02 14:51:55.569006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.785 [2024-11-02 14:51:55.569032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.785 qpair failed and we were unable to recover it. 00:36:03.785 [2024-11-02 14:51:55.569152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.785 [2024-11-02 14:51:55.569179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.785 qpair failed and we were unable to recover it. 00:36:03.785 [2024-11-02 14:51:55.569330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.785 [2024-11-02 14:51:55.569358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.785 qpair failed and we were unable to recover it. 00:36:03.785 [2024-11-02 14:51:55.569498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.785 [2024-11-02 14:51:55.569526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.785 qpair failed and we were unable to recover it. 00:36:03.785 [2024-11-02 14:51:55.569697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.785 [2024-11-02 14:51:55.569722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.785 qpair failed and we were unable to recover it. 00:36:03.785 [2024-11-02 14:51:55.569874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.785 [2024-11-02 14:51:55.569900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.785 qpair failed and we were unable to recover it. 00:36:03.785 [2024-11-02 14:51:55.570081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.785 [2024-11-02 14:51:55.570107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.785 qpair failed and we were unable to recover it. 00:36:03.785 [2024-11-02 14:51:55.570224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.785 [2024-11-02 14:51:55.570250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.785 qpair failed and we were unable to recover it. 
00:36:03.785 [2024-11-02 14:51:55.570417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.785 [2024-11-02 14:51:55.570443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.785 qpair failed and we were unable to recover it. 00:36:03.785 [2024-11-02 14:51:55.570564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.785 [2024-11-02 14:51:55.570589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.785 qpair failed and we were unable to recover it. 00:36:03.785 [2024-11-02 14:51:55.570736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.785 [2024-11-02 14:51:55.570762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.785 qpair failed and we were unable to recover it. 00:36:03.785 [2024-11-02 14:51:55.570907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.785 [2024-11-02 14:51:55.570932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.785 qpair failed and we were unable to recover it. 00:36:03.785 [2024-11-02 14:51:55.571084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.785 [2024-11-02 14:51:55.571111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.785 qpair failed and we were unable to recover it. 00:36:03.785 [2024-11-02 14:51:55.571243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.785 [2024-11-02 14:51:55.571276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.785 qpair failed and we were unable to recover it. 00:36:03.785 [2024-11-02 14:51:55.571398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.785 [2024-11-02 14:51:55.571424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.785 qpair failed and we were unable to recover it. 00:36:03.785 [2024-11-02 14:51:55.571550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.785 [2024-11-02 14:51:55.571576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.785 qpair failed and we were unable to recover it. 00:36:03.785 [2024-11-02 14:51:55.571741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.785 [2024-11-02 14:51:55.571767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.785 qpair failed and we were unable to recover it. 00:36:03.785 [2024-11-02 14:51:55.571959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.785 [2024-11-02 14:51:55.571997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.785 qpair failed and we were unable to recover it. 
00:36:03.785 [2024-11-02 14:51:55.572125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.785 [2024-11-02 14:51:55.572152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.785 qpair failed and we were unable to recover it. 00:36:03.785 [2024-11-02 14:51:55.572296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.785 [2024-11-02 14:51:55.572323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.785 qpair failed and we were unable to recover it. 00:36:03.785 [2024-11-02 14:51:55.572474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.785 [2024-11-02 14:51:55.572499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.785 qpair failed and we were unable to recover it. 00:36:03.785 [2024-11-02 14:51:55.572676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.785 [2024-11-02 14:51:55.572701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.785 qpair failed and we were unable to recover it. 00:36:03.785 [2024-11-02 14:51:55.572822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.785 [2024-11-02 14:51:55.572848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.785 qpair failed and we were unable to recover it. 00:36:03.785 [2024-11-02 14:51:55.573023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.785 [2024-11-02 14:51:55.573048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.785 qpair failed and we were unable to recover it. 00:36:03.785 [2024-11-02 14:51:55.573201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.785 [2024-11-02 14:51:55.573226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.785 qpair failed and we were unable to recover it. 00:36:03.785 [2024-11-02 14:51:55.573401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.785 [2024-11-02 14:51:55.573427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.785 qpair failed and we were unable to recover it. 00:36:03.785 [2024-11-02 14:51:55.573548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.785 [2024-11-02 14:51:55.573573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.785 qpair failed and we were unable to recover it. 00:36:03.785 [2024-11-02 14:51:55.573686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.785 [2024-11-02 14:51:55.573711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.785 qpair failed and we were unable to recover it. 
00:36:03.785 [2024-11-02 14:51:55.573864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.785 [2024-11-02 14:51:55.573889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.785 qpair failed and we were unable to recover it. 00:36:03.785 [2024-11-02 14:51:55.574038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.785 [2024-11-02 14:51:55.574063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.785 qpair failed and we were unable to recover it. 00:36:03.785 [2024-11-02 14:51:55.574192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.785 [2024-11-02 14:51:55.574231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.785 qpair failed and we were unable to recover it. 00:36:03.785 [2024-11-02 14:51:55.574414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.785 [2024-11-02 14:51:55.574441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.785 qpair failed and we were unable to recover it. 00:36:03.785 [2024-11-02 14:51:55.574590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.785 [2024-11-02 14:51:55.574616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.785 qpair failed and we were unable to recover it. 00:36:03.785 [2024-11-02 14:51:55.574765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.785 [2024-11-02 14:51:55.574791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.785 qpair failed and we were unable to recover it. 00:36:03.785 [2024-11-02 14:51:55.574904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.785 [2024-11-02 14:51:55.574929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.785 qpair failed and we were unable to recover it. 00:36:03.785 [2024-11-02 14:51:55.575046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.785 [2024-11-02 14:51:55.575070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.785 qpair failed and we were unable to recover it. 00:36:03.785 [2024-11-02 14:51:55.575219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.785 [2024-11-02 14:51:55.575244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.785 qpair failed and we were unable to recover it. 00:36:03.785 [2024-11-02 14:51:55.575407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.785 [2024-11-02 14:51:55.575433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.785 qpair failed and we were unable to recover it. 
00:36:03.785 [2024-11-02 14:51:55.575554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.785 [2024-11-02 14:51:55.575579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.786 qpair failed and we were unable to recover it. 00:36:03.786 [2024-11-02 14:51:55.575724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.786 [2024-11-02 14:51:55.575748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.786 qpair failed and we were unable to recover it. 00:36:03.786 [2024-11-02 14:51:55.575922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.786 [2024-11-02 14:51:55.575948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.786 qpair failed and we were unable to recover it. 00:36:03.786 [2024-11-02 14:51:55.576093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.786 [2024-11-02 14:51:55.576118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.786 qpair failed and we were unable to recover it. 00:36:03.786 [2024-11-02 14:51:55.576270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.786 [2024-11-02 14:51:55.576296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.786 qpair failed and we were unable to recover it. 00:36:03.786 [2024-11-02 14:51:55.576453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.786 [2024-11-02 14:51:55.576479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.786 qpair failed and we were unable to recover it. 00:36:03.786 [2024-11-02 14:51:55.576622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.786 [2024-11-02 14:51:55.576649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.786 qpair failed and we were unable to recover it. 00:36:03.786 [2024-11-02 14:51:55.576794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.786 [2024-11-02 14:51:55.576819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.786 qpair failed and we were unable to recover it. 00:36:03.786 [2024-11-02 14:51:55.576945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.786 [2024-11-02 14:51:55.576971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.786 qpair failed and we were unable to recover it. 00:36:03.786 [2024-11-02 14:51:55.577147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.786 [2024-11-02 14:51:55.577173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.786 qpair failed and we were unable to recover it. 
00:36:03.786 [2024-11-02 14:51:55.577335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.786 [2024-11-02 14:51:55.577361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.786 qpair failed and we were unable to recover it. 00:36:03.786 [2024-11-02 14:51:55.577512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.786 [2024-11-02 14:51:55.577538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.786 qpair failed and we were unable to recover it. 00:36:03.786 [2024-11-02 14:51:55.577692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.786 [2024-11-02 14:51:55.577718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.786 qpair failed and we were unable to recover it. 00:36:03.786 [2024-11-02 14:51:55.577834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.786 [2024-11-02 14:51:55.577859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.786 qpair failed and we were unable to recover it. 00:36:03.786 [2024-11-02 14:51:55.577985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.786 [2024-11-02 14:51:55.578011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.786 qpair failed and we were unable to recover it. 00:36:03.786 [2024-11-02 14:51:55.578157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.786 [2024-11-02 14:51:55.578182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.786 qpair failed and we were unable to recover it. 00:36:03.786 [2024-11-02 14:51:55.578332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.786 [2024-11-02 14:51:55.578358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.786 qpair failed and we were unable to recover it. 00:36:03.786 [2024-11-02 14:51:55.578506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.786 [2024-11-02 14:51:55.578532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.786 qpair failed and we were unable to recover it. 00:36:03.786 [2024-11-02 14:51:55.578686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.786 [2024-11-02 14:51:55.578711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.786 qpair failed and we were unable to recover it. 00:36:03.786 [2024-11-02 14:51:55.578863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.786 [2024-11-02 14:51:55.578893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.786 qpair failed and we were unable to recover it. 
00:36:03.786 [2024-11-02 14:51:55.579017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.786 [2024-11-02 14:51:55.579042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.786 qpair failed and we were unable to recover it. 00:36:03.786 [2024-11-02 14:51:55.579190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.786 [2024-11-02 14:51:55.579215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.786 qpair failed and we were unable to recover it. 00:36:03.786 [2024-11-02 14:51:55.579421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.786 [2024-11-02 14:51:55.579448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.786 qpair failed and we were unable to recover it. 00:36:03.786 [2024-11-02 14:51:55.579591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.786 [2024-11-02 14:51:55.579616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.786 qpair failed and we were unable to recover it. 00:36:03.786 [2024-11-02 14:51:55.579769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.786 [2024-11-02 14:51:55.579794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.786 qpair failed and we were unable to recover it. 00:36:03.786 [2024-11-02 14:51:55.579917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.786 [2024-11-02 14:51:55.579943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.786 qpair failed and we were unable to recover it. 00:36:03.786 [2024-11-02 14:51:55.580119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.786 [2024-11-02 14:51:55.580144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.786 qpair failed and we were unable to recover it. 00:36:03.786 [2024-11-02 14:51:55.580279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.786 [2024-11-02 14:51:55.580305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.786 qpair failed and we were unable to recover it. 00:36:03.786 [2024-11-02 14:51:55.580458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.786 [2024-11-02 14:51:55.580484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.786 qpair failed and we were unable to recover it. 00:36:03.786 [2024-11-02 14:51:55.580632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.786 [2024-11-02 14:51:55.580657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.786 qpair failed and we were unable to recover it. 
00:36:03.786 [2024-11-02 14:51:55.580783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.786 [2024-11-02 14:51:55.580807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.786 qpair failed and we were unable to recover it. 00:36:03.786 [2024-11-02 14:51:55.580954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.786 [2024-11-02 14:51:55.580979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.786 qpair failed and we were unable to recover it. 00:36:03.786 [2024-11-02 14:51:55.581101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.786 [2024-11-02 14:51:55.581126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.786 qpair failed and we were unable to recover it. 00:36:03.786 [2024-11-02 14:51:55.581278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.786 [2024-11-02 14:51:55.581304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.786 qpair failed and we were unable to recover it. 00:36:03.786 [2024-11-02 14:51:55.581444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.786 [2024-11-02 14:51:55.581470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.786 qpair failed and we were unable to recover it. 00:36:03.786 [2024-11-02 14:51:55.581595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.786 [2024-11-02 14:51:55.581619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.786 qpair failed and we were unable to recover it. 00:36:03.786 [2024-11-02 14:51:55.581738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.786 [2024-11-02 14:51:55.581763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.786 qpair failed and we were unable to recover it. 00:36:03.786 [2024-11-02 14:51:55.581941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.786 [2024-11-02 14:51:55.581967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.786 qpair failed and we were unable to recover it. 00:36:03.786 [2024-11-02 14:51:55.582137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.786 [2024-11-02 14:51:55.582162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.786 qpair failed and we were unable to recover it. 00:36:03.787 [2024-11-02 14:51:55.582339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.787 [2024-11-02 14:51:55.582365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.787 [2024-11-02 14:51:55.582354] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:36:03.787 qpair failed and we were unable to recover it. 00:36:03.787 [2024-11-02 14:51:55.582431] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:03.787 [2024-11-02 14:51:55.582519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.787 [2024-11-02 14:51:55.582545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.787 qpair failed and we were unable to recover it. 00:36:03.787 [2024-11-02 14:51:55.582692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.787 [2024-11-02 14:51:55.582716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.787 qpair failed and we were unable to recover it. 00:36:03.787 [2024-11-02 14:51:55.582889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.787 [2024-11-02 14:51:55.582915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.787 qpair failed and we were unable to recover it. 00:36:03.787 [2024-11-02 14:51:55.583061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.787 [2024-11-02 14:51:55.583087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.787 qpair failed and we were unable to recover it. 00:36:03.787 [2024-11-02 14:51:55.583247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.787 [2024-11-02 14:51:55.583280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.787 qpair failed and we were unable to recover it. 00:36:03.787 [2024-11-02 14:51:55.583440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.787 [2024-11-02 14:51:55.583465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.787 qpair failed and we were unable to recover it. 00:36:03.787 [2024-11-02 14:51:55.583620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.787 [2024-11-02 14:51:55.583645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.787 qpair failed and we were unable to recover it. 00:36:03.787 [2024-11-02 14:51:55.583785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.787 [2024-11-02 14:51:55.583810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.787 qpair failed and we were unable to recover it. 00:36:03.787 [2024-11-02 14:51:55.583960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.787 [2024-11-02 14:51:55.583986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.787 qpair failed and we were unable to recover it. 
00:36:03.787 [2024-11-02 14:51:55.584115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.787 [2024-11-02 14:51:55.584139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.787 qpair failed and we were unable to recover it. 00:36:03.787 [2024-11-02 14:51:55.584269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.787 [2024-11-02 14:51:55.584295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.787 qpair failed and we were unable to recover it. 00:36:03.787 [2024-11-02 14:51:55.584416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.787 [2024-11-02 14:51:55.584441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.787 qpair failed and we were unable to recover it. 00:36:03.787 [2024-11-02 14:51:55.584590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.787 [2024-11-02 14:51:55.584616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.787 qpair failed and we were unable to recover it. 00:36:03.787 [2024-11-02 14:51:55.584766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.787 [2024-11-02 14:51:55.584791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.787 qpair failed and we were unable to recover it. 00:36:03.787 [2024-11-02 14:51:55.584962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.787 [2024-11-02 14:51:55.584987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.787 qpair failed and we were unable to recover it. 00:36:03.787 [2024-11-02 14:51:55.585163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.787 [2024-11-02 14:51:55.585188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.787 qpair failed and we were unable to recover it. 00:36:03.787 [2024-11-02 14:51:55.585357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.787 [2024-11-02 14:51:55.585384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.787 qpair failed and we were unable to recover it. 00:36:03.787 [2024-11-02 14:51:55.585536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.787 [2024-11-02 14:51:55.585566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.787 qpair failed and we were unable to recover it. 00:36:03.787 [2024-11-02 14:51:55.585729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.787 [2024-11-02 14:51:55.585753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.787 qpair failed and we were unable to recover it. 
00:36:03.787 [2024-11-02 14:51:55.585932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.787 [2024-11-02 14:51:55.585957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.787 qpair failed and we were unable to recover it. 00:36:03.787 [2024-11-02 14:51:55.586087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.787 [2024-11-02 14:51:55.586112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.787 qpair failed and we were unable to recover it. 00:36:03.787 [2024-11-02 14:51:55.586284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.787 [2024-11-02 14:51:55.586310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.787 qpair failed and we were unable to recover it. 00:36:03.787 [2024-11-02 14:51:55.586485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.787 [2024-11-02 14:51:55.586510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.787 qpair failed and we were unable to recover it. 00:36:03.787 [2024-11-02 14:51:55.586663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.787 [2024-11-02 14:51:55.586689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.787 qpair failed and we were unable to recover it. 00:36:03.787 [2024-11-02 14:51:55.586821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.787 [2024-11-02 14:51:55.586846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.787 qpair failed and we were unable to recover it. 00:36:03.787 [2024-11-02 14:51:55.586965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.787 [2024-11-02 14:51:55.586990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.787 qpair failed and we were unable to recover it. 00:36:03.787 [2024-11-02 14:51:55.587110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.787 [2024-11-02 14:51:55.587137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.787 qpair failed and we were unable to recover it. 00:36:03.787 [2024-11-02 14:51:55.587284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.787 [2024-11-02 14:51:55.587310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.787 qpair failed and we were unable to recover it. 00:36:03.787 [2024-11-02 14:51:55.587426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.787 [2024-11-02 14:51:55.587453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.787 qpair failed and we were unable to recover it. 
00:36:03.787 [2024-11-02 14:51:55.587577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.787 [2024-11-02 14:51:55.587603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.787 qpair failed and we were unable to recover it. 00:36:03.787 [2024-11-02 14:51:55.587772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.788 [2024-11-02 14:51:55.587797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.788 qpair failed and we were unable to recover it. 00:36:03.788 [2024-11-02 14:51:55.587948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.788 [2024-11-02 14:51:55.587977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.788 qpair failed and we were unable to recover it. 00:36:03.788 [2024-11-02 14:51:55.588092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.788 [2024-11-02 14:51:55.588119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.788 qpair failed and we were unable to recover it. 00:36:03.788 [2024-11-02 14:51:55.588233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.788 [2024-11-02 14:51:55.588271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.788 qpair failed and we were unable to recover it. 00:36:03.788 [2024-11-02 14:51:55.588420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.788 [2024-11-02 14:51:55.588446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.788 qpair failed and we were unable to recover it. 00:36:03.788 [2024-11-02 14:51:55.588591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.788 [2024-11-02 14:51:55.588616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.788 qpair failed and we were unable to recover it. 00:36:03.788 [2024-11-02 14:51:55.588762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.788 [2024-11-02 14:51:55.588787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.788 qpair failed and we were unable to recover it. 00:36:03.788 [2024-11-02 14:51:55.588970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.788 [2024-11-02 14:51:55.588996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.788 qpair failed and we were unable to recover it. 00:36:03.788 [2024-11-02 14:51:55.589133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.788 [2024-11-02 14:51:55.589158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.788 qpair failed and we were unable to recover it. 
00:36:03.788 [2024-11-02 14:51:55.589318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.788 [2024-11-02 14:51:55.589344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.788 qpair failed and we were unable to recover it. 00:36:03.788 [2024-11-02 14:51:55.589509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.788 [2024-11-02 14:51:55.589535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.788 qpair failed and we were unable to recover it. 00:36:03.788 [2024-11-02 14:51:55.589691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.788 [2024-11-02 14:51:55.589718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.788 qpair failed and we were unable to recover it. 00:36:03.788 [2024-11-02 14:51:55.589892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.788 [2024-11-02 14:51:55.589917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.788 qpair failed and we were unable to recover it. 00:36:03.788 [2024-11-02 14:51:55.590084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.788 [2024-11-02 14:51:55.590108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.788 qpair failed and we were unable to recover it. 00:36:03.788 [2024-11-02 14:51:55.590228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.788 [2024-11-02 14:51:55.590260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.788 qpair failed and we were unable to recover it. 00:36:03.788 [2024-11-02 14:51:55.590416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.788 [2024-11-02 14:51:55.590442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.788 qpair failed and we were unable to recover it. 00:36:03.788 [2024-11-02 14:51:55.590614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.788 [2024-11-02 14:51:55.590639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.788 qpair failed and we were unable to recover it. 00:36:03.788 [2024-11-02 14:51:55.590768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.788 [2024-11-02 14:51:55.590795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.788 qpair failed and we were unable to recover it. 00:36:03.788 [2024-11-02 14:51:55.590941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.788 [2024-11-02 14:51:55.590966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.788 qpair failed and we were unable to recover it. 
00:36:03.788 [2024-11-02 14:51:55.591109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.788 [2024-11-02 14:51:55.591135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.788 qpair failed and we were unable to recover it. 00:36:03.788 [2024-11-02 14:51:55.591281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.788 [2024-11-02 14:51:55.591307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.788 qpair failed and we were unable to recover it. 00:36:03.788 [2024-11-02 14:51:55.591449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.788 [2024-11-02 14:51:55.591476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.788 qpair failed and we were unable to recover it. 00:36:03.788 [2024-11-02 14:51:55.591639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.788 [2024-11-02 14:51:55.591664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.788 qpair failed and we were unable to recover it. 00:36:03.788 [2024-11-02 14:51:55.591788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.788 [2024-11-02 14:51:55.591814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.788 qpair failed and we were unable to recover it. 00:36:03.788 [2024-11-02 14:51:55.591995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.788 [2024-11-02 14:51:55.592020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.788 qpair failed and we were unable to recover it. 00:36:03.788 [2024-11-02 14:51:55.592136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.788 [2024-11-02 14:51:55.592160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.788 qpair failed and we were unable to recover it. 00:36:03.788 [2024-11-02 14:51:55.592286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.788 [2024-11-02 14:51:55.592312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.788 qpair failed and we were unable to recover it. 00:36:03.788 [2024-11-02 14:51:55.592440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.788 [2024-11-02 14:51:55.592464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.788 qpair failed and we were unable to recover it. 00:36:03.788 [2024-11-02 14:51:55.592645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.788 [2024-11-02 14:51:55.592670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.788 qpair failed and we were unable to recover it. 
00:36:03.788 [2024-11-02 14:51:55.592800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.788 [2024-11-02 14:51:55.592826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.788 qpair failed and we were unable to recover it. 00:36:03.788 [2024-11-02 14:51:55.592939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.788 [2024-11-02 14:51:55.592965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.788 qpair failed and we were unable to recover it. 00:36:03.788 [2024-11-02 14:51:55.593117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.788 [2024-11-02 14:51:55.593142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.788 qpair failed and we were unable to recover it. 00:36:03.788 [2024-11-02 14:51:55.593316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.788 [2024-11-02 14:51:55.593342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.788 qpair failed and we were unable to recover it. 00:36:03.788 [2024-11-02 14:51:55.593494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.788 [2024-11-02 14:51:55.593519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.788 qpair failed and we were unable to recover it. 00:36:03.788 [2024-11-02 14:51:55.593713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.788 [2024-11-02 14:51:55.593737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.788 qpair failed and we were unable to recover it. 00:36:03.788 [2024-11-02 14:51:55.593891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.788 [2024-11-02 14:51:55.593917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.788 qpair failed and we were unable to recover it. 00:36:03.789 [2024-11-02 14:51:55.594066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.789 [2024-11-02 14:51:55.594093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.789 qpair failed and we were unable to recover it. 00:36:03.789 [2024-11-02 14:51:55.594243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.789 [2024-11-02 14:51:55.594274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.789 qpair failed and we were unable to recover it. 00:36:03.789 [2024-11-02 14:51:55.594425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.789 [2024-11-02 14:51:55.594449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.789 qpair failed and we were unable to recover it. 
00:36:03.789 [2024-11-02 14:51:55.594621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.789 [2024-11-02 14:51:55.594646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.789 qpair failed and we were unable to recover it. 00:36:03.789 [2024-11-02 14:51:55.594768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.789 [2024-11-02 14:51:55.594793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.789 qpair failed and we were unable to recover it. 00:36:03.789 [2024-11-02 14:51:55.594919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.789 [2024-11-02 14:51:55.594945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.789 qpair failed and we were unable to recover it. 00:36:03.789 [2024-11-02 14:51:55.595114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.789 [2024-11-02 14:51:55.595140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.789 qpair failed and we were unable to recover it. 00:36:03.789 [2024-11-02 14:51:55.595300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.789 [2024-11-02 14:51:55.595327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.789 qpair failed and we were unable to recover it. 00:36:03.789 [2024-11-02 14:51:55.595452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.789 [2024-11-02 14:51:55.595478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.789 qpair failed and we were unable to recover it. 00:36:03.789 [2024-11-02 14:51:55.595627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.789 [2024-11-02 14:51:55.595654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.789 qpair failed and we were unable to recover it. 00:36:03.789 [2024-11-02 14:51:55.595828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.789 [2024-11-02 14:51:55.595853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.789 qpair failed and we were unable to recover it. 00:36:03.789 [2024-11-02 14:51:55.595979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.789 [2024-11-02 14:51:55.596004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.789 qpair failed and we were unable to recover it. 00:36:03.789 [2024-11-02 14:51:55.596157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.789 [2024-11-02 14:51:55.596183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.789 qpair failed and we were unable to recover it. 
00:36:03.789 [2024-11-02 14:51:55.596307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.789 [2024-11-02 14:51:55.596333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.789 qpair failed and we were unable to recover it. 00:36:03.789 [2024-11-02 14:51:55.596489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.789 [2024-11-02 14:51:55.596514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.789 qpair failed and we were unable to recover it. 00:36:03.789 [2024-11-02 14:51:55.596680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.789 [2024-11-02 14:51:55.596706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.789 qpair failed and we were unable to recover it. 00:36:03.789 [2024-11-02 14:51:55.596850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.789 [2024-11-02 14:51:55.596875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.789 qpair failed and we were unable to recover it. 00:36:03.789 [2024-11-02 14:51:55.597021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.789 [2024-11-02 14:51:55.597046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.789 qpair failed and we were unable to recover it. 00:36:03.789 [2024-11-02 14:51:55.597157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.789 [2024-11-02 14:51:55.597182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.789 qpair failed and we were unable to recover it. 00:36:03.789 [2024-11-02 14:51:55.597331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.789 [2024-11-02 14:51:55.597356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.789 qpair failed and we were unable to recover it. 00:36:03.789 [2024-11-02 14:51:55.597468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.789 [2024-11-02 14:51:55.597492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.789 qpair failed and we were unable to recover it. 00:36:03.789 [2024-11-02 14:51:55.597647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.789 [2024-11-02 14:51:55.597673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.789 qpair failed and we were unable to recover it. 00:36:03.789 [2024-11-02 14:51:55.597818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.789 [2024-11-02 14:51:55.597843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.789 qpair failed and we were unable to recover it. 
00:36:03.789 [2024-11-02 14:51:55.598004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.789 [2024-11-02 14:51:55.598030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.789 qpair failed and we were unable to recover it. 00:36:03.789 [2024-11-02 14:51:55.598140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.789 [2024-11-02 14:51:55.598165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.789 qpair failed and we were unable to recover it. 00:36:03.789 [2024-11-02 14:51:55.598317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.789 [2024-11-02 14:51:55.598343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.789 qpair failed and we were unable to recover it. 00:36:03.789 [2024-11-02 14:51:55.598484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.789 [2024-11-02 14:51:55.598510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.789 qpair failed and we were unable to recover it. 00:36:03.789 [2024-11-02 14:51:55.598664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.789 [2024-11-02 14:51:55.598688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.789 qpair failed and we were unable to recover it. 00:36:03.789 [2024-11-02 14:51:55.598838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.789 [2024-11-02 14:51:55.598863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.789 qpair failed and we were unable to recover it. 00:36:03.789 [2024-11-02 14:51:55.598988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.789 [2024-11-02 14:51:55.599014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.789 qpair failed and we were unable to recover it. 00:36:03.789 [2024-11-02 14:51:55.599199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.789 [2024-11-02 14:51:55.599225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.789 qpair failed and we were unable to recover it. 00:36:03.789 [2024-11-02 14:51:55.599391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.789 [2024-11-02 14:51:55.599416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.789 qpair failed and we were unable to recover it. 00:36:03.789 [2024-11-02 14:51:55.599584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.789 [2024-11-02 14:51:55.599614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.789 qpair failed and we were unable to recover it. 
00:36:03.789 [2024-11-02 14:51:55.599791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.789 [2024-11-02 14:51:55.599816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.789 qpair failed and we were unable to recover it. 00:36:03.789 [2024-11-02 14:51:55.599963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.789 [2024-11-02 14:51:55.599987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.790 qpair failed and we were unable to recover it. 00:36:03.790 [2024-11-02 14:51:55.600115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.790 [2024-11-02 14:51:55.600140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.790 qpair failed and we were unable to recover it. 00:36:03.790 [2024-11-02 14:51:55.600322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.790 [2024-11-02 14:51:55.600348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.790 qpair failed and we were unable to recover it. 00:36:03.790 [2024-11-02 14:51:55.600472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.790 [2024-11-02 14:51:55.600498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.790 qpair failed and we were unable to recover it. 00:36:03.790 [2024-11-02 14:51:55.600651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.790 [2024-11-02 14:51:55.600676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.790 qpair failed and we were unable to recover it. 00:36:03.790 [2024-11-02 14:51:55.600822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.790 [2024-11-02 14:51:55.600847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.790 qpair failed and we were unable to recover it. 00:36:03.790 [2024-11-02 14:51:55.600987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.790 [2024-11-02 14:51:55.601011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.790 qpair failed and we were unable to recover it. 00:36:03.790 [2024-11-02 14:51:55.601159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.790 [2024-11-02 14:51:55.601184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.790 qpair failed and we were unable to recover it. 00:36:03.790 [2024-11-02 14:51:55.601314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.790 [2024-11-02 14:51:55.601340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.790 qpair failed and we were unable to recover it. 
00:36:03.790 [2024-11-02 14:51:55.601519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.790 [2024-11-02 14:51:55.601543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.790 qpair failed and we were unable to recover it. 00:36:03.790 [2024-11-02 14:51:55.601690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.790 [2024-11-02 14:51:55.601715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.790 qpair failed and we were unable to recover it. 00:36:03.790 [2024-11-02 14:51:55.601838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.790 [2024-11-02 14:51:55.601863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.790 qpair failed and we were unable to recover it. 00:36:03.790 [2024-11-02 14:51:55.602036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.790 [2024-11-02 14:51:55.602061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.790 qpair failed and we were unable to recover it. 00:36:03.790 [2024-11-02 14:51:55.602213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.790 [2024-11-02 14:51:55.602238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.790 qpair failed and we were unable to recover it. 00:36:03.790 [2024-11-02 14:51:55.602400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.790 [2024-11-02 14:51:55.602426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.790 qpair failed and we were unable to recover it. 00:36:03.790 [2024-11-02 14:51:55.602546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.790 [2024-11-02 14:51:55.602572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.790 qpair failed and we were unable to recover it. 00:36:03.790 [2024-11-02 14:51:55.602758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.790 [2024-11-02 14:51:55.602783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.790 qpair failed and we were unable to recover it. 00:36:03.790 [2024-11-02 14:51:55.602929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.790 [2024-11-02 14:51:55.602953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.790 qpair failed and we were unable to recover it. 00:36:03.790 [2024-11-02 14:51:55.603077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.790 [2024-11-02 14:51:55.603103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.790 qpair failed and we were unable to recover it. 
00:36:03.790 [2024-11-02 14:51:55.603236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.790 [2024-11-02 14:51:55.603273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.790 qpair failed and we were unable to recover it. 00:36:03.790 [2024-11-02 14:51:55.603397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.790 [2024-11-02 14:51:55.603422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.790 qpair failed and we were unable to recover it. 00:36:03.790 [2024-11-02 14:51:55.603545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.790 [2024-11-02 14:51:55.603571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.790 qpair failed and we were unable to recover it. 00:36:03.790 [2024-11-02 14:51:55.603690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.790 [2024-11-02 14:51:55.603715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.790 qpair failed and we were unable to recover it. 00:36:03.790 [2024-11-02 14:51:55.603863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.790 [2024-11-02 14:51:55.603888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.790 qpair failed and we were unable to recover it. 00:36:03.790 [2024-11-02 14:51:55.604046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.790 [2024-11-02 14:51:55.604071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.790 qpair failed and we were unable to recover it. 00:36:03.790 [2024-11-02 14:51:55.604226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.790 [2024-11-02 14:51:55.604252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.790 qpair failed and we were unable to recover it. 00:36:03.790 [2024-11-02 14:51:55.604407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.790 [2024-11-02 14:51:55.604432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.790 qpair failed and we were unable to recover it. 00:36:03.790 [2024-11-02 14:51:55.604559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.790 [2024-11-02 14:51:55.604585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.790 qpair failed and we were unable to recover it. 00:36:03.790 [2024-11-02 14:51:55.604767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.790 [2024-11-02 14:51:55.604806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.790 qpair failed and we were unable to recover it. 
00:36:03.790 [2024-11-02 14:51:55.604991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.790 [2024-11-02 14:51:55.605018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.790 qpair failed and we were unable to recover it. 00:36:03.790 [2024-11-02 14:51:55.605186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.790 [2024-11-02 14:51:55.605212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.790 qpair failed and we were unable to recover it. 00:36:03.790 [2024-11-02 14:51:55.605721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.790 [2024-11-02 14:51:55.605763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.790 qpair failed and we were unable to recover it. 00:36:03.790 [2024-11-02 14:51:55.605962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.790 [2024-11-02 14:51:55.605990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.790 qpair failed and we were unable to recover it. 00:36:03.790 [2024-11-02 14:51:55.606123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.790 [2024-11-02 14:51:55.606151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.790 qpair failed and we were unable to recover it. 00:36:03.790 [2024-11-02 14:51:55.606317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.790 [2024-11-02 14:51:55.606345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.790 qpair failed and we were unable to recover it. 00:36:03.790 [2024-11-02 14:51:55.606504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.790 [2024-11-02 14:51:55.606530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.790 qpair failed and we were unable to recover it. 00:36:03.790 [2024-11-02 14:51:55.606680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.790 [2024-11-02 14:51:55.606707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.790 qpair failed and we were unable to recover it. 00:36:03.790 [2024-11-02 14:51:55.606849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.790 [2024-11-02 14:51:55.606875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.790 qpair failed and we were unable to recover it. 00:36:03.790 [2024-11-02 14:51:55.607022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.790 [2024-11-02 14:51:55.607054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.791 qpair failed and we were unable to recover it. 
00:36:03.791 [2024-11-02 14:51:55.607186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.791 [2024-11-02 14:51:55.607212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.791 qpair failed and we were unable to recover it. 00:36:03.791 [2024-11-02 14:51:55.607370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.791 [2024-11-02 14:51:55.607397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.791 qpair failed and we were unable to recover it. 00:36:03.791 [2024-11-02 14:51:55.607520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.791 [2024-11-02 14:51:55.607549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.791 qpair failed and we were unable to recover it. 00:36:03.791 [2024-11-02 14:51:55.607675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.791 [2024-11-02 14:51:55.607701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.791 qpair failed and we were unable to recover it. 00:36:03.791 [2024-11-02 14:51:55.607848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.791 [2024-11-02 14:51:55.607874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.791 qpair failed and we were unable to recover it. 00:36:03.791 [2024-11-02 14:51:55.608025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.791 [2024-11-02 14:51:55.608052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.791 qpair failed and we were unable to recover it. 00:36:03.791 [2024-11-02 14:51:55.608198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.791 [2024-11-02 14:51:55.608224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.791 qpair failed and we were unable to recover it. 00:36:03.791 [2024-11-02 14:51:55.608354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.791 [2024-11-02 14:51:55.608382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.791 qpair failed and we were unable to recover it. 00:36:03.791 [2024-11-02 14:51:55.608529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.791 [2024-11-02 14:51:55.608559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.791 qpair failed and we were unable to recover it. 00:36:03.791 [2024-11-02 14:51:55.608715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.791 [2024-11-02 14:51:55.608741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.791 qpair failed and we were unable to recover it. 
00:36:03.791 [2024-11-02 14:51:55.608891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.791 [2024-11-02 14:51:55.608916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.791 qpair failed and we were unable to recover it. 00:36:03.791 [2024-11-02 14:51:55.609038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.791 [2024-11-02 14:51:55.609065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.791 qpair failed and we were unable to recover it. 00:36:03.791 [2024-11-02 14:51:55.609212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.791 [2024-11-02 14:51:55.609238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.791 qpair failed and we were unable to recover it. 00:36:03.791 [2024-11-02 14:51:55.609434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.791 [2024-11-02 14:51:55.609463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.791 qpair failed and we were unable to recover it. 00:36:03.791 [2024-11-02 14:51:55.609610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.791 [2024-11-02 14:51:55.609636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.791 qpair failed and we were unable to recover it. 00:36:03.791 [2024-11-02 14:51:55.609759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.791 [2024-11-02 14:51:55.609783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.791 qpair failed and we were unable to recover it. 00:36:03.791 [2024-11-02 14:51:55.609959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.791 [2024-11-02 14:51:55.609985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.791 qpair failed and we were unable to recover it. 00:36:03.791 [2024-11-02 14:51:55.610104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.791 [2024-11-02 14:51:55.610128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.791 qpair failed and we were unable to recover it. 00:36:03.791 [2024-11-02 14:51:55.610263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.791 [2024-11-02 14:51:55.610288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.791 qpair failed and we were unable to recover it. 00:36:03.791 [2024-11-02 14:51:55.610464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.791 [2024-11-02 14:51:55.610490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.791 qpair failed and we were unable to recover it. 
00:36:03.791 [2024-11-02 14:51:55.610674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.791 [2024-11-02 14:51:55.610699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.791 qpair failed and we were unable to recover it. 00:36:03.791 [2024-11-02 14:51:55.610845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.791 [2024-11-02 14:51:55.610869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.791 qpair failed and we were unable to recover it. 00:36:03.791 [2024-11-02 14:51:55.610994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.791 [2024-11-02 14:51:55.611022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.791 qpair failed and we were unable to recover it. 00:36:03.791 [2024-11-02 14:51:55.611173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.791 [2024-11-02 14:51:55.611200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.791 qpair failed and we were unable to recover it. 00:36:03.791 [2024-11-02 14:51:55.611355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.791 [2024-11-02 14:51:55.611383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.791 qpair failed and we were unable to recover it. 00:36:03.791 [2024-11-02 14:51:55.611512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.791 [2024-11-02 14:51:55.611539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.791 qpair failed and we were unable to recover it. 00:36:03.791 [2024-11-02 14:51:55.611677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.791 [2024-11-02 14:51:55.611709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.791 qpair failed and we were unable to recover it. 00:36:03.791 [2024-11-02 14:51:55.611836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.791 [2024-11-02 14:51:55.611862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.791 qpair failed and we were unable to recover it. 00:36:03.791 [2024-11-02 14:51:55.612014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.791 [2024-11-02 14:51:55.612042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.791 qpair failed and we were unable to recover it. 00:36:03.791 [2024-11-02 14:51:55.612154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.791 [2024-11-02 14:51:55.612181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.791 qpair failed and we were unable to recover it. 
00:36:03.791 [2024-11-02 14:51:55.612333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.791 [2024-11-02 14:51:55.612360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.791 qpair failed and we were unable to recover it. 00:36:03.791 [2024-11-02 14:51:55.612506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.791 [2024-11-02 14:51:55.612532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.791 qpair failed and we were unable to recover it. 00:36:03.791 [2024-11-02 14:51:55.612661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.791 [2024-11-02 14:51:55.612687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.791 qpair failed and we were unable to recover it. 00:36:03.791 [2024-11-02 14:51:55.612810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.791 [2024-11-02 14:51:55.612835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.791 qpair failed and we were unable to recover it. 00:36:03.791 [2024-11-02 14:51:55.612987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.791 [2024-11-02 14:51:55.613014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.791 qpair failed and we were unable to recover it. 00:36:03.791 [2024-11-02 14:51:55.613176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.791 [2024-11-02 14:51:55.613203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.791 qpair failed and we were unable to recover it. 00:36:03.791 [2024-11-02 14:51:55.613357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.791 [2024-11-02 14:51:55.613383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.791 qpair failed and we were unable to recover it. 00:36:03.791 [2024-11-02 14:51:55.613555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.791 [2024-11-02 14:51:55.613581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.791 qpair failed and we were unable to recover it. 00:36:03.792 [2024-11-02 14:51:55.613732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.792 [2024-11-02 14:51:55.613758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.792 qpair failed and we were unable to recover it. 00:36:03.792 [2024-11-02 14:51:55.613883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.792 [2024-11-02 14:51:55.613914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.792 qpair failed and we were unable to recover it. 
00:36:03.792 [2024-11-02 14:51:55.614100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.792 [2024-11-02 14:51:55.614127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.792 qpair failed and we were unable to recover it. 00:36:03.792 [2024-11-02 14:51:55.614277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.792 [2024-11-02 14:51:55.614303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.792 qpair failed and we were unable to recover it. 00:36:03.792 [2024-11-02 14:51:55.614457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.792 [2024-11-02 14:51:55.614482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.792 qpair failed and we were unable to recover it. 00:36:03.792 [2024-11-02 14:51:55.614614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.792 [2024-11-02 14:51:55.614639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.792 qpair failed and we were unable to recover it. 00:36:03.792 [2024-11-02 14:51:55.614781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.792 [2024-11-02 14:51:55.614807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.792 qpair failed and we were unable to recover it. 00:36:03.792 [2024-11-02 14:51:55.614956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.792 [2024-11-02 14:51:55.614982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.792 qpair failed and we were unable to recover it. 00:36:03.792 [2024-11-02 14:51:55.615127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.792 [2024-11-02 14:51:55.615153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.792 qpair failed and we were unable to recover it. 00:36:03.792 [2024-11-02 14:51:55.615335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.792 [2024-11-02 14:51:55.615361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.792 qpair failed and we were unable to recover it. 00:36:03.792 [2024-11-02 14:51:55.615486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.792 [2024-11-02 14:51:55.615512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.792 qpair failed and we were unable to recover it. 00:36:03.792 [2024-11-02 14:51:55.615665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.792 [2024-11-02 14:51:55.615690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.792 qpair failed and we were unable to recover it. 
00:36:03.792 [2024-11-02 14:51:55.615808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.792 [2024-11-02 14:51:55.615834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.792 qpair failed and we were unable to recover it. 00:36:03.792 [2024-11-02 14:51:55.615989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.792 [2024-11-02 14:51:55.616014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.792 qpair failed and we were unable to recover it. 00:36:03.792 [2024-11-02 14:51:55.616164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.792 [2024-11-02 14:51:55.616190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.792 qpair failed and we were unable to recover it. 00:36:03.792 [2024-11-02 14:51:55.616348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.792 [2024-11-02 14:51:55.616374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.792 qpair failed and we were unable to recover it. 00:36:03.792 [2024-11-02 14:51:55.616494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.792 [2024-11-02 14:51:55.616520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.792 qpair failed and we were unable to recover it. 00:36:03.792 [2024-11-02 14:51:55.616672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.792 [2024-11-02 14:51:55.616698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.792 qpair failed and we were unable to recover it. 00:36:03.792 [2024-11-02 14:51:55.616847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.792 [2024-11-02 14:51:55.616873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.792 qpair failed and we were unable to recover it. 00:36:03.792 [2024-11-02 14:51:55.617022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.792 [2024-11-02 14:51:55.617048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.792 qpair failed and we were unable to recover it. 00:36:03.792 [2024-11-02 14:51:55.617207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.792 [2024-11-02 14:51:55.617246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.792 qpair failed and we were unable to recover it. 00:36:03.792 [2024-11-02 14:51:55.617416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.792 [2024-11-02 14:51:55.617444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.792 qpair failed and we were unable to recover it. 
00:36:03.792 [2024-11-02 14:51:55.617577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.792 [2024-11-02 14:51:55.617604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.792 qpair failed and we were unable to recover it. 00:36:03.792 [2024-11-02 14:51:55.617759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.792 [2024-11-02 14:51:55.617794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.792 qpair failed and we were unable to recover it. 00:36:03.792 [2024-11-02 14:51:55.617951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.792 [2024-11-02 14:51:55.617978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.792 qpair failed and we were unable to recover it. 00:36:03.792 [2024-11-02 14:51:55.618098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.792 [2024-11-02 14:51:55.618124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.792 qpair failed and we were unable to recover it. 00:36:03.792 [2024-11-02 14:51:55.618301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.792 [2024-11-02 14:51:55.618329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.792 qpair failed and we were unable to recover it. 00:36:03.792 [2024-11-02 14:51:55.618470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.792 [2024-11-02 14:51:55.618496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.792 qpair failed and we were unable to recover it. 00:36:03.792 [2024-11-02 14:51:55.618657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.792 [2024-11-02 14:51:55.618683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.792 qpair failed and we were unable to recover it. 00:36:03.792 [2024-11-02 14:51:55.618836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.792 [2024-11-02 14:51:55.618862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.792 qpair failed and we were unable to recover it. 00:36:03.792 [2024-11-02 14:51:55.619038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.792 [2024-11-02 14:51:55.619065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.792 qpair failed and we were unable to recover it. 00:36:03.792 [2024-11-02 14:51:55.619214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.792 [2024-11-02 14:51:55.619241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.792 qpair failed and we were unable to recover it. 
00:36:03.792 [2024-11-02 14:51:55.619403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.792 [2024-11-02 14:51:55.619431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.792 qpair failed and we were unable to recover it. 00:36:03.792 [2024-11-02 14:51:55.619583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.793 [2024-11-02 14:51:55.619609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.793 qpair failed and we were unable to recover it. 00:36:03.793 [2024-11-02 14:51:55.619735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.793 [2024-11-02 14:51:55.619760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.793 qpair failed and we were unable to recover it. 00:36:03.793 [2024-11-02 14:51:55.619878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.793 [2024-11-02 14:51:55.619903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.793 qpair failed and we were unable to recover it. 00:36:03.793 [2024-11-02 14:51:55.620027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.793 [2024-11-02 14:51:55.620053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.793 qpair failed and we were unable to recover it. 00:36:03.793 [2024-11-02 14:51:55.620167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.793 [2024-11-02 14:51:55.620193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.793 qpair failed and we were unable to recover it. 00:36:03.793 [2024-11-02 14:51:55.620350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.793 [2024-11-02 14:51:55.620377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.793 qpair failed and we were unable to recover it. 00:36:03.793 [2024-11-02 14:51:55.620526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.793 [2024-11-02 14:51:55.620554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.793 qpair failed and we were unable to recover it. 00:36:03.793 [2024-11-02 14:51:55.620690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.793 [2024-11-02 14:51:55.620716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.793 qpair failed and we were unable to recover it. 00:36:03.793 [2024-11-02 14:51:55.620905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.793 [2024-11-02 14:51:55.620936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.793 qpair failed and we were unable to recover it. 
00:36:03.793 [2024-11-02 14:51:55.621066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.793 [2024-11-02 14:51:55.621092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420
00:36:03.793 qpair failed and we were unable to recover it.
00:36:03.793 [the same posix_sock_create "connect() failed, errno = 111" / nvme_tcp_qpair_connect_sock "sock connection error ... qpair failed and we were unable to recover it." pair repeats for every retry from 14:51:55.621 through 14:51:55.657, all against addr=10.0.0.2, port=4420, for tqpair handles 0x7f54c0000b90, 0x7f54c8000b90 and 0x7f54bc000b90; the spdk_app_start NOTICE below was emitted in the middle of this run]
00:36:03.798 [2024-11-02 14:51:55.652395] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4
00:36:03.798 [2024-11-02 14:51:55.657321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.798 [2024-11-02 14:51:55.657348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.798 qpair failed and we were unable to recover it. 00:36:03.798 [2024-11-02 14:51:55.657478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.799 [2024-11-02 14:51:55.657505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.799 qpair failed and we were unable to recover it. 00:36:03.799 [2024-11-02 14:51:55.657668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.799 [2024-11-02 14:51:55.657694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.799 qpair failed and we were unable to recover it. 00:36:03.799 [2024-11-02 14:51:55.657844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.799 [2024-11-02 14:51:55.657875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.799 qpair failed and we were unable to recover it. 00:36:03.799 [2024-11-02 14:51:55.657993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.799 [2024-11-02 14:51:55.658020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.799 qpair failed and we were unable to recover it. 00:36:03.799 [2024-11-02 14:51:55.658205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.799 [2024-11-02 14:51:55.658231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.799 qpair failed and we were unable to recover it. 00:36:03.799 [2024-11-02 14:51:55.658418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.799 [2024-11-02 14:51:55.658459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.799 qpair failed and we were unable to recover it. 00:36:03.799 [2024-11-02 14:51:55.658652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.799 [2024-11-02 14:51:55.658680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.799 qpair failed and we were unable to recover it. 00:36:03.799 [2024-11-02 14:51:55.658830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.799 [2024-11-02 14:51:55.658857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.799 qpair failed and we were unable to recover it. 00:36:03.799 [2024-11-02 14:51:55.658974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.799 [2024-11-02 14:51:55.659002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.799 qpair failed and we were unable to recover it. 
00:36:03.799 [2024-11-02 14:51:55.659118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.799 [2024-11-02 14:51:55.659143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.799 qpair failed and we were unable to recover it. 00:36:03.799 [2024-11-02 14:51:55.659315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.799 [2024-11-02 14:51:55.659342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.799 qpair failed and we were unable to recover it. 00:36:03.799 [2024-11-02 14:51:55.659502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.799 [2024-11-02 14:51:55.659529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.799 qpair failed and we were unable to recover it. 00:36:03.799 [2024-11-02 14:51:55.659712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.799 [2024-11-02 14:51:55.659738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.799 qpair failed and we were unable to recover it. 00:36:03.799 [2024-11-02 14:51:55.659872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.799 [2024-11-02 14:51:55.659899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.799 qpair failed and we were unable to recover it. 00:36:03.799 [2024-11-02 14:51:55.660050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.799 [2024-11-02 14:51:55.660086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.799 qpair failed and we were unable to recover it. 00:36:03.799 [2024-11-02 14:51:55.660240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.799 [2024-11-02 14:51:55.660272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.799 qpair failed and we were unable to recover it. 00:36:03.799 [2024-11-02 14:51:55.660429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.799 [2024-11-02 14:51:55.660455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.799 qpair failed and we were unable to recover it. 00:36:03.799 [2024-11-02 14:51:55.660604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.799 [2024-11-02 14:51:55.660631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.799 qpair failed and we were unable to recover it. 00:36:03.799 [2024-11-02 14:51:55.660782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.799 [2024-11-02 14:51:55.660808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.799 qpair failed and we were unable to recover it. 
00:36:03.799 [2024-11-02 14:51:55.660930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.799 [2024-11-02 14:51:55.660956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.799 qpair failed and we were unable to recover it. 00:36:03.799 [2024-11-02 14:51:55.661102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.799 [2024-11-02 14:51:55.661128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.799 qpair failed and we were unable to recover it. 00:36:03.799 [2024-11-02 14:51:55.661283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.799 [2024-11-02 14:51:55.661321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.799 qpair failed and we were unable to recover it. 00:36:03.799 [2024-11-02 14:51:55.661445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.799 [2024-11-02 14:51:55.661471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.799 qpair failed and we were unable to recover it. 00:36:03.799 [2024-11-02 14:51:55.661652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.799 [2024-11-02 14:51:55.661677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.799 qpair failed and we were unable to recover it. 00:36:03.799 [2024-11-02 14:51:55.661823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.799 [2024-11-02 14:51:55.661849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.799 qpair failed and we were unable to recover it. 00:36:03.799 [2024-11-02 14:51:55.661997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.799 [2024-11-02 14:51:55.662024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.799 qpair failed and we were unable to recover it. 00:36:03.799 [2024-11-02 14:51:55.662174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.799 [2024-11-02 14:51:55.662200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.799 qpair failed and we were unable to recover it. 00:36:03.799 [2024-11-02 14:51:55.662363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.799 [2024-11-02 14:51:55.662390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.799 qpair failed and we were unable to recover it. 00:36:03.799 [2024-11-02 14:51:55.662519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.799 [2024-11-02 14:51:55.662545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:03.799 qpair failed and we were unable to recover it. 
00:36:03.799 [2024-11-02 14:51:55.662702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.799 [2024-11-02 14:51:55.662730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.799 qpair failed and we were unable to recover it. 00:36:03.799 [2024-11-02 14:51:55.662921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.799 [2024-11-02 14:51:55.662948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.799 qpair failed and we were unable to recover it. 00:36:03.799 [2024-11-02 14:51:55.663096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.799 [2024-11-02 14:51:55.663122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.799 qpair failed and we were unable to recover it. 00:36:03.799 [2024-11-02 14:51:55.663296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.799 [2024-11-02 14:51:55.663323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.799 qpair failed and we were unable to recover it. 00:36:03.799 [2024-11-02 14:51:55.663495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.799 [2024-11-02 14:51:55.663521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.799 qpair failed and we were unable to recover it. 00:36:03.799 [2024-11-02 14:51:55.663647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.799 [2024-11-02 14:51:55.663673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.799 qpair failed and we were unable to recover it. 00:36:03.799 [2024-11-02 14:51:55.663852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.799 [2024-11-02 14:51:55.663878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.799 qpair failed and we were unable to recover it. 00:36:03.799 [2024-11-02 14:51:55.664054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.799 [2024-11-02 14:51:55.664080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.799 qpair failed and we were unable to recover it. 00:36:03.799 [2024-11-02 14:51:55.664197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-11-02 14:51:55.664223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.800 qpair failed and we were unable to recover it. 00:36:03.800 [2024-11-02 14:51:55.664380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-11-02 14:51:55.664407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.800 qpair failed and we were unable to recover it. 
00:36:03.800 [2024-11-02 14:51:55.664535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-11-02 14:51:55.664566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.800 qpair failed and we were unable to recover it. 00:36:03.800 [2024-11-02 14:51:55.664713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-11-02 14:51:55.664739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.800 qpair failed and we were unable to recover it. 00:36:03.800 [2024-11-02 14:51:55.664863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-11-02 14:51:55.664889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.800 qpair failed and we were unable to recover it. 00:36:03.800 [2024-11-02 14:51:55.665042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-11-02 14:51:55.665077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.800 qpair failed and we were unable to recover it. 00:36:03.800 [2024-11-02 14:51:55.665206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-11-02 14:51:55.665231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.800 qpair failed and we were unable to recover it. 00:36:03.800 [2024-11-02 14:51:55.665413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-11-02 14:51:55.665439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.800 qpair failed and we were unable to recover it. 00:36:03.800 [2024-11-02 14:51:55.665604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-11-02 14:51:55.665631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.800 qpair failed and we were unable to recover it. 00:36:03.800 [2024-11-02 14:51:55.665806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-11-02 14:51:55.665832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.800 qpair failed and we were unable to recover it. 00:36:03.800 [2024-11-02 14:51:55.665979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-11-02 14:51:55.666004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.800 qpair failed and we were unable to recover it. 00:36:03.800 [2024-11-02 14:51:55.666131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-11-02 14:51:55.666157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.800 qpair failed and we were unable to recover it. 
00:36:03.800 [2024-11-02 14:51:55.666282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-11-02 14:51:55.666317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.800 qpair failed and we were unable to recover it. 00:36:03.800 [2024-11-02 14:51:55.666468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-11-02 14:51:55.666495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.800 qpair failed and we were unable to recover it. 00:36:03.800 [2024-11-02 14:51:55.666663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-11-02 14:51:55.666688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.800 qpair failed and we were unable to recover it. 00:36:03.800 [2024-11-02 14:51:55.666801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-11-02 14:51:55.666826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.800 qpair failed and we were unable to recover it. 00:36:03.800 [2024-11-02 14:51:55.666952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-11-02 14:51:55.666980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.800 qpair failed and we were unable to recover it. 00:36:03.800 [2024-11-02 14:51:55.667160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-11-02 14:51:55.667187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.800 qpair failed and we were unable to recover it. 00:36:03.800 [2024-11-02 14:51:55.667335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-11-02 14:51:55.667361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.800 qpair failed and we were unable to recover it. 00:36:03.800 [2024-11-02 14:51:55.667494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-11-02 14:51:55.667520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.800 qpair failed and we were unable to recover it. 00:36:03.800 [2024-11-02 14:51:55.667649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-11-02 14:51:55.667674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.800 qpair failed and we were unable to recover it. 00:36:03.800 [2024-11-02 14:51:55.667850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-11-02 14:51:55.667875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.800 qpair failed and we were unable to recover it. 
00:36:03.800 [2024-11-02 14:51:55.667989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-11-02 14:51:55.668014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.800 qpair failed and we were unable to recover it. 00:36:03.800 [2024-11-02 14:51:55.668155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-11-02 14:51:55.668181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.800 qpair failed and we were unable to recover it. 00:36:03.800 [2024-11-02 14:51:55.668349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-11-02 14:51:55.668375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.800 qpair failed and we were unable to recover it. 00:36:03.800 [2024-11-02 14:51:55.668498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-11-02 14:51:55.668524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.800 qpair failed and we were unable to recover it. 00:36:03.800 [2024-11-02 14:51:55.668679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-11-02 14:51:55.668704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.800 qpair failed and we were unable to recover it. 00:36:03.800 [2024-11-02 14:51:55.668854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-11-02 14:51:55.668880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.800 qpair failed and we were unable to recover it. 00:36:03.800 [2024-11-02 14:51:55.669027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-11-02 14:51:55.669052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.800 qpair failed and we were unable to recover it. 00:36:03.800 [2024-11-02 14:51:55.669170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-11-02 14:51:55.669197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.800 qpair failed and we were unable to recover it. 00:36:03.800 [2024-11-02 14:51:55.669320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-11-02 14:51:55.669346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.800 qpair failed and we were unable to recover it. 00:36:03.800 [2024-11-02 14:51:55.669495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-11-02 14:51:55.669522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.800 qpair failed and we were unable to recover it. 
00:36:03.800 [2024-11-02 14:51:55.669684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-11-02 14:51:55.669710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.800 qpair failed and we were unable to recover it. 00:36:03.800 [2024-11-02 14:51:55.669861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-11-02 14:51:55.669889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.800 qpair failed and we were unable to recover it. 00:36:03.800 [2024-11-02 14:51:55.670012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-11-02 14:51:55.670038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.800 qpair failed and we were unable to recover it. 00:36:03.800 [2024-11-02 14:51:55.670190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-11-02 14:51:55.670216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.800 qpair failed and we were unable to recover it. 00:36:03.800 [2024-11-02 14:51:55.670360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-11-02 14:51:55.670387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.800 qpair failed and we were unable to recover it. 00:36:03.800 [2024-11-02 14:51:55.670524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-11-02 14:51:55.670551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.800 qpair failed and we were unable to recover it. 00:36:03.800 [2024-11-02 14:51:55.670708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-11-02 14:51:55.670734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.801 qpair failed and we were unable to recover it. 00:36:03.801 [2024-11-02 14:51:55.670856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-11-02 14:51:55.670882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.801 qpair failed and we were unable to recover it. 00:36:03.801 [2024-11-02 14:51:55.671029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-11-02 14:51:55.671054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.801 qpair failed and we were unable to recover it. 00:36:03.801 [2024-11-02 14:51:55.671209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-11-02 14:51:55.671235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.801 qpair failed and we were unable to recover it. 
00:36:03.801 [2024-11-02 14:51:55.671390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-11-02 14:51:55.671432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.801 qpair failed and we were unable to recover it. 00:36:03.801 [2024-11-02 14:51:55.671582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-11-02 14:51:55.671611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.801 qpair failed and we were unable to recover it. 00:36:03.801 [2024-11-02 14:51:55.671785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-11-02 14:51:55.671811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.801 qpair failed and we were unable to recover it. 00:36:03.801 [2024-11-02 14:51:55.671971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-11-02 14:51:55.672014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.801 qpair failed and we were unable to recover it. 00:36:03.801 [2024-11-02 14:51:55.672167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-11-02 14:51:55.672194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.801 qpair failed and we were unable to recover it. 00:36:03.801 [2024-11-02 14:51:55.672378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-11-02 14:51:55.672404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.801 qpair failed and we were unable to recover it. 00:36:03.801 [2024-11-02 14:51:55.672530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-11-02 14:51:55.672569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.801 qpair failed and we were unable to recover it. 00:36:03.801 [2024-11-02 14:51:55.672715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-11-02 14:51:55.672741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.801 qpair failed and we were unable to recover it. 00:36:03.801 [2024-11-02 14:51:55.672891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-11-02 14:51:55.672917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.801 qpair failed and we were unable to recover it. 00:36:03.801 [2024-11-02 14:51:55.673064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-11-02 14:51:55.673090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.801 qpair failed and we were unable to recover it. 
00:36:03.801 [2024-11-02 14:51:55.673269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-11-02 14:51:55.673307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.801 qpair failed and we were unable to recover it. 00:36:03.801 [2024-11-02 14:51:55.673423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-11-02 14:51:55.673449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.801 qpair failed and we were unable to recover it. 00:36:03.801 [2024-11-02 14:51:55.673566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-11-02 14:51:55.673592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.801 qpair failed and we were unable to recover it. 00:36:03.801 [2024-11-02 14:51:55.673717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-11-02 14:51:55.673745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.801 qpair failed and we were unable to recover it. 00:36:03.801 [2024-11-02 14:51:55.673895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-11-02 14:51:55.673921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.801 qpair failed and we were unable to recover it. 00:36:03.801 [2024-11-02 14:51:55.674043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-11-02 14:51:55.674068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.801 qpair failed and we were unable to recover it. 00:36:03.801 [2024-11-02 14:51:55.674208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-11-02 14:51:55.674234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.801 qpair failed and we were unable to recover it. 00:36:03.801 [2024-11-02 14:51:55.674398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-11-02 14:51:55.674424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.801 qpair failed and we were unable to recover it. 00:36:03.801 [2024-11-02 14:51:55.674535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-11-02 14:51:55.674561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.801 qpair failed and we were unable to recover it. 00:36:03.801 [2024-11-02 14:51:55.674714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-11-02 14:51:55.674740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.801 qpair failed and we were unable to recover it. 
00:36:03.801 [2024-11-02 14:51:55.674858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-11-02 14:51:55.674884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.801 qpair failed and we were unable to recover it. 00:36:03.801 [2024-11-02 14:51:55.675033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-11-02 14:51:55.675059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.801 qpair failed and we were unable to recover it. 00:36:03.801 [2024-11-02 14:51:55.675197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-11-02 14:51:55.675223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.801 qpair failed and we were unable to recover it. 00:36:03.801 [2024-11-02 14:51:55.675378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-11-02 14:51:55.675404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.801 qpair failed and we were unable to recover it. 00:36:03.801 [2024-11-02 14:51:55.675563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-11-02 14:51:55.675590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.801 qpair failed and we were unable to recover it. 00:36:03.801 [2024-11-02 14:51:55.675712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-11-02 14:51:55.675738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.801 qpair failed and we were unable to recover it. 00:36:03.801 [2024-11-02 14:51:55.675857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-11-02 14:51:55.675885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.801 qpair failed and we were unable to recover it. 00:36:03.801 [2024-11-02 14:51:55.676032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-11-02 14:51:55.676058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.801 qpair failed and we were unable to recover it. 00:36:03.801 [2024-11-02 14:51:55.676210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-11-02 14:51:55.676236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.801 qpair failed and we were unable to recover it. 00:36:03.802 [2024-11-02 14:51:55.676397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-11-02 14:51:55.676423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.802 qpair failed and we were unable to recover it. 
00:36:03.802 [2024-11-02 14:51:55.676541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-11-02 14:51:55.676575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.802 qpair failed and we were unable to recover it. 00:36:03.802 [2024-11-02 14:51:55.676724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-11-02 14:51:55.676751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.802 qpair failed and we were unable to recover it. 00:36:03.802 [2024-11-02 14:51:55.676882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-11-02 14:51:55.676908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.802 qpair failed and we were unable to recover it. 00:36:03.802 [2024-11-02 14:51:55.677059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-11-02 14:51:55.677085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.802 qpair failed and we were unable to recover it. 00:36:03.802 [2024-11-02 14:51:55.677234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-11-02 14:51:55.677267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.802 qpair failed and we were unable to recover it. 00:36:03.802 [2024-11-02 14:51:55.677425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-11-02 14:51:55.677451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.802 qpair failed and we were unable to recover it. 00:36:03.802 [2024-11-02 14:51:55.677619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-11-02 14:51:55.677646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.802 qpair failed and we were unable to recover it. 00:36:03.802 [2024-11-02 14:51:55.677797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-11-02 14:51:55.677823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.802 qpair failed and we were unable to recover it. 00:36:03.802 [2024-11-02 14:51:55.677984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-11-02 14:51:55.678010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.802 qpair failed and we were unable to recover it. 00:36:03.802 [2024-11-02 14:51:55.678138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-11-02 14:51:55.678163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.802 qpair failed and we were unable to recover it. 
00:36:03.802 [2024-11-02 14:51:55.678351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-11-02 14:51:55.678378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.802 qpair failed and we were unable to recover it. 00:36:03.802 [2024-11-02 14:51:55.678506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-11-02 14:51:55.678533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.802 qpair failed and we were unable to recover it. 00:36:03.802 [2024-11-02 14:51:55.678719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-11-02 14:51:55.678745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.802 qpair failed and we were unable to recover it. 00:36:03.802 [2024-11-02 14:51:55.678866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-11-02 14:51:55.678892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.802 qpair failed and we were unable to recover it. 00:36:03.802 [2024-11-02 14:51:55.679021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-11-02 14:51:55.679046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.802 qpair failed and we were unable to recover it. 00:36:03.802 [2024-11-02 14:51:55.679224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-11-02 14:51:55.679250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.802 qpair failed and we were unable to recover it. 00:36:03.802 [2024-11-02 14:51:55.679414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-11-02 14:51:55.679441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.802 qpair failed and we were unable to recover it. 00:36:03.802 [2024-11-02 14:51:55.679559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-11-02 14:51:55.679585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.802 qpair failed and we were unable to recover it. 00:36:03.802 [2024-11-02 14:51:55.679736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-11-02 14:51:55.679762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.802 qpair failed and we were unable to recover it. 00:36:03.802 [2024-11-02 14:51:55.679891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-11-02 14:51:55.679916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.802 qpair failed and we were unable to recover it. 
00:36:03.802 [2024-11-02 14:51:55.680099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-11-02 14:51:55.680125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.802 qpair failed and we were unable to recover it. 00:36:03.802 [2024-11-02 14:51:55.680244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-11-02 14:51:55.680275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.802 qpair failed and we were unable to recover it. 00:36:03.802 [2024-11-02 14:51:55.680427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-11-02 14:51:55.680453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.802 qpair failed and we were unable to recover it. 00:36:03.802 [2024-11-02 14:51:55.680571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-11-02 14:51:55.680596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.802 qpair failed and we were unable to recover it. 00:36:03.802 [2024-11-02 14:51:55.680745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-11-02 14:51:55.680771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.802 qpair failed and we were unable to recover it. 00:36:03.802 [2024-11-02 14:51:55.680919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-11-02 14:51:55.680946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.802 qpair failed and we were unable to recover it. 00:36:03.802 [2024-11-02 14:51:55.681072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-11-02 14:51:55.681097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.802 qpair failed and we were unable to recover it. 00:36:03.802 [2024-11-02 14:51:55.681263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-11-02 14:51:55.681290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.802 qpair failed and we were unable to recover it. 00:36:03.802 [2024-11-02 14:51:55.681441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-11-02 14:51:55.681467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.803 qpair failed and we were unable to recover it. 00:36:03.803 [2024-11-02 14:51:55.681582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.803 [2024-11-02 14:51:55.681608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.803 qpair failed and we were unable to recover it. 
00:36:03.803 [2024-11-02 14:51:55.681784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.803 [2024-11-02 14:51:55.681810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.803 qpair failed and we were unable to recover it. 00:36:03.803 [2024-11-02 14:51:55.681929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.803 [2024-11-02 14:51:55.681957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.803 qpair failed and we were unable to recover it. 00:36:03.803 [2024-11-02 14:51:55.682088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.803 [2024-11-02 14:51:55.682114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.803 qpair failed and we were unable to recover it. 00:36:03.803 [2024-11-02 14:51:55.682247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.803 [2024-11-02 14:51:55.682291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.803 qpair failed and we were unable to recover it. 00:36:03.803 [2024-11-02 14:51:55.682471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.803 [2024-11-02 14:51:55.682497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.803 qpair failed and we were unable to recover it. 00:36:03.803 [2024-11-02 14:51:55.682646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.803 [2024-11-02 14:51:55.682673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.803 qpair failed and we were unable to recover it. 00:36:03.803 [2024-11-02 14:51:55.682847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.803 [2024-11-02 14:51:55.682873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.803 qpair failed and we were unable to recover it. 00:36:03.803 [2024-11-02 14:51:55.683017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.803 [2024-11-02 14:51:55.683043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.803 qpair failed and we were unable to recover it. 00:36:03.803 [2024-11-02 14:51:55.683193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.803 [2024-11-02 14:51:55.683218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.803 qpair failed and we were unable to recover it. 00:36:03.803 [2024-11-02 14:51:55.683353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.803 [2024-11-02 14:51:55.683379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.803 qpair failed and we were unable to recover it. 
00:36:03.803 [2024-11-02 14:51:55.683553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.803 [2024-11-02 14:51:55.683584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.803 qpair failed and we were unable to recover it. 00:36:03.803 [2024-11-02 14:51:55.683723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.803 [2024-11-02 14:51:55.683749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.803 qpair failed and we were unable to recover it. 00:36:03.803 [2024-11-02 14:51:55.683865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.803 [2024-11-02 14:51:55.683890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.803 qpair failed and we were unable to recover it. 00:36:03.803 [2024-11-02 14:51:55.684072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.803 [2024-11-02 14:51:55.684098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.803 qpair failed and we were unable to recover it. 00:36:03.803 [2024-11-02 14:51:55.684215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.803 [2024-11-02 14:51:55.684240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.803 qpair failed and we were unable to recover it. 00:36:03.803 [2024-11-02 14:51:55.684416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.803 [2024-11-02 14:51:55.684443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.803 qpair failed and we were unable to recover it. 00:36:03.803 [2024-11-02 14:51:55.684595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.803 [2024-11-02 14:51:55.684622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.803 qpair failed and we were unable to recover it. 00:36:03.803 [2024-11-02 14:51:55.684777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.803 [2024-11-02 14:51:55.684802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.803 qpair failed and we were unable to recover it. 00:36:03.803 [2024-11-02 14:51:55.684920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.803 [2024-11-02 14:51:55.684947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.803 qpair failed and we were unable to recover it. 00:36:03.803 [2024-11-02 14:51:55.685071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.803 [2024-11-02 14:51:55.685097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.803 qpair failed and we were unable to recover it. 
00:36:03.803 [2024-11-02 14:51:55.685209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.803 [2024-11-02 14:51:55.685235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.803 qpair failed and we were unable to recover it. 00:36:03.803 [2024-11-02 14:51:55.685410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.803 [2024-11-02 14:51:55.685441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.803 qpair failed and we were unable to recover it. 00:36:03.803 [2024-11-02 14:51:55.685560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.803 [2024-11-02 14:51:55.685586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.803 qpair failed and we were unable to recover it. 00:36:03.803 [2024-11-02 14:51:55.685731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.803 [2024-11-02 14:51:55.685756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.803 qpair failed and we were unable to recover it. 00:36:03.803 [2024-11-02 14:51:55.685903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.803 [2024-11-02 14:51:55.685928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.803 qpair failed and we were unable to recover it. 00:36:03.803 [2024-11-02 14:51:55.686077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.803 [2024-11-02 14:51:55.686103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.803 qpair failed and we were unable to recover it. 00:36:03.803 [2024-11-02 14:51:55.686250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.803 [2024-11-02 14:51:55.686289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.803 qpair failed and we were unable to recover it. 00:36:03.803 [2024-11-02 14:51:55.686442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.803 [2024-11-02 14:51:55.686468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.803 qpair failed and we were unable to recover it. 00:36:03.803 [2024-11-02 14:51:55.686600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.803 [2024-11-02 14:51:55.686625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.803 qpair failed and we were unable to recover it. 00:36:03.803 [2024-11-02 14:51:55.686775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.803 [2024-11-02 14:51:55.686801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.803 qpair failed and we were unable to recover it. 
00:36:03.803 [2024-11-02 14:51:55.686968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.803 [2024-11-02 14:51:55.686993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.803 qpair failed and we were unable to recover it. 00:36:03.803 [2024-11-02 14:51:55.687167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.803 [2024-11-02 14:51:55.687192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.803 qpair failed and we were unable to recover it. 00:36:03.803 [2024-11-02 14:51:55.687339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.803 [2024-11-02 14:51:55.687365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.803 qpair failed and we were unable to recover it. 00:36:03.803 [2024-11-02 14:51:55.687484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.803 [2024-11-02 14:51:55.687510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.803 qpair failed and we were unable to recover it. 00:36:03.803 [2024-11-02 14:51:55.687630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.803 [2024-11-02 14:51:55.687655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.803 qpair failed and we were unable to recover it. 00:36:03.803 [2024-11-02 14:51:55.687826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.803 [2024-11-02 14:51:55.687852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.803 qpair failed and we were unable to recover it. 00:36:03.803 [2024-11-02 14:51:55.687997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.804 [2024-11-02 14:51:55.688021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.804 qpair failed and we were unable to recover it. 00:36:03.804 [2024-11-02 14:51:55.688144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.804 [2024-11-02 14:51:55.688168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.804 qpair failed and we were unable to recover it. 00:36:03.804 [2024-11-02 14:51:55.688314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.804 [2024-11-02 14:51:55.688340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.804 qpair failed and we were unable to recover it. 00:36:03.804 [2024-11-02 14:51:55.688491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.804 [2024-11-02 14:51:55.688516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.804 qpair failed and we were unable to recover it. 
00:36:03.804 [2024-11-02 14:51:55.688648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.804 [2024-11-02 14:51:55.688672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.804 qpair failed and we were unable to recover it. 00:36:03.804 [2024-11-02 14:51:55.688793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.804 [2024-11-02 14:51:55.688819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.804 qpair failed and we were unable to recover it. 00:36:03.804 [2024-11-02 14:51:55.688967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.804 [2024-11-02 14:51:55.688991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.804 qpair failed and we were unable to recover it. 00:36:03.804 [2024-11-02 14:51:55.689107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.804 [2024-11-02 14:51:55.689132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.804 qpair failed and we were unable to recover it. 00:36:03.804 [2024-11-02 14:51:55.689260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.804 [2024-11-02 14:51:55.689287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.804 qpair failed and we were unable to recover it. 00:36:03.804 [2024-11-02 14:51:55.689444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.804 [2024-11-02 14:51:55.689469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.804 qpair failed and we were unable to recover it. 00:36:03.804 [2024-11-02 14:51:55.689645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.804 [2024-11-02 14:51:55.689671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.804 qpair failed and we were unable to recover it. 00:36:03.804 [2024-11-02 14:51:55.689790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.804 [2024-11-02 14:51:55.689815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.804 qpair failed and we were unable to recover it. 00:36:03.804 [2024-11-02 14:51:55.689970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.804 [2024-11-02 14:51:55.689995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.804 qpair failed and we were unable to recover it. 00:36:03.804 [2024-11-02 14:51:55.690178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.804 [2024-11-02 14:51:55.690203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.804 qpair failed and we were unable to recover it. 
00:36:03.804 [2024-11-02 14:51:55.690326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.804 [2024-11-02 14:51:55.690357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.804 qpair failed and we were unable to recover it. 00:36:03.804 [2024-11-02 14:51:55.690477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.804 [2024-11-02 14:51:55.690503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.804 qpair failed and we were unable to recover it. 00:36:03.804 [2024-11-02 14:51:55.690650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.804 [2024-11-02 14:51:55.690677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.804 qpair failed and we were unable to recover it. 00:36:03.804 [2024-11-02 14:51:55.690822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.804 [2024-11-02 14:51:55.690848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.804 qpair failed and we were unable to recover it. 00:36:03.804 [2024-11-02 14:51:55.690998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.804 [2024-11-02 14:51:55.691022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.804 qpair failed and we were unable to recover it. 00:36:03.804 [2024-11-02 14:51:55.691180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.804 [2024-11-02 14:51:55.691205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.804 qpair failed and we were unable to recover it. 00:36:03.804 [2024-11-02 14:51:55.691316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.804 [2024-11-02 14:51:55.691341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.804 qpair failed and we were unable to recover it. 00:36:03.804 [2024-11-02 14:51:55.691458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.804 [2024-11-02 14:51:55.691483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.804 qpair failed and we were unable to recover it. 00:36:03.804 [2024-11-02 14:51:55.691618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.804 [2024-11-02 14:51:55.691643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.804 qpair failed and we were unable to recover it. 00:36:03.804 [2024-11-02 14:51:55.691784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.804 [2024-11-02 14:51:55.691810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.804 qpair failed and we were unable to recover it. 
00:36:03.804 [2024-11-02 14:51:55.691938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.804 [2024-11-02 14:51:55.691968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.804 qpair failed and we were unable to recover it. 00:36:03.804 [2024-11-02 14:51:55.692119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.804 [2024-11-02 14:51:55.692146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.804 qpair failed and we were unable to recover it. 00:36:03.804 [2024-11-02 14:51:55.692272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.804 [2024-11-02 14:51:55.692312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.804 qpair failed and we were unable to recover it. 00:36:03.804 [2024-11-02 14:51:55.692434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.804 [2024-11-02 14:51:55.692461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.804 qpair failed and we were unable to recover it. 00:36:03.804 [2024-11-02 14:51:55.692631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.804 [2024-11-02 14:51:55.692657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.804 qpair failed and we were unable to recover it. 00:36:03.804 [2024-11-02 14:51:55.692803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.804 [2024-11-02 14:51:55.692829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.804 qpair failed and we were unable to recover it. 00:36:03.804 [2024-11-02 14:51:55.692978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.804 [2024-11-02 14:51:55.693005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.804 qpair failed and we were unable to recover it. 00:36:03.804 [2024-11-02 14:51:55.693170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.804 [2024-11-02 14:51:55.693196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.804 qpair failed and we were unable to recover it. 00:36:03.804 [2024-11-02 14:51:55.693322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.804 [2024-11-02 14:51:55.693349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.804 qpair failed and we were unable to recover it. 00:36:03.804 [2024-11-02 14:51:55.693500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.804 [2024-11-02 14:51:55.693527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.804 qpair failed and we were unable to recover it. 
00:36:03.804 [2024-11-02 14:51:55.693649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.804 [2024-11-02 14:51:55.693676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.804 qpair failed and we were unable to recover it. 00:36:03.804 [2024-11-02 14:51:55.693802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.804 [2024-11-02 14:51:55.693830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.804 qpair failed and we were unable to recover it. 00:36:03.804 [2024-11-02 14:51:55.693967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.804 [2024-11-02 14:51:55.693993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.804 qpair failed and we were unable to recover it. 00:36:03.804 [2024-11-02 14:51:55.694112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.804 [2024-11-02 14:51:55.694137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.804 qpair failed and we were unable to recover it. 00:36:03.804 [2024-11-02 14:51:55.694270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.804 [2024-11-02 14:51:55.694297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.804 qpair failed and we were unable to recover it. 00:36:03.805 [2024-11-02 14:51:55.694455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.805 [2024-11-02 14:51:55.694482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.805 qpair failed and we were unable to recover it. 00:36:03.805 [2024-11-02 14:51:55.694598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.805 [2024-11-02 14:51:55.694624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.805 qpair failed and we were unable to recover it. 00:36:03.805 [2024-11-02 14:51:55.694765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.805 [2024-11-02 14:51:55.694791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.805 qpair failed and we were unable to recover it. 00:36:03.805 [2024-11-02 14:51:55.694968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.805 [2024-11-02 14:51:55.694994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.805 qpair failed and we were unable to recover it. 00:36:03.805 [2024-11-02 14:51:55.695162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.805 [2024-11-02 14:51:55.695187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.805 qpair failed and we were unable to recover it. 
00:36:03.805 [2024-11-02 14:51:55.695335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.805 [2024-11-02 14:51:55.695362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.805 qpair failed and we were unable to recover it. 00:36:03.805 [2024-11-02 14:51:55.695480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.805 [2024-11-02 14:51:55.695506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.805 qpair failed and we were unable to recover it. 00:36:03.805 [2024-11-02 14:51:55.695651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.805 [2024-11-02 14:51:55.695678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.805 qpair failed and we were unable to recover it. 00:36:03.805 [2024-11-02 14:51:55.695799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.805 [2024-11-02 14:51:55.695825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.805 qpair failed and we were unable to recover it. 00:36:03.805 [2024-11-02 14:51:55.695998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.805 [2024-11-02 14:51:55.696023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.805 qpair failed and we were unable to recover it. 00:36:03.805 [2024-11-02 14:51:55.696168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.805 [2024-11-02 14:51:55.696194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.805 qpair failed and we were unable to recover it. 00:36:03.805 [2024-11-02 14:51:55.696344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.805 [2024-11-02 14:51:55.696372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.805 qpair failed and we were unable to recover it. 00:36:03.805 [2024-11-02 14:51:55.696488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.805 [2024-11-02 14:51:55.696515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.805 qpair failed and we were unable to recover it. 00:36:03.805 [2024-11-02 14:51:55.696659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.805 [2024-11-02 14:51:55.696685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.805 qpair failed and we were unable to recover it. 00:36:03.805 [2024-11-02 14:51:55.696833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.805 [2024-11-02 14:51:55.696859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.805 qpair failed and we were unable to recover it. 
00:36:03.805 [2024-11-02 14:51:55.697007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.805 [2024-11-02 14:51:55.697038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.805 qpair failed and we were unable to recover it. 00:36:03.805 [2024-11-02 14:51:55.697187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.805 [2024-11-02 14:51:55.697213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.805 qpair failed and we were unable to recover it. 00:36:03.805 [2024-11-02 14:51:55.697370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.805 [2024-11-02 14:51:55.697397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.805 qpair failed and we were unable to recover it. 00:36:03.805 [2024-11-02 14:51:55.697552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.805 [2024-11-02 14:51:55.697580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.805 qpair failed and we were unable to recover it. 00:36:03.805 [2024-11-02 14:51:55.697759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.805 [2024-11-02 14:51:55.697785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.805 qpair failed and we were unable to recover it. 00:36:03.805 [2024-11-02 14:51:55.697903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.805 [2024-11-02 14:51:55.697929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.805 qpair failed and we were unable to recover it. 00:36:03.805 [2024-11-02 14:51:55.698055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.805 [2024-11-02 14:51:55.698082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.805 qpair failed and we were unable to recover it. 00:36:03.805 [2024-11-02 14:51:55.698207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.805 [2024-11-02 14:51:55.698234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.805 qpair failed and we were unable to recover it. 00:36:03.805 [2024-11-02 14:51:55.698376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.805 [2024-11-02 14:51:55.698403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.805 qpair failed and we were unable to recover it. 00:36:03.805 [2024-11-02 14:51:55.698583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.805 [2024-11-02 14:51:55.698609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.805 qpair failed and we were unable to recover it. 
00:36:03.805 [2024-11-02 14:51:55.698721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.805 [2024-11-02 14:51:55.698746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.805 qpair failed and we were unable to recover it. 00:36:03.805 [2024-11-02 14:51:55.698907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.805 [2024-11-02 14:51:55.698933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.805 qpair failed and we were unable to recover it. 00:36:03.805 [2024-11-02 14:51:55.699054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.805 [2024-11-02 14:51:55.699079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.805 qpair failed and we were unable to recover it. 00:36:03.805 [2024-11-02 14:51:55.699227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.805 [2024-11-02 14:51:55.699253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.805 qpair failed and we were unable to recover it. 00:36:03.805 [2024-11-02 14:51:55.699458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.805 [2024-11-02 14:51:55.699484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.805 qpair failed and we were unable to recover it. 00:36:03.805 [2024-11-02 14:51:55.699635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.805 [2024-11-02 14:51:55.699660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.805 qpair failed and we were unable to recover it. 00:36:03.805 [2024-11-02 14:51:55.699808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.805 [2024-11-02 14:51:55.699834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.805 qpair failed and we were unable to recover it. 00:36:03.805 [2024-11-02 14:51:55.699960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.805 [2024-11-02 14:51:55.699992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.805 qpair failed and we were unable to recover it. 00:36:03.805 [2024-11-02 14:51:55.700141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.805 [2024-11-02 14:51:55.700167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.805 qpair failed and we were unable to recover it. 00:36:03.805 [2024-11-02 14:51:55.700314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.805 [2024-11-02 14:51:55.700341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.805 qpair failed and we were unable to recover it. 
00:36:03.805 [2024-11-02 14:51:55.700490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.805 [2024-11-02 14:51:55.700516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.805 qpair failed and we were unable to recover it. 00:36:03.805 [2024-11-02 14:51:55.700671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.805 [2024-11-02 14:51:55.700696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.805 qpair failed and we were unable to recover it. 00:36:03.805 [2024-11-02 14:51:55.700817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.805 [2024-11-02 14:51:55.700843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.805 qpair failed and we were unable to recover it. 00:36:03.805 [2024-11-02 14:51:55.700969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.806 [2024-11-02 14:51:55.700994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.806 qpair failed and we were unable to recover it. 00:36:03.806 [2024-11-02 14:51:55.701170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.806 [2024-11-02 14:51:55.701197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.806 qpair failed and we were unable to recover it. 00:36:03.806 [2024-11-02 14:51:55.701344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.806 [2024-11-02 14:51:55.701371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.806 qpair failed and we were unable to recover it. 00:36:03.806 [2024-11-02 14:51:55.701571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.806 [2024-11-02 14:51:55.701597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.806 qpair failed and we were unable to recover it. 00:36:03.806 [2024-11-02 14:51:55.701792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.806 [2024-11-02 14:51:55.701817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.806 qpair failed and we were unable to recover it. 00:36:03.806 [2024-11-02 14:51:55.701947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.806 [2024-11-02 14:51:55.701973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.806 qpair failed and we were unable to recover it. 00:36:03.806 [2024-11-02 14:51:55.702091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.806 [2024-11-02 14:51:55.702117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.806 qpair failed and we were unable to recover it. 
00:36:03.806 [2024-11-02 14:51:55.702251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.806 [2024-11-02 14:51:55.702285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.806 qpair failed and we were unable to recover it. 00:36:03.806 [2024-11-02 14:51:55.702440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.806 [2024-11-02 14:51:55.702466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.806 qpair failed and we were unable to recover it. 00:36:03.806 [2024-11-02 14:51:55.702616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.806 [2024-11-02 14:51:55.702641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.806 qpair failed and we were unable to recover it. 00:36:03.806 [2024-11-02 14:51:55.702792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.806 [2024-11-02 14:51:55.702819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.806 qpair failed and we were unable to recover it. 00:36:03.806 [2024-11-02 14:51:55.702968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.806 [2024-11-02 14:51:55.702994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.806 qpair failed and we were unable to recover it. 00:36:03.806 [2024-11-02 14:51:55.703176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.806 [2024-11-02 14:51:55.703202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.806 qpair failed and we were unable to recover it. 00:36:03.806 [2024-11-02 14:51:55.703384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.806 [2024-11-02 14:51:55.703411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.806 qpair failed and we were unable to recover it. 00:36:03.806 [2024-11-02 14:51:55.703572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.806 [2024-11-02 14:51:55.703598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.806 qpair failed and we were unable to recover it. 00:36:03.806 [2024-11-02 14:51:55.703749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.806 [2024-11-02 14:51:55.703775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.806 qpair failed and we were unable to recover it. 00:36:03.806 [2024-11-02 14:51:55.703892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.806 [2024-11-02 14:51:55.703918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.806 qpair failed and we were unable to recover it. 
00:36:03.806 [2024-11-02 14:51:55.704093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.806 [2024-11-02 14:51:55.704122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.806 qpair failed and we were unable to recover it. 00:36:03.806 [2024-11-02 14:51:55.704254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.806 [2024-11-02 14:51:55.704288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.806 qpair failed and we were unable to recover it. 00:36:03.806 [2024-11-02 14:51:55.704439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.806 [2024-11-02 14:51:55.704467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.806 qpair failed and we were unable to recover it. 00:36:03.806 [2024-11-02 14:51:55.704686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.806 [2024-11-02 14:51:55.704712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.806 qpair failed and we were unable to recover it. 00:36:03.806 [2024-11-02 14:51:55.704892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.806 [2024-11-02 14:51:55.704918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.806 qpair failed and we were unable to recover it. 00:36:03.806 [2024-11-02 14:51:55.705064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.806 [2024-11-02 14:51:55.705090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.806 qpair failed and we were unable to recover it. 00:36:03.806 [2024-11-02 14:51:55.705220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.806 [2024-11-02 14:51:55.705246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.806 qpair failed and we were unable to recover it. 00:36:03.806 [2024-11-02 14:51:55.705386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.806 [2024-11-02 14:51:55.705412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.806 qpair failed and we were unable to recover it. 00:36:03.806 [2024-11-02 14:51:55.705570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.806 [2024-11-02 14:51:55.705596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.806 qpair failed and we were unable to recover it. 00:36:03.806 [2024-11-02 14:51:55.705724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.806 [2024-11-02 14:51:55.705750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.806 qpair failed and we were unable to recover it. 
00:36:03.806 [2024-11-02 14:51:55.705890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.806 [2024-11-02 14:51:55.705917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.806 qpair failed and we were unable to recover it. 00:36:03.806 [2024-11-02 14:51:55.706038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.806 [2024-11-02 14:51:55.706064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.806 qpair failed and we were unable to recover it. 00:36:03.806 [2024-11-02 14:51:55.706212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.806 [2024-11-02 14:51:55.706239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.806 qpair failed and we were unable to recover it. 00:36:03.806 [2024-11-02 14:51:55.706375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.806 [2024-11-02 14:51:55.706401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.806 qpair failed and we were unable to recover it. 00:36:03.806 [2024-11-02 14:51:55.706553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.806 [2024-11-02 14:51:55.706581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.806 qpair failed and we were unable to recover it. 00:36:03.806 [2024-11-02 14:51:55.706706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.806 [2024-11-02 14:51:55.706732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.806 qpair failed and we were unable to recover it. 00:36:03.806 [2024-11-02 14:51:55.706889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.806 [2024-11-02 14:51:55.706915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.806 qpair failed and we were unable to recover it. 00:36:03.806 [2024-11-02 14:51:55.707060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.806 [2024-11-02 14:51:55.707086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.806 qpair failed and we were unable to recover it. 00:36:03.806 [2024-11-02 14:51:55.707234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.806 [2024-11-02 14:51:55.707268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.806 qpair failed and we were unable to recover it. 00:36:03.806 [2024-11-02 14:51:55.707411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.806 [2024-11-02 14:51:55.707437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.806 qpair failed and we were unable to recover it. 
00:36:03.806 [2024-11-02 14:51:55.707605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.806 [2024-11-02 14:51:55.707631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.806 qpair failed and we were unable to recover it. 00:36:03.806 [2024-11-02 14:51:55.707757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.807 [2024-11-02 14:51:55.707782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.807 qpair failed and we were unable to recover it. 00:36:03.807 [2024-11-02 14:51:55.707958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.807 [2024-11-02 14:51:55.707984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.807 qpair failed and we were unable to recover it. 00:36:03.807 [2024-11-02 14:51:55.708134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.807 [2024-11-02 14:51:55.708160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.807 qpair failed and we were unable to recover it. 00:36:03.807 [2024-11-02 14:51:55.708319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.807 [2024-11-02 14:51:55.708346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.807 qpair failed and we were unable to recover it. 00:36:03.807 [2024-11-02 14:51:55.708474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.807 [2024-11-02 14:51:55.708501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.807 qpair failed and we were unable to recover it. 00:36:03.807 [2024-11-02 14:51:55.708664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.807 [2024-11-02 14:51:55.708691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.807 qpair failed and we were unable to recover it. 00:36:03.807 [2024-11-02 14:51:55.708852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.807 [2024-11-02 14:51:55.708877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.807 qpair failed and we were unable to recover it. 00:36:03.807 [2024-11-02 14:51:55.709022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.807 [2024-11-02 14:51:55.709048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.807 qpair failed and we were unable to recover it. 00:36:03.807 [2024-11-02 14:51:55.709213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.807 [2024-11-02 14:51:55.709240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.807 qpair failed and we were unable to recover it. 
00:36:03.807 [2024-11-02 14:51:55.709423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.807 [2024-11-02 14:51:55.709449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420
00:36:03.807 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats with timestamps advancing from 14:51:55.709 through 14:51:55.742 ...]
00:36:03.812 [2024-11-02 14:51:55.743140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.812 [2024-11-02 14:51:55.743174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420
00:36:03.812 qpair failed and we were unable to recover it.
[... the same sequence then repeats for tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 through 14:51:55.745, each attempt ending with "qpair failed and we were unable to recover it." ...]
00:36:03.813 [2024-11-02 14:51:55.745402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.813 [2024-11-02 14:51:55.745433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.813 qpair failed and we were unable to recover it. 00:36:03.813 [2024-11-02 14:51:55.745558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.813 [2024-11-02 14:51:55.745584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.813 qpair failed and we were unable to recover it. 00:36:03.813 [2024-11-02 14:51:55.745737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.813 [2024-11-02 14:51:55.745763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.813 qpair failed and we were unable to recover it. 00:36:03.813 [2024-11-02 14:51:55.745890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.813 [2024-11-02 14:51:55.745917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.813 qpair failed and we were unable to recover it. 00:36:03.813 [2024-11-02 14:51:55.746047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.813 [2024-11-02 14:51:55.746073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.813 qpair failed and we were unable to recover it. 00:36:03.813 [2024-11-02 14:51:55.746219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.813 [2024-11-02 14:51:55.746245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.813 qpair failed and we were unable to recover it. 00:36:03.813 [2024-11-02 14:51:55.746385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.813 [2024-11-02 14:51:55.746411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.813 qpair failed and we were unable to recover it. 00:36:03.813 [2024-11-02 14:51:55.746558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.813 [2024-11-02 14:51:55.746583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.813 qpair failed and we were unable to recover it. 00:36:03.813 [2024-11-02 14:51:55.746709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.813 [2024-11-02 14:51:55.746735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.813 qpair failed and we were unable to recover it. 00:36:03.813 [2024-11-02 14:51:55.746862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.813 [2024-11-02 14:51:55.746888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.813 qpair failed and we were unable to recover it. 
00:36:03.813 [2024-11-02 14:51:55.747031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.813 [2024-11-02 14:51:55.747073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:03.813 qpair failed and we were unable to recover it. 00:36:03.813 [2024-11-02 14:51:55.747232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.813 [2024-11-02 14:51:55.747278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.813 qpair failed and we were unable to recover it. 00:36:03.813 [2024-11-02 14:51:55.747463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.813 [2024-11-02 14:51:55.747492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.813 qpair failed and we were unable to recover it. 00:36:03.813 [2024-11-02 14:51:55.747622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.813 [2024-11-02 14:51:55.747648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.813 qpair failed and we were unable to recover it. 00:36:03.813 [2024-11-02 14:51:55.747781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.813 [2024-11-02 14:51:55.747809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.813 qpair failed and we were unable to recover it. 00:36:03.813 [2024-11-02 14:51:55.747943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.813 [2024-11-02 14:51:55.747968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.813 qpair failed and we were unable to recover it. 00:36:03.813 [2024-11-02 14:51:55.748097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.813 [2024-11-02 14:51:55.748123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.813 qpair failed and we were unable to recover it. 00:36:03.813 [2024-11-02 14:51:55.748237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.813 [2024-11-02 14:51:55.748268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.813 qpair failed and we were unable to recover it. 00:36:03.813 [2024-11-02 14:51:55.748370] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:03.813 [2024-11-02 14:51:55.748396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.813 [2024-11-02 14:51:55.748406] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:36:03.813 [2024-11-02 14:51:55.748421] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:03.813 [2024-11-02 14:51:55.748422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.813 [2024-11-02 14:51:55.748436] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:03.813 qpair failed and we were unable to recover it. 00:36:03.813 [2024-11-02 14:51:55.748447] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:03.813 [2024-11-02 14:51:55.748545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.813 [2024-11-02 14:51:55.748570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.813 qpair failed and we were unable to recover it. 00:36:03.813 [2024-11-02 14:51:55.748689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.813 [2024-11-02 14:51:55.748718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.813 qpair failed and we were unable to recover it. 00:36:03.813 [2024-11-02 14:51:55.748715] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:36:03.813 [2024-11-02 14:51:55.748767] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:36:03.813 [2024-11-02 14:51:55.748845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.813 [2024-11-02 14:51:55.748870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.813 qpair failed and we were unable to recover it. 00:36:03.813 [2024-11-02 14:51:55.748817] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:36:03.813 [2024-11-02 14:51:55.748820] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:36:03.813 [2024-11-02 14:51:55.749019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.813 [2024-11-02 14:51:55.749044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.813 qpair failed and we were unable to recover it. 00:36:03.813 [2024-11-02 14:51:55.749173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.813 [2024-11-02 14:51:55.749199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.813 qpair failed and we were unable to recover it. 00:36:03.813 [2024-11-02 14:51:55.749337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.813 [2024-11-02 14:51:55.749363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.813 qpair failed and we were unable to recover it. 00:36:03.813 [2024-11-02 14:51:55.749486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.813 [2024-11-02 14:51:55.749514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.813 qpair failed and we were unable to recover it.
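Not part of the captured output: the repeated pair above is the SPDK host side reporting errno 111, which on Linux is ECONNREFUSED. Nothing is accepting TCP connections on 10.0.0.2:4420 at that moment, so connect() inside posix_sock_create() fails and nvme_tcp_qpair_connect_sock() gives up on the qpair. A minimal standalone sketch of the same condition follows; the address and port merely mirror the values seen in this log and are assumptions, not a live target, and with no listener present connect() returns errno 111 (Connection refused).

/* sketch: connect() to a port with no listener -> errno 111 (ECONNREFUSED) */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                    /* NVMe/TCP port used throughout this log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr); /* target address taken from the log (assumed) */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* with nothing listening on the target this prints: errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}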
00:36:03.813 [2024-11-02 14:51:55.749652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.813 [2024-11-02 14:51:55.749679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.813 qpair failed and we were unable to recover it. 00:36:03.813 [2024-11-02 14:51:55.749806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.813 [2024-11-02 14:51:55.749832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.813 qpair failed and we were unable to recover it. 00:36:03.813 [2024-11-02 14:51:55.749980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.813 [2024-11-02 14:51:55.750005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.813 qpair failed and we were unable to recover it. 00:36:03.813 [2024-11-02 14:51:55.750179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.813 [2024-11-02 14:51:55.750204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.813 qpair failed and we were unable to recover it. 00:36:03.813 [2024-11-02 14:51:55.750361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.813 [2024-11-02 14:51:55.750387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.813 qpair failed and we were unable to recover it. 00:36:03.813 [2024-11-02 14:51:55.750509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.813 [2024-11-02 14:51:55.750537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.813 qpair failed and we were unable to recover it. 00:36:03.813 [2024-11-02 14:51:55.750672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.813 [2024-11-02 14:51:55.750698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.813 qpair failed and we were unable to recover it. 00:36:03.813 [2024-11-02 14:51:55.750822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.813 [2024-11-02 14:51:55.750847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.813 qpair failed and we were unable to recover it. 00:36:03.813 [2024-11-02 14:51:55.751019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.813 [2024-11-02 14:51:55.751045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.813 qpair failed and we were unable to recover it. 00:36:03.813 [2024-11-02 14:51:55.751207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.814 [2024-11-02 14:51:55.751233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.814 qpair failed and we were unable to recover it. 
00:36:03.814 [2024-11-02 14:51:55.751375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.814 [2024-11-02 14:51:55.751401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.814 qpair failed and we were unable to recover it. 00:36:03.814 [2024-11-02 14:51:55.751568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.814 [2024-11-02 14:51:55.751598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.814 qpair failed and we were unable to recover it. 00:36:03.814 [2024-11-02 14:51:55.751718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.814 [2024-11-02 14:51:55.751745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.814 qpair failed and we were unable to recover it. 00:36:03.814 [2024-11-02 14:51:55.751867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.814 [2024-11-02 14:51:55.751893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.814 qpair failed and we were unable to recover it. 00:36:03.814 [2024-11-02 14:51:55.752039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.814 [2024-11-02 14:51:55.752066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.814 qpair failed and we were unable to recover it. 00:36:03.814 [2024-11-02 14:51:55.752224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.814 [2024-11-02 14:51:55.752250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.814 qpair failed and we were unable to recover it. 00:36:03.814 [2024-11-02 14:51:55.752392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.814 [2024-11-02 14:51:55.752419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.814 qpair failed and we were unable to recover it. 00:36:03.814 [2024-11-02 14:51:55.752581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.814 [2024-11-02 14:51:55.752607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.814 qpair failed and we were unable to recover it. 00:36:03.814 [2024-11-02 14:51:55.752745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.814 [2024-11-02 14:51:55.752770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.814 qpair failed and we were unable to recover it. 00:36:03.814 [2024-11-02 14:51:55.752919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.814 [2024-11-02 14:51:55.752945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.814 qpair failed and we were unable to recover it. 
00:36:03.814 [2024-11-02 14:51:55.753068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.814 [2024-11-02 14:51:55.753094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.814 qpair failed and we were unable to recover it. 00:36:03.814 [2024-11-02 14:51:55.753213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.814 [2024-11-02 14:51:55.753239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.814 qpair failed and we were unable to recover it. 00:36:03.814 [2024-11-02 14:51:55.753386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.814 [2024-11-02 14:51:55.753412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.814 qpair failed and we were unable to recover it. 00:36:03.814 [2024-11-02 14:51:55.753573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.814 [2024-11-02 14:51:55.753599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.814 qpair failed and we were unable to recover it. 00:36:03.814 [2024-11-02 14:51:55.753785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.814 [2024-11-02 14:51:55.753811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.814 qpair failed and we were unable to recover it. 00:36:03.814 [2024-11-02 14:51:55.753981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.814 [2024-11-02 14:51:55.754007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.814 qpair failed and we were unable to recover it. 00:36:03.814 [2024-11-02 14:51:55.754127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.814 [2024-11-02 14:51:55.754154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.814 qpair failed and we were unable to recover it. 00:36:03.814 [2024-11-02 14:51:55.754278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.814 [2024-11-02 14:51:55.754317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.814 qpair failed and we were unable to recover it. 00:36:03.814 [2024-11-02 14:51:55.754463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.814 [2024-11-02 14:51:55.754489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.814 qpair failed and we were unable to recover it. 00:36:03.814 [2024-11-02 14:51:55.754640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.814 [2024-11-02 14:51:55.754666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.814 qpair failed and we were unable to recover it. 
00:36:03.814 [2024-11-02 14:51:55.754802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.814 [2024-11-02 14:51:55.754828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.814 qpair failed and we were unable to recover it. 00:36:03.814 [2024-11-02 14:51:55.754947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.814 [2024-11-02 14:51:55.754972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.814 qpair failed and we were unable to recover it. 00:36:03.814 [2024-11-02 14:51:55.755090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.814 [2024-11-02 14:51:55.755116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.814 qpair failed and we were unable to recover it. 00:36:03.814 [2024-11-02 14:51:55.755242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.814 [2024-11-02 14:51:55.755283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.814 qpair failed and we were unable to recover it. 00:36:03.814 [2024-11-02 14:51:55.755466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.814 [2024-11-02 14:51:55.755492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.814 qpair failed and we were unable to recover it. 00:36:03.814 [2024-11-02 14:51:55.755651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.814 [2024-11-02 14:51:55.755676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.814 qpair failed and we were unable to recover it. 00:36:03.814 [2024-11-02 14:51:55.755835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.814 [2024-11-02 14:51:55.755863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.814 qpair failed and we were unable to recover it. 00:36:03.814 [2024-11-02 14:51:55.756042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.814 [2024-11-02 14:51:55.756069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.814 qpair failed and we were unable to recover it. 00:36:03.814 [2024-11-02 14:51:55.756211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.814 [2024-11-02 14:51:55.756237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.814 qpair failed and we were unable to recover it. 00:36:03.814 [2024-11-02 14:51:55.756372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.814 [2024-11-02 14:51:55.756398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.814 qpair failed and we were unable to recover it. 
00:36:03.814 [2024-11-02 14:51:55.756521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.814 [2024-11-02 14:51:55.756547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.814 qpair failed and we were unable to recover it. 00:36:03.814 [2024-11-02 14:51:55.756701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.814 [2024-11-02 14:51:55.756727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.814 qpair failed and we were unable to recover it. 00:36:03.814 [2024-11-02 14:51:55.756848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.814 [2024-11-02 14:51:55.756874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.814 qpair failed and we were unable to recover it. 00:36:03.814 [2024-11-02 14:51:55.756993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.814 [2024-11-02 14:51:55.757019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.814 qpair failed and we were unable to recover it. 00:36:03.814 [2024-11-02 14:51:55.757135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.814 [2024-11-02 14:51:55.757161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.814 qpair failed and we were unable to recover it. 00:36:03.814 [2024-11-02 14:51:55.757284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.814 [2024-11-02 14:51:55.757312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.814 qpair failed and we were unable to recover it. 00:36:03.814 [2024-11-02 14:51:55.757437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.814 [2024-11-02 14:51:55.757463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.814 qpair failed and we were unable to recover it. 00:36:03.814 [2024-11-02 14:51:55.757575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.814 [2024-11-02 14:51:55.757601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.815 qpair failed and we were unable to recover it. 00:36:03.815 [2024-11-02 14:51:55.757725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.815 [2024-11-02 14:51:55.757750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.815 qpair failed and we were unable to recover it. 00:36:03.815 [2024-11-02 14:51:55.757877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.815 [2024-11-02 14:51:55.757903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.815 qpair failed and we were unable to recover it. 
00:36:03.815 [2024-11-02 14:51:55.758050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.815 [2024-11-02 14:51:55.758076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.815 qpair failed and we were unable to recover it. 00:36:03.815 [2024-11-02 14:51:55.758216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.815 [2024-11-02 14:51:55.758248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.815 qpair failed and we were unable to recover it. 00:36:03.815 [2024-11-02 14:51:55.758392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.815 [2024-11-02 14:51:55.758418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.815 qpair failed and we were unable to recover it. 00:36:03.815 [2024-11-02 14:51:55.758540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.815 [2024-11-02 14:51:55.758566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.815 qpair failed and we were unable to recover it. 00:36:03.815 [2024-11-02 14:51:55.758719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.815 [2024-11-02 14:51:55.758745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.815 qpair failed and we were unable to recover it. 00:36:03.815 [2024-11-02 14:51:55.758862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.815 [2024-11-02 14:51:55.758888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.815 qpair failed and we were unable to recover it. 00:36:03.815 [2024-11-02 14:51:55.759018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.815 [2024-11-02 14:51:55.759049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.815 qpair failed and we were unable to recover it. 00:36:03.815 [2024-11-02 14:51:55.759207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.815 [2024-11-02 14:51:55.759233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.815 qpair failed and we were unable to recover it. 00:36:03.815 [2024-11-02 14:51:55.759394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.815 [2024-11-02 14:51:55.759420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.815 qpair failed and we were unable to recover it. 00:36:03.815 [2024-11-02 14:51:55.759534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.815 [2024-11-02 14:51:55.759561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.815 qpair failed and we were unable to recover it. 
00:36:03.815 [2024-11-02 14:51:55.759692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.815 [2024-11-02 14:51:55.759717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.815 qpair failed and we were unable to recover it. 00:36:03.815 [2024-11-02 14:51:55.759836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.815 [2024-11-02 14:51:55.759862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.815 qpair failed and we were unable to recover it. 00:36:03.815 [2024-11-02 14:51:55.759978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.815 [2024-11-02 14:51:55.760004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.815 qpair failed and we were unable to recover it. 00:36:03.815 [2024-11-02 14:51:55.760153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.815 [2024-11-02 14:51:55.760178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.815 qpair failed and we were unable to recover it. 00:36:03.815 [2024-11-02 14:51:55.760328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.815 [2024-11-02 14:51:55.760354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.815 qpair failed and we were unable to recover it. 00:36:03.815 [2024-11-02 14:51:55.760494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.815 [2024-11-02 14:51:55.760520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.815 qpair failed and we were unable to recover it. 00:36:03.815 [2024-11-02 14:51:55.760697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.815 [2024-11-02 14:51:55.760723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.815 qpair failed and we were unable to recover it. 00:36:03.815 [2024-11-02 14:51:55.760878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.815 [2024-11-02 14:51:55.760904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.815 qpair failed and we were unable to recover it. 00:36:03.815 [2024-11-02 14:51:55.761030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.815 [2024-11-02 14:51:55.761056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.815 qpair failed and we were unable to recover it. 00:36:03.815 [2024-11-02 14:51:55.761178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.815 [2024-11-02 14:51:55.761205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.815 qpair failed and we were unable to recover it. 
00:36:03.815 [2024-11-02 14:51:55.761353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.815 [2024-11-02 14:51:55.761379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.815 qpair failed and we were unable to recover it. 00:36:03.815 [2024-11-02 14:51:55.761525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.815 [2024-11-02 14:51:55.761560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.815 qpair failed and we were unable to recover it. 00:36:03.815 [2024-11-02 14:51:55.761685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.815 [2024-11-02 14:51:55.761711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.815 qpair failed and we were unable to recover it. 00:36:03.815 [2024-11-02 14:51:55.761828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.815 [2024-11-02 14:51:55.761853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:03.815 qpair failed and we were unable to recover it. 00:36:03.815 [2024-11-02 14:51:55.761988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.815 [2024-11-02 14:51:55.762029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.815 qpair failed and we were unable to recover it. 00:36:03.815 [2024-11-02 14:51:55.762168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.815 [2024-11-02 14:51:55.762195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.815 qpair failed and we were unable to recover it. 00:36:03.815 [2024-11-02 14:51:55.762365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.815 [2024-11-02 14:51:55.762392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.815 qpair failed and we were unable to recover it. 00:36:03.815 [2024-11-02 14:51:55.762507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.815 [2024-11-02 14:51:55.762533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.815 qpair failed and we were unable to recover it. 00:36:03.815 [2024-11-02 14:51:55.762706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.815 [2024-11-02 14:51:55.762734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.815 qpair failed and we were unable to recover it. 00:36:03.815 [2024-11-02 14:51:55.762887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.815 [2024-11-02 14:51:55.762913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.815 qpair failed and we were unable to recover it. 
00:36:03.815 [2024-11-02 14:51:55.763036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.815 [2024-11-02 14:51:55.763062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.815 qpair failed and we were unable to recover it. 00:36:03.816 [2024-11-02 14:51:55.763249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.816 [2024-11-02 14:51:55.763291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.816 qpair failed and we were unable to recover it. 00:36:03.816 [2024-11-02 14:51:55.763528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.816 [2024-11-02 14:51:55.763554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.816 qpair failed and we were unable to recover it. 00:36:03.816 [2024-11-02 14:51:55.763684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.816 [2024-11-02 14:51:55.763710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.816 qpair failed and we were unable to recover it. 00:36:03.816 [2024-11-02 14:51:55.763939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.816 [2024-11-02 14:51:55.763967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.816 qpair failed and we were unable to recover it. 00:36:03.816 [2024-11-02 14:51:55.764114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.816 [2024-11-02 14:51:55.764140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.816 qpair failed and we were unable to recover it. 00:36:03.816 [2024-11-02 14:51:55.764292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.816 [2024-11-02 14:51:55.764324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.816 qpair failed and we were unable to recover it. 00:36:03.816 [2024-11-02 14:51:55.764450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.816 [2024-11-02 14:51:55.764477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.816 qpair failed and we were unable to recover it. 00:36:03.816 [2024-11-02 14:51:55.764619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.816 [2024-11-02 14:51:55.764645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.816 qpair failed and we were unable to recover it. 00:36:03.816 [2024-11-02 14:51:55.764764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.816 [2024-11-02 14:51:55.764790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.816 qpair failed and we were unable to recover it. 
00:36:03.816 [2024-11-02 14:51:55.764908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.816 [2024-11-02 14:51:55.764933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.816 qpair failed and we were unable to recover it. 00:36:03.816 [2024-11-02 14:51:55.765087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.816 [2024-11-02 14:51:55.765119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.816 qpair failed and we were unable to recover it. 00:36:03.816 [2024-11-02 14:51:55.765350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.816 [2024-11-02 14:51:55.765378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.816 qpair failed and we were unable to recover it. 00:36:03.816 [2024-11-02 14:51:55.765512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.816 [2024-11-02 14:51:55.765538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.816 qpair failed and we were unable to recover it. 00:36:03.816 [2024-11-02 14:51:55.765658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.816 [2024-11-02 14:51:55.765683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.816 qpair failed and we were unable to recover it. 00:36:03.816 [2024-11-02 14:51:55.765857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.816 [2024-11-02 14:51:55.765883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.816 qpair failed and we were unable to recover it. 00:36:03.816 [2024-11-02 14:51:55.766077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.816 [2024-11-02 14:51:55.766102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.816 qpair failed and we were unable to recover it. 00:36:03.816 [2024-11-02 14:51:55.766260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.816 [2024-11-02 14:51:55.766287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.816 qpair failed and we were unable to recover it. 00:36:03.816 [2024-11-02 14:51:55.766502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.816 [2024-11-02 14:51:55.766528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.816 qpair failed and we were unable to recover it. 00:36:03.816 [2024-11-02 14:51:55.766652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.816 [2024-11-02 14:51:55.766678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:03.816 qpair failed and we were unable to recover it. 
00:36:03.816 [2024-11-02 14:51:55.766826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.816 [2024-11-02 14:51:55.766852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420
00:36:03.816 qpair failed and we were unable to recover it.
00:36:03.817 [... the same three-line sequence -- posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error / "qpair failed and we were unable to recover it." -- repeats continuously from 2024-11-02 14:51:55.766 to 14:51:55.802 (elapsed 00:36:03.816 through 00:36:04.097), cycling over tqpair=0x7f54c0000b90, 0x7f54c8000b90, 0x7f54bc000b90 and 0x648340, all against addr=10.0.0.2, port=4420 ...]
00:36:04.097 [2024-11-02 14:51:55.802004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.097 [2024-11-02 14:51:55.802030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420
00:36:04.097 qpair failed and we were unable to recover it.
00:36:04.097 [2024-11-02 14:51:55.802155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.097 [2024-11-02 14:51:55.802180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.097 qpair failed and we were unable to recover it. 00:36:04.097 [2024-11-02 14:51:55.802311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.097 [2024-11-02 14:51:55.802337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.097 qpair failed and we were unable to recover it. 00:36:04.097 [2024-11-02 14:51:55.802461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.097 [2024-11-02 14:51:55.802487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.097 qpair failed and we were unable to recover it. 00:36:04.097 [2024-11-02 14:51:55.802633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.097 [2024-11-02 14:51:55.802659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.097 qpair failed and we were unable to recover it. 00:36:04.097 [2024-11-02 14:51:55.802804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.097 [2024-11-02 14:51:55.802830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.097 qpair failed and we were unable to recover it. 00:36:04.097 [2024-11-02 14:51:55.802950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.097 [2024-11-02 14:51:55.802976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.097 qpair failed and we were unable to recover it. 00:36:04.097 [2024-11-02 14:51:55.803096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.097 [2024-11-02 14:51:55.803122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.097 qpair failed and we were unable to recover it. 00:36:04.097 [2024-11-02 14:51:55.803363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.097 [2024-11-02 14:51:55.803389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.097 qpair failed and we were unable to recover it. 00:36:04.097 [2024-11-02 14:51:55.803529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.097 [2024-11-02 14:51:55.803559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.097 qpair failed and we were unable to recover it. 00:36:04.097 [2024-11-02 14:51:55.803689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.097 [2024-11-02 14:51:55.803715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.097 qpair failed and we were unable to recover it. 
00:36:04.097 [2024-11-02 14:51:55.803826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.097 [2024-11-02 14:51:55.803854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.098 qpair failed and we were unable to recover it. 00:36:04.098 [2024-11-02 14:51:55.803994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.098 [2024-11-02 14:51:55.804022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.098 qpair failed and we were unable to recover it. 00:36:04.098 [2024-11-02 14:51:55.804147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.098 [2024-11-02 14:51:55.804173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.098 qpair failed and we were unable to recover it. 00:36:04.098 [2024-11-02 14:51:55.804302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.098 [2024-11-02 14:51:55.804328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.098 qpair failed and we were unable to recover it. 00:36:04.098 [2024-11-02 14:51:55.804451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.098 [2024-11-02 14:51:55.804478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.098 qpair failed and we were unable to recover it. 00:36:04.098 [2024-11-02 14:51:55.804598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.098 [2024-11-02 14:51:55.804624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.098 qpair failed and we were unable to recover it. 00:36:04.098 [2024-11-02 14:51:55.804760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.098 [2024-11-02 14:51:55.804786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.098 qpair failed and we were unable to recover it. 00:36:04.098 [2024-11-02 14:51:55.804931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.098 [2024-11-02 14:51:55.804957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.098 qpair failed and we were unable to recover it. 00:36:04.098 [2024-11-02 14:51:55.805072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.098 [2024-11-02 14:51:55.805098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.098 qpair failed and we were unable to recover it. 00:36:04.098 [2024-11-02 14:51:55.805216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.098 [2024-11-02 14:51:55.805242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.098 qpair failed and we were unable to recover it. 
00:36:04.098 [2024-11-02 14:51:55.805394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.098 [2024-11-02 14:51:55.805420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.098 qpair failed and we were unable to recover it. 00:36:04.098 [2024-11-02 14:51:55.805543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.098 [2024-11-02 14:51:55.805569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.098 qpair failed and we were unable to recover it. 00:36:04.098 [2024-11-02 14:51:55.805733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.098 [2024-11-02 14:51:55.805759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.098 qpair failed and we were unable to recover it. 00:36:04.098 [2024-11-02 14:51:55.805877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.098 [2024-11-02 14:51:55.805904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.098 qpair failed and we were unable to recover it. 00:36:04.098 [2024-11-02 14:51:55.806037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.098 [2024-11-02 14:51:55.806063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.098 qpair failed and we were unable to recover it. 00:36:04.098 [2024-11-02 14:51:55.806185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.098 [2024-11-02 14:51:55.806211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.098 qpair failed and we were unable to recover it. 00:36:04.098 [2024-11-02 14:51:55.806332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.098 [2024-11-02 14:51:55.806359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.098 qpair failed and we were unable to recover it. 00:36:04.098 [2024-11-02 14:51:55.806472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.098 [2024-11-02 14:51:55.806498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.098 qpair failed and we were unable to recover it. 00:36:04.098 [2024-11-02 14:51:55.806626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.098 [2024-11-02 14:51:55.806652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.098 qpair failed and we were unable to recover it. 00:36:04.098 [2024-11-02 14:51:55.806773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.098 [2024-11-02 14:51:55.806799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.098 qpair failed and we were unable to recover it. 
00:36:04.098 [2024-11-02 14:51:55.806920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.098 [2024-11-02 14:51:55.806946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.098 qpair failed and we were unable to recover it. 00:36:04.098 [2024-11-02 14:51:55.807067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.098 [2024-11-02 14:51:55.807093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.098 qpair failed and we were unable to recover it. 00:36:04.098 [2024-11-02 14:51:55.807218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.098 [2024-11-02 14:51:55.807244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.098 qpair failed and we were unable to recover it. 00:36:04.098 [2024-11-02 14:51:55.807379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.098 [2024-11-02 14:51:55.807404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.098 qpair failed and we were unable to recover it. 00:36:04.098 [2024-11-02 14:51:55.807552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.098 [2024-11-02 14:51:55.807577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.098 qpair failed and we were unable to recover it. 00:36:04.098 [2024-11-02 14:51:55.807711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.098 [2024-11-02 14:51:55.807737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.098 qpair failed and we were unable to recover it. 00:36:04.098 [2024-11-02 14:51:55.807856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.098 [2024-11-02 14:51:55.807883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.098 qpair failed and we were unable to recover it. 00:36:04.098 [2024-11-02 14:51:55.807995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.098 [2024-11-02 14:51:55.808020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.098 qpair failed and we were unable to recover it. 00:36:04.098 [2024-11-02 14:51:55.808281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.098 [2024-11-02 14:51:55.808321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.098 qpair failed and we were unable to recover it. 00:36:04.098 [2024-11-02 14:51:55.808484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.098 [2024-11-02 14:51:55.808511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.098 qpair failed and we were unable to recover it. 
00:36:04.098 [2024-11-02 14:51:55.808637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.098 [2024-11-02 14:51:55.808663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.098 qpair failed and we were unable to recover it. 00:36:04.098 [2024-11-02 14:51:55.808806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.098 [2024-11-02 14:51:55.808832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.098 qpair failed and we were unable to recover it. 00:36:04.098 [2024-11-02 14:51:55.808951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.098 [2024-11-02 14:51:55.808978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.098 qpair failed and we were unable to recover it. 00:36:04.098 [2024-11-02 14:51:55.809140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.098 [2024-11-02 14:51:55.809166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.098 qpair failed and we were unable to recover it. 00:36:04.098 [2024-11-02 14:51:55.809294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.098 [2024-11-02 14:51:55.809322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.098 qpair failed and we were unable to recover it. 00:36:04.098 [2024-11-02 14:51:55.809446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.098 [2024-11-02 14:51:55.809473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.098 qpair failed and we were unable to recover it. 00:36:04.098 [2024-11-02 14:51:55.809598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.098 [2024-11-02 14:51:55.809624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.098 qpair failed and we were unable to recover it. 00:36:04.098 [2024-11-02 14:51:55.809743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.098 [2024-11-02 14:51:55.809768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.098 qpair failed and we were unable to recover it. 00:36:04.098 [2024-11-02 14:51:55.809889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.098 [2024-11-02 14:51:55.809920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.098 qpair failed and we were unable to recover it. 00:36:04.098 [2024-11-02 14:51:55.810050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.099 [2024-11-02 14:51:55.810076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.099 qpair failed and we were unable to recover it. 
00:36:04.099 [2024-11-02 14:51:55.810186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.099 [2024-11-02 14:51:55.810213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.099 qpair failed and we were unable to recover it. 00:36:04.099 [2024-11-02 14:51:55.810342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.099 [2024-11-02 14:51:55.810369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.099 qpair failed and we were unable to recover it. 00:36:04.099 [2024-11-02 14:51:55.810546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.099 [2024-11-02 14:51:55.810572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.099 qpair failed and we were unable to recover it. 00:36:04.099 [2024-11-02 14:51:55.810693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.099 [2024-11-02 14:51:55.810720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.099 qpair failed and we were unable to recover it. 00:36:04.099 [2024-11-02 14:51:55.810837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.099 [2024-11-02 14:51:55.810863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.099 qpair failed and we were unable to recover it. 00:36:04.099 [2024-11-02 14:51:55.810986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.099 [2024-11-02 14:51:55.811012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.099 qpair failed and we were unable to recover it. 00:36:04.099 [2024-11-02 14:51:55.811134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.099 [2024-11-02 14:51:55.811161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.099 qpair failed and we were unable to recover it. 00:36:04.099 [2024-11-02 14:51:55.811310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.099 [2024-11-02 14:51:55.811337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.099 qpair failed and we were unable to recover it. 00:36:04.099 [2024-11-02 14:51:55.811463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.099 [2024-11-02 14:51:55.811488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.099 qpair failed and we were unable to recover it. 00:36:04.099 [2024-11-02 14:51:55.811609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.099 [2024-11-02 14:51:55.811636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.099 qpair failed and we were unable to recover it. 
00:36:04.099 [2024-11-02 14:51:55.811764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.099 [2024-11-02 14:51:55.811789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.099 qpair failed and we were unable to recover it. 00:36:04.099 [2024-11-02 14:51:55.811910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.099 [2024-11-02 14:51:55.811937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.099 qpair failed and we were unable to recover it. 00:36:04.099 [2024-11-02 14:51:55.812064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.099 [2024-11-02 14:51:55.812091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.099 qpair failed and we were unable to recover it. 00:36:04.099 [2024-11-02 14:51:55.812214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.099 [2024-11-02 14:51:55.812240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.099 qpair failed and we were unable to recover it. 00:36:04.099 [2024-11-02 14:51:55.812374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.099 [2024-11-02 14:51:55.812401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.099 qpair failed and we were unable to recover it. 00:36:04.099 [2024-11-02 14:51:55.812536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.099 [2024-11-02 14:51:55.812562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.099 qpair failed and we were unable to recover it. 00:36:04.099 [2024-11-02 14:51:55.812686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.099 [2024-11-02 14:51:55.812711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.099 qpair failed and we were unable to recover it. 00:36:04.099 [2024-11-02 14:51:55.812858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.099 [2024-11-02 14:51:55.812884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.099 qpair failed and we were unable to recover it. 00:36:04.099 [2024-11-02 14:51:55.813031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.099 [2024-11-02 14:51:55.813059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.099 qpair failed and we were unable to recover it. 00:36:04.099 [2024-11-02 14:51:55.813187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.099 [2024-11-02 14:51:55.813213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.099 qpair failed and we were unable to recover it. 
00:36:04.099 [2024-11-02 14:51:55.813371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.099 [2024-11-02 14:51:55.813397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.099 qpair failed and we were unable to recover it. 00:36:04.099 [2024-11-02 14:51:55.813517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.099 [2024-11-02 14:51:55.813543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.099 qpair failed and we were unable to recover it. 00:36:04.099 [2024-11-02 14:51:55.813692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.099 [2024-11-02 14:51:55.813717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.099 qpair failed and we were unable to recover it. 00:36:04.099 [2024-11-02 14:51:55.813853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.099 [2024-11-02 14:51:55.813879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.099 qpair failed and we were unable to recover it. 00:36:04.099 [2024-11-02 14:51:55.814007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.099 [2024-11-02 14:51:55.814033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.099 qpair failed and we were unable to recover it. 00:36:04.099 [2024-11-02 14:51:55.814184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.099 [2024-11-02 14:51:55.814210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.099 qpair failed and we were unable to recover it. 00:36:04.099 [2024-11-02 14:51:55.814338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.099 [2024-11-02 14:51:55.814365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.099 qpair failed and we were unable to recover it. 00:36:04.099 [2024-11-02 14:51:55.814492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.099 [2024-11-02 14:51:55.814517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.099 qpair failed and we were unable to recover it. 00:36:04.099 [2024-11-02 14:51:55.814664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.099 [2024-11-02 14:51:55.814690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.099 qpair failed and we were unable to recover it. 00:36:04.099 [2024-11-02 14:51:55.814807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.099 [2024-11-02 14:51:55.814833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.099 qpair failed and we were unable to recover it. 
00:36:04.099 [2024-11-02 14:51:55.815029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.099 [2024-11-02 14:51:55.815055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.099 qpair failed and we were unable to recover it. 00:36:04.099 [2024-11-02 14:51:55.815171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.099 [2024-11-02 14:51:55.815198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.099 qpair failed and we were unable to recover it. 00:36:04.099 [2024-11-02 14:51:55.815357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.099 [2024-11-02 14:51:55.815384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.099 qpair failed and we were unable to recover it. 00:36:04.099 [2024-11-02 14:51:55.815523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.099 [2024-11-02 14:51:55.815551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.099 qpair failed and we were unable to recover it. 00:36:04.099 [2024-11-02 14:51:55.815720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.099 [2024-11-02 14:51:55.815746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.099 qpair failed and we were unable to recover it. 00:36:04.099 [2024-11-02 14:51:55.815859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.099 [2024-11-02 14:51:55.815884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.099 qpair failed and we were unable to recover it. 00:36:04.099 [2024-11-02 14:51:55.816034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.099 [2024-11-02 14:51:55.816060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.099 qpair failed and we were unable to recover it. 00:36:04.099 [2024-11-02 14:51:55.816212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.099 [2024-11-02 14:51:55.816237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.100 qpair failed and we were unable to recover it. 00:36:04.100 [2024-11-02 14:51:55.816361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.100 [2024-11-02 14:51:55.816393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.100 qpair failed and we were unable to recover it. 00:36:04.100 [2024-11-02 14:51:55.816523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.100 [2024-11-02 14:51:55.816549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.100 qpair failed and we were unable to recover it. 
00:36:04.100 [2024-11-02 14:51:55.816669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.100 [2024-11-02 14:51:55.816695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.100 qpair failed and we were unable to recover it. 00:36:04.100 [2024-11-02 14:51:55.816821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.100 [2024-11-02 14:51:55.816848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.100 qpair failed and we were unable to recover it. 00:36:04.100 [2024-11-02 14:51:55.816972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.100 [2024-11-02 14:51:55.817000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.100 qpair failed and we were unable to recover it. 00:36:04.100 [2024-11-02 14:51:55.817149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.100 [2024-11-02 14:51:55.817175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.100 qpair failed and we were unable to recover it. 00:36:04.100 [2024-11-02 14:51:55.817313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.100 [2024-11-02 14:51:55.817339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.100 qpair failed and we were unable to recover it. 00:36:04.100 [2024-11-02 14:51:55.817464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.100 [2024-11-02 14:51:55.817489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.100 qpair failed and we were unable to recover it. 00:36:04.100 [2024-11-02 14:51:55.817604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.100 [2024-11-02 14:51:55.817630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.100 qpair failed and we were unable to recover it. 00:36:04.100 [2024-11-02 14:51:55.817752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.100 [2024-11-02 14:51:55.817777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.100 qpair failed and we were unable to recover it. 00:36:04.100 [2024-11-02 14:51:55.817904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.100 [2024-11-02 14:51:55.817930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.100 qpair failed and we were unable to recover it. 00:36:04.100 [2024-11-02 14:51:55.818082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.100 [2024-11-02 14:51:55.818108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.100 qpair failed and we were unable to recover it. 
00:36:04.100 [2024-11-02 14:51:55.818228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.100 [2024-11-02 14:51:55.818254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.100 qpair failed and we were unable to recover it. 00:36:04.100 [2024-11-02 14:51:55.818488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.100 [2024-11-02 14:51:55.818514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.100 qpair failed and we were unable to recover it. 00:36:04.100 [2024-11-02 14:51:55.818636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.100 [2024-11-02 14:51:55.818663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.100 qpair failed and we were unable to recover it. 00:36:04.100 [2024-11-02 14:51:55.818781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.100 [2024-11-02 14:51:55.818807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.100 qpair failed and we were unable to recover it. 00:36:04.100 [2024-11-02 14:51:55.818938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.100 [2024-11-02 14:51:55.818963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.100 qpair failed and we were unable to recover it. 00:36:04.100 [2024-11-02 14:51:55.819088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.100 [2024-11-02 14:51:55.819115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.100 qpair failed and we were unable to recover it. 00:36:04.100 [2024-11-02 14:51:55.819237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.100 [2024-11-02 14:51:55.819269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.100 qpair failed and we were unable to recover it. 00:36:04.100 [2024-11-02 14:51:55.819386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.100 [2024-11-02 14:51:55.819412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.100 qpair failed and we were unable to recover it. 00:36:04.100 [2024-11-02 14:51:55.819545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.100 [2024-11-02 14:51:55.819571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.100 qpair failed and we were unable to recover it. 00:36:04.100 [2024-11-02 14:51:55.819686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.100 [2024-11-02 14:51:55.819712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.100 qpair failed and we were unable to recover it. 
00:36:04.100 [2024-11-02 14:51:55.819829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.100 [2024-11-02 14:51:55.819855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.100 qpair failed and we were unable to recover it. 00:36:04.100 [2024-11-02 14:51:55.819973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.100 [2024-11-02 14:51:55.819999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.100 qpair failed and we were unable to recover it. 00:36:04.100 [2024-11-02 14:51:55.820120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.100 [2024-11-02 14:51:55.820147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.100 qpair failed and we were unable to recover it. 00:36:04.100 [2024-11-02 14:51:55.820305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.100 [2024-11-02 14:51:55.820331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.100 qpair failed and we were unable to recover it. 00:36:04.100 [2024-11-02 14:51:55.820537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.100 [2024-11-02 14:51:55.820563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.100 qpair failed and we were unable to recover it. 00:36:04.100 [2024-11-02 14:51:55.820720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.100 [2024-11-02 14:51:55.820745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.100 qpair failed and we were unable to recover it. 00:36:04.100 [2024-11-02 14:51:55.820861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.100 [2024-11-02 14:51:55.820888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.100 qpair failed and we were unable to recover it. 00:36:04.100 [2024-11-02 14:51:55.821040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.100 [2024-11-02 14:51:55.821065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.100 qpair failed and we were unable to recover it. 00:36:04.100 [2024-11-02 14:51:55.821211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.100 [2024-11-02 14:51:55.821236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.100 qpair failed and we were unable to recover it. 00:36:04.100 [2024-11-02 14:51:55.821413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.100 [2024-11-02 14:51:55.821442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.100 qpair failed and we were unable to recover it. 
00:36:04.100 [2024-11-02 14:51:55.821561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.100 [2024-11-02 14:51:55.821587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.100 qpair failed and we were unable to recover it. 00:36:04.100 [2024-11-02 14:51:55.821731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.100 [2024-11-02 14:51:55.821756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.100 qpair failed and we were unable to recover it. 00:36:04.100 [2024-11-02 14:51:55.821903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.100 [2024-11-02 14:51:55.821929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.100 qpair failed and we were unable to recover it. 00:36:04.100 [2024-11-02 14:51:55.822070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.100 [2024-11-02 14:51:55.822095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.100 qpair failed and we were unable to recover it. 00:36:04.100 [2024-11-02 14:51:55.822240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.100 [2024-11-02 14:51:55.822272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.100 qpair failed and we were unable to recover it. 00:36:04.100 [2024-11-02 14:51:55.822420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.100 [2024-11-02 14:51:55.822446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.100 qpair failed and we were unable to recover it. 00:36:04.101 [2024-11-02 14:51:55.822594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.101 [2024-11-02 14:51:55.822620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.101 qpair failed and we were unable to recover it. 00:36:04.101 [2024-11-02 14:51:55.822742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.101 [2024-11-02 14:51:55.822767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.101 qpair failed and we were unable to recover it. 00:36:04.101 [2024-11-02 14:51:55.822887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.101 [2024-11-02 14:51:55.822920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.101 qpair failed and we were unable to recover it. 00:36:04.101 [2024-11-02 14:51:55.823150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.101 [2024-11-02 14:51:55.823176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.101 qpair failed and we were unable to recover it. 
00:36:04.101 [2024-11-02 14:51:55.823328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.101 [2024-11-02 14:51:55.823354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420
00:36:04.101 qpair failed and we were unable to recover it.
00:36:04.106 [the same three-line error pattern -- connect() failed, errno = 111 in posix.c:1055:posix_sock_create, followed by nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock reporting a sock connection error, then "qpair failed and we were unable to recover it." -- repeats continuously from 14:51:55.823 through 14:51:55.858 for tqpair=0x7f54c0000b90, tqpair=0x7f54c8000b90, and tqpair=0x648340, with every attempt targeting addr=10.0.0.2, port=4420]
00:36:04.106 [2024-11-02 14:51:55.858230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.106 [2024-11-02 14:51:55.858262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.106 qpair failed and we were unable to recover it. 00:36:04.106 [2024-11-02 14:51:55.858414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.106 [2024-11-02 14:51:55.858440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.106 qpair failed and we were unable to recover it. 00:36:04.106 [2024-11-02 14:51:55.858559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.106 [2024-11-02 14:51:55.858590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.106 qpair failed and we were unable to recover it. 00:36:04.106 [2024-11-02 14:51:55.858716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.106 [2024-11-02 14:51:55.858742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.106 qpair failed and we were unable to recover it. 00:36:04.106 [2024-11-02 14:51:55.858855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.106 [2024-11-02 14:51:55.858881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.106 qpair failed and we were unable to recover it. 00:36:04.106 [2024-11-02 14:51:55.858994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.106 [2024-11-02 14:51:55.859019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.106 qpair failed and we were unable to recover it. 00:36:04.106 [2024-11-02 14:51:55.859143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-11-02 14:51:55.859168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.107 qpair failed and we were unable to recover it. 00:36:04.107 [2024-11-02 14:51:55.859291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-11-02 14:51:55.859318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.107 qpair failed and we were unable to recover it. 00:36:04.107 [2024-11-02 14:51:55.859432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-11-02 14:51:55.859458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.107 qpair failed and we were unable to recover it. 00:36:04.107 [2024-11-02 14:51:55.859607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-11-02 14:51:55.859632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.107 qpair failed and we were unable to recover it. 
00:36:04.107 [2024-11-02 14:51:55.859752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-11-02 14:51:55.859779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.107 qpair failed and we were unable to recover it. 00:36:04.107 [2024-11-02 14:51:55.859902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-11-02 14:51:55.859928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.107 qpair failed and we were unable to recover it. 00:36:04.107 [2024-11-02 14:51:55.860052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-11-02 14:51:55.860077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.107 qpair failed and we were unable to recover it. 00:36:04.107 [2024-11-02 14:51:55.860199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-11-02 14:51:55.860225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.107 qpair failed and we were unable to recover it. 00:36:04.107 [2024-11-02 14:51:55.860395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-11-02 14:51:55.860422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.107 qpair failed and we were unable to recover it. 00:36:04.107 [2024-11-02 14:51:55.860555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-11-02 14:51:55.860581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.107 qpair failed and we were unable to recover it. 00:36:04.107 [2024-11-02 14:51:55.860732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-11-02 14:51:55.860758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.107 qpair failed and we were unable to recover it. 00:36:04.107 [2024-11-02 14:51:55.860890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-11-02 14:51:55.860915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.107 qpair failed and we were unable to recover it. 00:36:04.107 [2024-11-02 14:51:55.861067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-11-02 14:51:55.861093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.107 qpair failed and we were unable to recover it. 00:36:04.107 [2024-11-02 14:51:55.861215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-11-02 14:51:55.861241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.107 qpair failed and we were unable to recover it. 
00:36:04.107 [2024-11-02 14:51:55.861390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-11-02 14:51:55.861429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.107 qpair failed and we were unable to recover it. 00:36:04.107 [2024-11-02 14:51:55.861560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-11-02 14:51:55.861587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.107 qpair failed and we were unable to recover it. 00:36:04.107 [2024-11-02 14:51:55.861765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-11-02 14:51:55.861791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.107 qpair failed and we were unable to recover it. 00:36:04.107 [2024-11-02 14:51:55.861920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-11-02 14:51:55.861946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.107 qpair failed and we were unable to recover it. 00:36:04.107 [2024-11-02 14:51:55.862073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-11-02 14:51:55.862099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.107 qpair failed and we were unable to recover it. 00:36:04.107 [2024-11-02 14:51:55.862219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-11-02 14:51:55.862244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.107 qpair failed and we were unable to recover it. 00:36:04.107 [2024-11-02 14:51:55.862372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-11-02 14:51:55.862398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.107 qpair failed and we were unable to recover it. 00:36:04.107 [2024-11-02 14:51:55.862515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-11-02 14:51:55.862541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.107 qpair failed and we were unable to recover it. 00:36:04.107 [2024-11-02 14:51:55.862694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-11-02 14:51:55.862719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.107 qpair failed and we were unable to recover it. 00:36:04.107 [2024-11-02 14:51:55.862839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-11-02 14:51:55.862872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.107 qpair failed and we were unable to recover it. 
00:36:04.107 [2024-11-02 14:51:55.863024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-11-02 14:51:55.863050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.107 qpair failed and we were unable to recover it. 00:36:04.107 [2024-11-02 14:51:55.863178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-11-02 14:51:55.863204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.107 qpair failed and we were unable to recover it. 00:36:04.107 [2024-11-02 14:51:55.863322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-11-02 14:51:55.863348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.107 qpair failed and we were unable to recover it. 00:36:04.107 [2024-11-02 14:51:55.863478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-11-02 14:51:55.863504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.107 qpair failed and we were unable to recover it. 00:36:04.107 [2024-11-02 14:51:55.863628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-11-02 14:51:55.863654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.107 qpair failed and we were unable to recover it. 00:36:04.107 [2024-11-02 14:51:55.863766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-11-02 14:51:55.863791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.107 qpair failed and we were unable to recover it. 00:36:04.107 [2024-11-02 14:51:55.863903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-11-02 14:51:55.863929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.107 qpair failed and we were unable to recover it. 00:36:04.107 [2024-11-02 14:51:55.864103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-11-02 14:51:55.864130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.107 qpair failed and we were unable to recover it. 00:36:04.107 [2024-11-02 14:51:55.864312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-11-02 14:51:55.864340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.107 qpair failed and we were unable to recover it. 00:36:04.107 [2024-11-02 14:51:55.864461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-11-02 14:51:55.864486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.107 qpair failed and we were unable to recover it. 
00:36:04.107 [2024-11-02 14:51:55.864605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-11-02 14:51:55.864630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.107 qpair failed and we were unable to recover it. 00:36:04.107 [2024-11-02 14:51:55.864780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-11-02 14:51:55.864806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.107 qpair failed and we were unable to recover it. 00:36:04.107 [2024-11-02 14:51:55.864927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-11-02 14:51:55.864954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.107 qpair failed and we were unable to recover it. 00:36:04.107 [2024-11-02 14:51:55.865083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-11-02 14:51:55.865109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.107 qpair failed and we were unable to recover it. 00:36:04.107 [2024-11-02 14:51:55.865268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-11-02 14:51:55.865295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.108 qpair failed and we were unable to recover it. 00:36:04.108 [2024-11-02 14:51:55.865408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-11-02 14:51:55.865434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.108 qpair failed and we were unable to recover it. 00:36:04.108 [2024-11-02 14:51:55.865554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-11-02 14:51:55.865579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.108 qpair failed and we were unable to recover it. 00:36:04.108 [2024-11-02 14:51:55.865720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-11-02 14:51:55.865746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.108 qpair failed and we were unable to recover it. 00:36:04.108 [2024-11-02 14:51:55.865891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-11-02 14:51:55.865916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.108 qpair failed and we were unable to recover it. 00:36:04.108 [2024-11-02 14:51:55.866044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-11-02 14:51:55.866083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.108 qpair failed and we were unable to recover it. 
00:36:04.108 [2024-11-02 14:51:55.866254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-11-02 14:51:55.866297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.108 qpair failed and we were unable to recover it. 00:36:04.108 [2024-11-02 14:51:55.866528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-11-02 14:51:55.866554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.108 qpair failed and we were unable to recover it. 00:36:04.108 [2024-11-02 14:51:55.866678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-11-02 14:51:55.866704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.108 qpair failed and we were unable to recover it. 00:36:04.108 [2024-11-02 14:51:55.866823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-11-02 14:51:55.866849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.108 qpair failed and we were unable to recover it. 00:36:04.108 [2024-11-02 14:51:55.867014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-11-02 14:51:55.867040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.108 qpair failed and we were unable to recover it. 00:36:04.108 [2024-11-02 14:51:55.867175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-11-02 14:51:55.867202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.108 qpair failed and we were unable to recover it. 00:36:04.108 [2024-11-02 14:51:55.867345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-11-02 14:51:55.867381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.108 qpair failed and we were unable to recover it. 00:36:04.108 [2024-11-02 14:51:55.867533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-11-02 14:51:55.867560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.108 qpair failed and we were unable to recover it. 00:36:04.108 [2024-11-02 14:51:55.867689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-11-02 14:51:55.867715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.108 qpair failed and we were unable to recover it. 00:36:04.108 [2024-11-02 14:51:55.867829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-11-02 14:51:55.867855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.108 qpair failed and we were unable to recover it. 
00:36:04.108 [2024-11-02 14:51:55.868000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-11-02 14:51:55.868026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.108 qpair failed and we were unable to recover it. 00:36:04.108 [2024-11-02 14:51:55.868147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-11-02 14:51:55.868174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.108 qpair failed and we were unable to recover it. 00:36:04.108 [2024-11-02 14:51:55.868335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-11-02 14:51:55.868361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.108 qpair failed and we were unable to recover it. 00:36:04.108 [2024-11-02 14:51:55.868475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-11-02 14:51:55.868501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.108 qpair failed and we were unable to recover it. 00:36:04.108 [2024-11-02 14:51:55.868647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-11-02 14:51:55.868673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.108 qpair failed and we were unable to recover it. 00:36:04.108 [2024-11-02 14:51:55.868821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-11-02 14:51:55.868848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.108 qpair failed and we were unable to recover it. 00:36:04.108 [2024-11-02 14:51:55.868978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-11-02 14:51:55.869003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.108 qpair failed and we were unable to recover it. 00:36:04.108 [2024-11-02 14:51:55.869165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-11-02 14:51:55.869192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.108 qpair failed and we were unable to recover it. 00:36:04.108 [2024-11-02 14:51:55.869322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-11-02 14:51:55.869349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.108 qpair failed and we were unable to recover it. 00:36:04.108 [2024-11-02 14:51:55.869494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-11-02 14:51:55.869520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.108 qpair failed and we were unable to recover it. 
00:36:04.108 [2024-11-02 14:51:55.869646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-11-02 14:51:55.869671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.108 qpair failed and we were unable to recover it. 00:36:04.108 [2024-11-02 14:51:55.869787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-11-02 14:51:55.869813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.108 qpair failed and we were unable to recover it. 00:36:04.108 [2024-11-02 14:51:55.869943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-11-02 14:51:55.869969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.108 qpair failed and we were unable to recover it. 00:36:04.108 [2024-11-02 14:51:55.870109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-11-02 14:51:55.870134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.108 qpair failed and we were unable to recover it. 00:36:04.108 [2024-11-02 14:51:55.870276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-11-02 14:51:55.870315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.108 qpair failed and we were unable to recover it. 00:36:04.108 [2024-11-02 14:51:55.870443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-11-02 14:51:55.870470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.108 qpair failed and we were unable to recover it. 00:36:04.108 [2024-11-02 14:51:55.870593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-11-02 14:51:55.870619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.108 qpair failed and we were unable to recover it. 00:36:04.108 [2024-11-02 14:51:55.870765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-11-02 14:51:55.870790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.108 qpair failed and we were unable to recover it. 00:36:04.108 [2024-11-02 14:51:55.870904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-11-02 14:51:55.870930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.108 qpair failed and we were unable to recover it. 00:36:04.108 [2024-11-02 14:51:55.871078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-11-02 14:51:55.871103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.108 qpair failed and we were unable to recover it. 
00:36:04.108 [2024-11-02 14:51:55.871219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-11-02 14:51:55.871246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.108 qpair failed and we were unable to recover it. 00:36:04.108 [2024-11-02 14:51:55.871376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-11-02 14:51:55.871405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.108 qpair failed and we were unable to recover it. 00:36:04.108 [2024-11-02 14:51:55.871525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-11-02 14:51:55.871550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.108 qpair failed and we were unable to recover it. 00:36:04.108 [2024-11-02 14:51:55.871671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-11-02 14:51:55.871697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.109 qpair failed and we were unable to recover it. 00:36:04.109 [2024-11-02 14:51:55.871828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-11-02 14:51:55.871853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.109 qpair failed and we were unable to recover it. 00:36:04.109 [2024-11-02 14:51:55.872013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-11-02 14:51:55.872041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.109 qpair failed and we were unable to recover it. 00:36:04.109 [2024-11-02 14:51:55.872174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-11-02 14:51:55.872199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.109 qpair failed and we were unable to recover it. 00:36:04.109 [2024-11-02 14:51:55.872335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-11-02 14:51:55.872361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.109 qpair failed and we were unable to recover it. 00:36:04.109 [2024-11-02 14:51:55.872484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-11-02 14:51:55.872509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.109 qpair failed and we were unable to recover it. 00:36:04.109 [2024-11-02 14:51:55.872659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-11-02 14:51:55.872685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.109 qpair failed and we were unable to recover it. 
00:36:04.109 [2024-11-02 14:51:55.872801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-11-02 14:51:55.872827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.109 qpair failed and we were unable to recover it. 00:36:04.109 [2024-11-02 14:51:55.872960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-11-02 14:51:55.872986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.109 qpair failed and we were unable to recover it. 00:36:04.109 [2024-11-02 14:51:55.873160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-11-02 14:51:55.873186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.109 qpair failed and we were unable to recover it. 00:36:04.109 [2024-11-02 14:51:55.873320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-11-02 14:51:55.873346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.109 qpair failed and we were unable to recover it. 00:36:04.109 [2024-11-02 14:51:55.873484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-11-02 14:51:55.873511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.109 qpair failed and we were unable to recover it. 00:36:04.109 [2024-11-02 14:51:55.873633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-11-02 14:51:55.873659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.109 qpair failed and we were unable to recover it. 00:36:04.109 [2024-11-02 14:51:55.873797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-11-02 14:51:55.873827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.109 qpair failed and we were unable to recover it. 00:36:04.109 [2024-11-02 14:51:55.873959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-11-02 14:51:55.873987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.109 qpair failed and we were unable to recover it. 00:36:04.109 [2024-11-02 14:51:55.874113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-11-02 14:51:55.874139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.109 qpair failed and we were unable to recover it. 00:36:04.109 [2024-11-02 14:51:55.874262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-11-02 14:51:55.874288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.109 qpair failed and we were unable to recover it. 
00:36:04.109 [2024-11-02 14:51:55.874435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-11-02 14:51:55.874461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.109 qpair failed and we were unable to recover it. 00:36:04.109 [2024-11-02 14:51:55.874614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-11-02 14:51:55.874640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.109 qpair failed and we were unable to recover it. 00:36:04.109 [2024-11-02 14:51:55.874798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-11-02 14:51:55.874823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.109 qpair failed and we were unable to recover it. 00:36:04.109 [2024-11-02 14:51:55.874953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-11-02 14:51:55.874981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.109 qpair failed and we were unable to recover it. 00:36:04.109 [2024-11-02 14:51:55.875142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-11-02 14:51:55.875181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.109 qpair failed and we were unable to recover it. 00:36:04.109 [2024-11-02 14:51:55.875347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-11-02 14:51:55.875375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.109 qpair failed and we were unable to recover it. 00:36:04.109 [2024-11-02 14:51:55.875527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-11-02 14:51:55.875554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.109 qpair failed and we were unable to recover it. 00:36:04.109 [2024-11-02 14:51:55.875676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-11-02 14:51:55.875701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.109 qpair failed and we were unable to recover it. 00:36:04.109 [2024-11-02 14:51:55.875825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-11-02 14:51:55.875850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.109 qpair failed and we were unable to recover it. 00:36:04.109 [2024-11-02 14:51:55.875977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-11-02 14:51:55.876005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.109 qpair failed and we were unable to recover it. 
00:36:04.109 [2024-11-02 14:51:55.876128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-11-02 14:51:55.876153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.109 qpair failed and we were unable to recover it. 00:36:04.109 [2024-11-02 14:51:55.876305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-11-02 14:51:55.876332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.109 qpair failed and we were unable to recover it. 00:36:04.109 [2024-11-02 14:51:55.876455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-11-02 14:51:55.876481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.109 qpair failed and we were unable to recover it. 00:36:04.109 [2024-11-02 14:51:55.876627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-11-02 14:51:55.876653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.109 qpair failed and we were unable to recover it. 00:36:04.109 [2024-11-02 14:51:55.876765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-11-02 14:51:55.876791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.109 qpair failed and we were unable to recover it. 00:36:04.109 [2024-11-02 14:51:55.876919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-11-02 14:51:55.876946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.109 qpair failed and we were unable to recover it. 00:36:04.109 [2024-11-02 14:51:55.877102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-11-02 14:51:55.877130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.109 qpair failed and we were unable to recover it. 00:36:04.109 [2024-11-02 14:51:55.877247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-11-02 14:51:55.877280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.109 qpair failed and we were unable to recover it. 00:36:04.109 [2024-11-02 14:51:55.877406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-11-02 14:51:55.877434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.109 qpair failed and we were unable to recover it. 00:36:04.109 [2024-11-02 14:51:55.877580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-11-02 14:51:55.877606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.109 qpair failed and we were unable to recover it. 
00:36:04.109 [2024-11-02 14:51:55.877722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-11-02 14:51:55.877748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.109 qpair failed and we were unable to recover it. 00:36:04.109 [2024-11-02 14:51:55.877883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-11-02 14:51:55.877908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.109 qpair failed and we were unable to recover it. 00:36:04.110 [2024-11-02 14:51:55.878027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-11-02 14:51:55.878053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.110 qpair failed and we were unable to recover it. 00:36:04.110 [2024-11-02 14:51:55.878203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-11-02 14:51:55.878230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.110 qpair failed and we were unable to recover it. 00:36:04.110 [2024-11-02 14:51:55.878395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-11-02 14:51:55.878422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.110 qpair failed and we were unable to recover it. 00:36:04.110 [2024-11-02 14:51:55.878538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-11-02 14:51:55.878565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.110 qpair failed and we were unable to recover it. 00:36:04.110 [2024-11-02 14:51:55.878686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-11-02 14:51:55.878712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.110 qpair failed and we were unable to recover it. 00:36:04.110 [2024-11-02 14:51:55.878828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-11-02 14:51:55.878853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.110 qpair failed and we were unable to recover it. 00:36:04.110 [2024-11-02 14:51:55.878977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-11-02 14:51:55.879003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.110 qpair failed and we were unable to recover it. 00:36:04.110 [2024-11-02 14:51:55.879121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-11-02 14:51:55.879147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.110 qpair failed and we were unable to recover it. 
00:36:04.110 [2024-11-02 14:51:55.879306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-11-02 14:51:55.879342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.110 qpair failed and we were unable to recover it. 00:36:04.110 [2024-11-02 14:51:55.879483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-11-02 14:51:55.879509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.110 qpair failed and we were unable to recover it. 00:36:04.110 [2024-11-02 14:51:55.879624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-11-02 14:51:55.879651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.110 qpair failed and we were unable to recover it. 00:36:04.110 [2024-11-02 14:51:55.879797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-11-02 14:51:55.879822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.110 qpair failed and we were unable to recover it. 00:36:04.110 [2024-11-02 14:51:55.879971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-11-02 14:51:55.879997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.110 qpair failed and we were unable to recover it. 00:36:04.110 [2024-11-02 14:51:55.880117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-11-02 14:51:55.880143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.110 qpair failed and we were unable to recover it. 00:36:04.110 [2024-11-02 14:51:55.880272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-11-02 14:51:55.880305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.110 qpair failed and we were unable to recover it. 00:36:04.110 [2024-11-02 14:51:55.880436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-11-02 14:51:55.880462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.110 qpair failed and we were unable to recover it. 00:36:04.110 [2024-11-02 14:51:55.880587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-11-02 14:51:55.880612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.110 qpair failed and we were unable to recover it. 00:36:04.110 [2024-11-02 14:51:55.880740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-11-02 14:51:55.880766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.110 qpair failed and we were unable to recover it. 
00:36:04.110 [2024-11-02 14:51:55.880888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-11-02 14:51:55.880915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.110 qpair failed and we were unable to recover it. 00:36:04.110 [2024-11-02 14:51:55.881050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-11-02 14:51:55.881075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.110 qpair failed and we were unable to recover it. 00:36:04.110 [2024-11-02 14:51:55.881207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-11-02 14:51:55.881232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.110 qpair failed and we were unable to recover it. 00:36:04.110 [2024-11-02 14:51:55.881372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-11-02 14:51:55.881411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.110 qpair failed and we were unable to recover it. 00:36:04.110 [2024-11-02 14:51:55.881540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-11-02 14:51:55.881567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.110 qpair failed and we were unable to recover it. 00:36:04.110 [2024-11-02 14:51:55.881681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-11-02 14:51:55.881706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.110 qpair failed and we were unable to recover it. 00:36:04.110 [2024-11-02 14:51:55.881833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-11-02 14:51:55.881859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.110 qpair failed and we were unable to recover it. 00:36:04.110 [2024-11-02 14:51:55.881990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-11-02 14:51:55.882018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.110 qpair failed and we were unable to recover it. 00:36:04.110 [2024-11-02 14:51:55.882172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-11-02 14:51:55.882197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.110 qpair failed and we were unable to recover it. 00:36:04.110 [2024-11-02 14:51:55.882351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-11-02 14:51:55.882378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.110 qpair failed and we were unable to recover it. 
00:36:04.110 [2024-11-02 14:51:55.882502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-11-02 14:51:55.882528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.110 qpair failed and we were unable to recover it. 00:36:04.110 [2024-11-02 14:51:55.882648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-11-02 14:51:55.882674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.110 qpair failed and we were unable to recover it. 00:36:04.110 [2024-11-02 14:51:55.882789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-11-02 14:51:55.882815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.110 qpair failed and we were unable to recover it. 00:36:04.110 [2024-11-02 14:51:55.882964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-11-02 14:51:55.882992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.110 qpair failed and we were unable to recover it. 00:36:04.110 [2024-11-02 14:51:55.883127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-11-02 14:51:55.883155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.110 qpair failed and we were unable to recover it. 00:36:04.110 [2024-11-02 14:51:55.883282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-11-02 14:51:55.883310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.110 qpair failed and we were unable to recover it. 00:36:04.111 [2024-11-02 14:51:55.883454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-11-02 14:51:55.883480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.111 qpair failed and we were unable to recover it. 00:36:04.111 [2024-11-02 14:51:55.883596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-11-02 14:51:55.883621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.111 qpair failed and we were unable to recover it. 00:36:04.111 [2024-11-02 14:51:55.883793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-11-02 14:51:55.883818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.111 qpair failed and we were unable to recover it. 00:36:04.111 [2024-11-02 14:51:55.883963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-11-02 14:51:55.883989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.111 qpair failed and we were unable to recover it. 
00:36:04.111 [2024-11-02 14:51:55.884115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-11-02 14:51:55.884141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.111 qpair failed and we were unable to recover it. 00:36:04.111 [2024-11-02 14:51:55.884315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-11-02 14:51:55.884355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.111 qpair failed and we were unable to recover it. 00:36:04.111 [2024-11-02 14:51:55.884518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-11-02 14:51:55.884546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.111 qpair failed and we were unable to recover it. 00:36:04.111 [2024-11-02 14:51:55.884679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-11-02 14:51:55.884713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.111 qpair failed and we were unable to recover it. 00:36:04.111 [2024-11-02 14:51:55.884868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-11-02 14:51:55.884894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.111 qpair failed and we were unable to recover it. 00:36:04.111 [2024-11-02 14:51:55.885049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-11-02 14:51:55.885075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.111 qpair failed and we were unable to recover it. 00:36:04.111 [2024-11-02 14:51:55.885187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-11-02 14:51:55.885214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.111 qpair failed and we were unable to recover it. 00:36:04.111 [2024-11-02 14:51:55.885354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-11-02 14:51:55.885382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.111 qpair failed and we were unable to recover it. 00:36:04.111 [2024-11-02 14:51:55.885510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-11-02 14:51:55.885537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.111 qpair failed and we were unable to recover it. 00:36:04.111 [2024-11-02 14:51:55.885684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-11-02 14:51:55.885709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.111 qpair failed and we were unable to recover it. 
00:36:04.111 [2024-11-02 14:51:55.885847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-11-02 14:51:55.885874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.111 qpair failed and we were unable to recover it. 00:36:04.111 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:04.111 [2024-11-02 14:51:55.885994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-11-02 14:51:55.886021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.111 qpair failed and we were unable to recover it. 00:36:04.111 [2024-11-02 14:51:55.886178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-11-02 14:51:55.886204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.111 qpair failed and we were unable to recover it. 00:36:04.111 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:36:04.111 [2024-11-02 14:51:55.886321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-11-02 14:51:55.886349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.111 qpair failed and we were unable to recover it. 00:36:04.111 [2024-11-02 14:51:55.886515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-11-02 14:51:55.886541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.111 qpair failed and we were unable to recover it. 00:36:04.111 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:36:04.111 [2024-11-02 14:51:55.886658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-11-02 14:51:55.886692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.111 qpair failed and we were unable to recover it. 00:36:04.111 [2024-11-02 14:51:55.886841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-11-02 14:51:55.886869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.111 qpair failed and we were unable to recover it. 00:36:04.111 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:04.111 [2024-11-02 14:51:55.887046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-11-02 14:51:55.887072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.111 qpair failed and we were unable to recover it. 
00:36:04.111 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:04.111 [2024-11-02 14:51:55.887199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-11-02 14:51:55.887239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.111 qpair failed and we were unable to recover it. 00:36:04.111 [2024-11-02 14:51:55.887387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-11-02 14:51:55.887415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.111 qpair failed and we were unable to recover it. 00:36:04.111 [2024-11-02 14:51:55.887540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-11-02 14:51:55.887566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.111 qpair failed and we were unable to recover it. 00:36:04.111 [2024-11-02 14:51:55.887682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-11-02 14:51:55.887709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.111 qpair failed and we were unable to recover it. 00:36:04.111 [2024-11-02 14:51:55.887856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-11-02 14:51:55.887883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.111 qpair failed and we were unable to recover it. 00:36:04.111 [2024-11-02 14:51:55.888042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-11-02 14:51:55.888068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.111 qpair failed and we were unable to recover it. 00:36:04.111 [2024-11-02 14:51:55.888219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-11-02 14:51:55.888245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.111 qpair failed and we were unable to recover it. 00:36:04.111 [2024-11-02 14:51:55.888398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-11-02 14:51:55.888428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.111 qpair failed and we were unable to recover it. 00:36:04.111 [2024-11-02 14:51:55.888561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-11-02 14:51:55.888589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.111 qpair failed and we were unable to recover it. 
00:36:04.111 [2024-11-02 14:51:55.888738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-11-02 14:51:55.888764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.111 qpair failed and we were unable to recover it. 00:36:04.111 [2024-11-02 14:51:55.888888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-11-02 14:51:55.888915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.111 qpair failed and we were unable to recover it. 00:36:04.111 [2024-11-02 14:51:55.889042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-11-02 14:51:55.889068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.111 qpair failed and we were unable to recover it. 00:36:04.111 [2024-11-02 14:51:55.889187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-11-02 14:51:55.889212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.111 qpair failed and we were unable to recover it. 00:36:04.111 [2024-11-02 14:51:55.889371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-11-02 14:51:55.889398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.111 qpair failed and we were unable to recover it. 00:36:04.111 [2024-11-02 14:51:55.889513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-11-02 14:51:55.889539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.112 qpair failed and we were unable to recover it. 00:36:04.112 [2024-11-02 14:51:55.889664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-11-02 14:51:55.889690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.112 qpair failed and we were unable to recover it. 00:36:04.112 [2024-11-02 14:51:55.889836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-11-02 14:51:55.889862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.112 qpair failed and we were unable to recover it. 00:36:04.112 [2024-11-02 14:51:55.889986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-11-02 14:51:55.890012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.112 qpair failed and we were unable to recover it. 00:36:04.112 [2024-11-02 14:51:55.890157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-11-02 14:51:55.890183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.112 qpair failed and we were unable to recover it. 
00:36:04.112 [2024-11-02 14:51:55.890342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-11-02 14:51:55.890371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.112 qpair failed and we were unable to recover it. 00:36:04.112 [2024-11-02 14:51:55.890503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-11-02 14:51:55.890529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.112 qpair failed and we were unable to recover it. 00:36:04.112 [2024-11-02 14:51:55.890652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-11-02 14:51:55.890680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.112 qpair failed and we were unable to recover it. 00:36:04.112 [2024-11-02 14:51:55.890859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-11-02 14:51:55.890884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.112 qpair failed and we were unable to recover it. 00:36:04.112 [2024-11-02 14:51:55.891020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-11-02 14:51:55.891050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.112 qpair failed and we were unable to recover it. 00:36:04.112 [2024-11-02 14:51:55.891289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-11-02 14:51:55.891325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.112 qpair failed and we were unable to recover it. 00:36:04.112 [2024-11-02 14:51:55.891457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-11-02 14:51:55.891482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.112 qpair failed and we were unable to recover it. 00:36:04.112 [2024-11-02 14:51:55.891646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-11-02 14:51:55.891671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.112 qpair failed and we were unable to recover it. 00:36:04.112 [2024-11-02 14:51:55.891791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-11-02 14:51:55.891819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.112 qpair failed and we were unable to recover it. 00:36:04.112 [2024-11-02 14:51:55.891945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-11-02 14:51:55.891973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.112 qpair failed and we were unable to recover it. 
00:36:04.112 [2024-11-02 14:51:55.892128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-11-02 14:51:55.892154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.112 qpair failed and we were unable to recover it. 00:36:04.112 [2024-11-02 14:51:55.892301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-11-02 14:51:55.892328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.112 qpair failed and we were unable to recover it. 00:36:04.112 [2024-11-02 14:51:55.892485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-11-02 14:51:55.892522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.112 qpair failed and we were unable to recover it. 00:36:04.112 [2024-11-02 14:51:55.892634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-11-02 14:51:55.892660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.112 qpair failed and we were unable to recover it. 00:36:04.112 [2024-11-02 14:51:55.892807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-11-02 14:51:55.892843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.112 qpair failed and we were unable to recover it. 00:36:04.112 [2024-11-02 14:51:55.892959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-11-02 14:51:55.892985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.112 qpair failed and we were unable to recover it. 00:36:04.112 [2024-11-02 14:51:55.893131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-11-02 14:51:55.893165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.112 qpair failed and we were unable to recover it. 00:36:04.112 [2024-11-02 14:51:55.893312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-11-02 14:51:55.893337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.112 qpair failed and we were unable to recover it. 00:36:04.112 [2024-11-02 14:51:55.893460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-11-02 14:51:55.893487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.112 qpair failed and we were unable to recover it. 00:36:04.112 [2024-11-02 14:51:55.893639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-11-02 14:51:55.893665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.112 qpair failed and we were unable to recover it. 
00:36:04.112 [2024-11-02 14:51:55.893799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-11-02 14:51:55.893824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.112 qpair failed and we were unable to recover it. 00:36:04.112 [2024-11-02 14:51:55.893942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-11-02 14:51:55.893968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.112 qpair failed and we were unable to recover it. 00:36:04.112 [2024-11-02 14:51:55.894122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-11-02 14:51:55.894148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.112 qpair failed and we were unable to recover it. 00:36:04.112 [2024-11-02 14:51:55.894292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-11-02 14:51:55.894319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.112 qpair failed and we were unable to recover it. 00:36:04.112 [2024-11-02 14:51:55.894435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-11-02 14:51:55.894462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.112 qpair failed and we were unable to recover it. 00:36:04.112 [2024-11-02 14:51:55.894582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-11-02 14:51:55.894608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.112 qpair failed and we were unable to recover it. 00:36:04.112 [2024-11-02 14:51:55.894726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-11-02 14:51:55.894751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.112 qpair failed and we were unable to recover it. 00:36:04.112 [2024-11-02 14:51:55.894872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-11-02 14:51:55.894899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.112 qpair failed and we were unable to recover it. 00:36:04.112 [2024-11-02 14:51:55.895031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-11-02 14:51:55.895057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.112 qpair failed and we were unable to recover it. 00:36:04.112 [2024-11-02 14:51:55.895177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-11-02 14:51:55.895202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.112 qpair failed and we were unable to recover it. 
00:36:04.112 [2024-11-02 14:51:55.895395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-11-02 14:51:55.895423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.112 qpair failed and we were unable to recover it. 00:36:04.112 [2024-11-02 14:51:55.895540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-11-02 14:51:55.895571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.112 qpair failed and we were unable to recover it. 00:36:04.112 [2024-11-02 14:51:55.895694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-11-02 14:51:55.895721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.112 qpair failed and we were unable to recover it. 00:36:04.112 [2024-11-02 14:51:55.895838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-11-02 14:51:55.895864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.112 qpair failed and we were unable to recover it. 00:36:04.112 [2024-11-02 14:51:55.896015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.113 [2024-11-02 14:51:55.896041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.113 qpair failed and we were unable to recover it. 00:36:04.113 [2024-11-02 14:51:55.896175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.113 [2024-11-02 14:51:55.896214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.113 qpair failed and we were unable to recover it. 00:36:04.113 [2024-11-02 14:51:55.896347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.113 [2024-11-02 14:51:55.896376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.113 qpair failed and we were unable to recover it. 00:36:04.113 [2024-11-02 14:51:55.896510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.113 [2024-11-02 14:51:55.896537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.113 qpair failed and we were unable to recover it. 00:36:04.113 [2024-11-02 14:51:55.896709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.113 [2024-11-02 14:51:55.896736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.113 qpair failed and we were unable to recover it. 00:36:04.113 [2024-11-02 14:51:55.896880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.113 [2024-11-02 14:51:55.896907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.113 qpair failed and we were unable to recover it. 
00:36:04.113 [2024-11-02 14:51:55.897030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.113 [2024-11-02 14:51:55.897055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.113 qpair failed and we were unable to recover it. 00:36:04.113 [2024-11-02 14:51:55.897195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.113 [2024-11-02 14:51:55.897223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.113 qpair failed and we were unable to recover it. 00:36:04.113 [2024-11-02 14:51:55.897349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.113 [2024-11-02 14:51:55.897377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.113 qpair failed and we were unable to recover it. 00:36:04.113 [2024-11-02 14:51:55.897506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.113 [2024-11-02 14:51:55.897532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.113 qpair failed and we were unable to recover it. 00:36:04.113 [2024-11-02 14:51:55.897656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.113 [2024-11-02 14:51:55.897683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.113 qpair failed and we were unable to recover it. 00:36:04.113 [2024-11-02 14:51:55.897817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.113 [2024-11-02 14:51:55.897843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.113 qpair failed and we were unable to recover it. 00:36:04.113 [2024-11-02 14:51:55.897993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.113 [2024-11-02 14:51:55.898019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.113 qpair failed and we were unable to recover it. 00:36:04.113 [2024-11-02 14:51:55.898179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.113 [2024-11-02 14:51:55.898219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.113 qpair failed and we were unable to recover it. 00:36:04.113 [2024-11-02 14:51:55.898352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.113 [2024-11-02 14:51:55.898379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.113 qpair failed and we were unable to recover it. 00:36:04.113 [2024-11-02 14:51:55.898496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.113 [2024-11-02 14:51:55.898522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.113 qpair failed and we were unable to recover it. 
00:36:04.113 [2024-11-02 14:51:55.898753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.113 [2024-11-02 14:51:55.898779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.113 qpair failed and we were unable to recover it. 00:36:04.113 [2024-11-02 14:51:55.898897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.113 [2024-11-02 14:51:55.898922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.113 qpair failed and we were unable to recover it. 00:36:04.113 [2024-11-02 14:51:55.899074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.113 [2024-11-02 14:51:55.899100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.113 qpair failed and we were unable to recover it. 00:36:04.113 [2024-11-02 14:51:55.899208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.113 [2024-11-02 14:51:55.899234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.113 qpair failed and we were unable to recover it. 00:36:04.113 [2024-11-02 14:51:55.899359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.113 [2024-11-02 14:51:55.899386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.113 qpair failed and we were unable to recover it. 00:36:04.113 [2024-11-02 14:51:55.899538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.113 [2024-11-02 14:51:55.899564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.113 qpair failed and we were unable to recover it. 00:36:04.113 [2024-11-02 14:51:55.899679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.113 [2024-11-02 14:51:55.899705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.113 qpair failed and we were unable to recover it. 00:36:04.113 [2024-11-02 14:51:55.899833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.113 [2024-11-02 14:51:55.899859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.113 qpair failed and we were unable to recover it. 00:36:04.113 [2024-11-02 14:51:55.899988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.113 [2024-11-02 14:51:55.900018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.113 qpair failed and we were unable to recover it. 00:36:04.113 [2024-11-02 14:51:55.900130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.113 [2024-11-02 14:51:55.900157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.113 qpair failed and we were unable to recover it. 
00:36:04.113 [2024-11-02 14:51:55.900308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.113 [2024-11-02 14:51:55.900335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.113 qpair failed and we were unable to recover it. 00:36:04.113 [2024-11-02 14:51:55.900465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.113 [2024-11-02 14:51:55.900490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.113 qpair failed and we were unable to recover it. 00:36:04.113 [2024-11-02 14:51:55.900604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.113 [2024-11-02 14:51:55.900630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.113 qpair failed and we were unable to recover it. 00:36:04.113 [2024-11-02 14:51:55.900773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.113 [2024-11-02 14:51:55.900798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.113 qpair failed and we were unable to recover it. 00:36:04.113 [2024-11-02 14:51:55.900925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.113 [2024-11-02 14:51:55.900951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.113 qpair failed and we were unable to recover it. 00:36:04.113 [2024-11-02 14:51:55.901078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.113 [2024-11-02 14:51:55.901105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.113 qpair failed and we were unable to recover it. 00:36:04.113 [2024-11-02 14:51:55.901219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.113 [2024-11-02 14:51:55.901245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.113 qpair failed and we were unable to recover it. 00:36:04.113 [2024-11-02 14:51:55.901403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.113 [2024-11-02 14:51:55.901430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.113 qpair failed and we were unable to recover it. 00:36:04.113 [2024-11-02 14:51:55.901551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.113 [2024-11-02 14:51:55.901577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.113 qpair failed and we were unable to recover it. 00:36:04.113 [2024-11-02 14:51:55.901695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.113 [2024-11-02 14:51:55.901720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.113 qpair failed and we were unable to recover it. 
00:36:04.113 [2024-11-02 14:51:55.901858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.113 [2024-11-02 14:51:55.901883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.113 qpair failed and we were unable to recover it. 00:36:04.113 [2024-11-02 14:51:55.901995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.113 [2024-11-02 14:51:55.902020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.113 qpair failed and we were unable to recover it. 00:36:04.113 [2024-11-02 14:51:55.902177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.113 [2024-11-02 14:51:55.902204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.113 qpair failed and we were unable to recover it. 00:36:04.114 [2024-11-02 14:51:55.902327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.114 [2024-11-02 14:51:55.902354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.114 qpair failed and we were unable to recover it. 00:36:04.114 [2024-11-02 14:51:55.902482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.114 [2024-11-02 14:51:55.902508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.114 qpair failed and we were unable to recover it. 00:36:04.114 [2024-11-02 14:51:55.902627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.114 [2024-11-02 14:51:55.902653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.114 qpair failed and we were unable to recover it. 00:36:04.114 [2024-11-02 14:51:55.902803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.114 [2024-11-02 14:51:55.902829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.114 qpair failed and we were unable to recover it. 00:36:04.114 [2024-11-02 14:51:55.902940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.114 [2024-11-02 14:51:55.902966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.114 qpair failed and we were unable to recover it. 00:36:04.114 [2024-11-02 14:51:55.903112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.114 [2024-11-02 14:51:55.903137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.114 qpair failed and we were unable to recover it. 00:36:04.114 [2024-11-02 14:51:55.903264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.114 [2024-11-02 14:51:55.903290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.114 qpair failed and we were unable to recover it. 
00:36:04.114 [2024-11-02 14:51:55.903432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.114 [2024-11-02 14:51:55.903459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.114 qpair failed and we were unable to recover it. 00:36:04.114 [2024-11-02 14:51:55.903577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.114 [2024-11-02 14:51:55.903603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.114 qpair failed and we were unable to recover it. 00:36:04.114 [2024-11-02 14:51:55.903720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.114 [2024-11-02 14:51:55.903746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.114 qpair failed and we were unable to recover it. 00:36:04.114 [2024-11-02 14:51:55.903861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.114 [2024-11-02 14:51:55.903886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.114 qpair failed and we were unable to recover it. 00:36:04.114 [2024-11-02 14:51:55.904029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.114 [2024-11-02 14:51:55.904054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.114 qpair failed and we were unable to recover it. 00:36:04.114 [2024-11-02 14:51:55.904176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.114 [2024-11-02 14:51:55.904206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.114 qpair failed and we were unable to recover it. 00:36:04.114 [2024-11-02 14:51:55.904375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.114 [2024-11-02 14:51:55.904416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.114 qpair failed and we were unable to recover it. 00:36:04.114 [2024-11-02 14:51:55.904542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.114 [2024-11-02 14:51:55.904569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.114 qpair failed and we were unable to recover it. 00:36:04.114 [2024-11-02 14:51:55.904692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.114 [2024-11-02 14:51:55.904718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.114 qpair failed and we were unable to recover it. 00:36:04.114 [2024-11-02 14:51:55.904866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.114 [2024-11-02 14:51:55.904893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.114 qpair failed and we were unable to recover it. 
00:36:04.114 [2024-11-02 14:51:55.905013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.114 [2024-11-02 14:51:55.905041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.114 qpair failed and we were unable to recover it. 00:36:04.114 [2024-11-02 14:51:55.905162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.114 [2024-11-02 14:51:55.905188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.114 qpair failed and we were unable to recover it. 00:36:04.114 [2024-11-02 14:51:55.905333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.114 [2024-11-02 14:51:55.905360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.114 qpair failed and we were unable to recover it. 00:36:04.114 [2024-11-02 14:51:55.905482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.114 [2024-11-02 14:51:55.905509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.114 qpair failed and we were unable to recover it. 00:36:04.114 [2024-11-02 14:51:55.905629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.114 [2024-11-02 14:51:55.905655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.114 qpair failed and we were unable to recover it. 00:36:04.114 [2024-11-02 14:51:55.905762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.114 [2024-11-02 14:51:55.905787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.114 qpair failed and we were unable to recover it. 00:36:04.114 [2024-11-02 14:51:55.905911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.114 [2024-11-02 14:51:55.905938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.114 qpair failed and we were unable to recover it. 00:36:04.114 [2024-11-02 14:51:55.906093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.114 [2024-11-02 14:51:55.906118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.114 qpair failed and we were unable to recover it. 00:36:04.114 [2024-11-02 14:51:55.906241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.114 [2024-11-02 14:51:55.906272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.114 qpair failed and we were unable to recover it. 00:36:04.114 [2024-11-02 14:51:55.906431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.114 [2024-11-02 14:51:55.906457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.114 qpair failed and we were unable to recover it. 
00:36:04.114 [2024-11-02 14:51:55.906599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.114 [2024-11-02 14:51:55.906624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.114 qpair failed and we were unable to recover it. 00:36:04.114 [2024-11-02 14:51:55.906739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.114 [2024-11-02 14:51:55.906764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.114 qpair failed and we were unable to recover it. 00:36:04.114 [2024-11-02 14:51:55.906916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.114 [2024-11-02 14:51:55.906941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.114 qpair failed and we were unable to recover it. 00:36:04.114 [2024-11-02 14:51:55.907098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.114 [2024-11-02 14:51:55.907123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.114 qpair failed and we were unable to recover it. 00:36:04.114 [2024-11-02 14:51:55.907239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.114 [2024-11-02 14:51:55.907276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.114 qpair failed and we were unable to recover it. 00:36:04.114 [2024-11-02 14:51:55.907395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.114 [2024-11-02 14:51:55.907420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.114 qpair failed and we were unable to recover it. 00:36:04.114 [2024-11-02 14:51:55.907537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.114 [2024-11-02 14:51:55.907562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.114 qpair failed and we were unable to recover it. 00:36:04.114 [2024-11-02 14:51:55.907682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.114 [2024-11-02 14:51:55.907708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.114 qpair failed and we were unable to recover it. 00:36:04.114 [2024-11-02 14:51:55.907842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.114 [2024-11-02 14:51:55.907868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.114 qpair failed and we were unable to recover it. 00:36:04.114 [2024-11-02 14:51:55.907992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.114 [2024-11-02 14:51:55.908025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.114 qpair failed and we were unable to recover it. 
00:36:04.114 [2024-11-02 14:51:55.908197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.114 [2024-11-02 14:51:55.908223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.115 qpair failed and we were unable to recover it. 00:36:04.115 [2024-11-02 14:51:55.908357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.115 [2024-11-02 14:51:55.908384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.115 qpair failed and we were unable to recover it. 00:36:04.115 [2024-11-02 14:51:55.908503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.115 [2024-11-02 14:51:55.908529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.115 qpair failed and we were unable to recover it. 00:36:04.115 [2024-11-02 14:51:55.908652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.115 [2024-11-02 14:51:55.908678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.115 qpair failed and we were unable to recover it. 00:36:04.115 [2024-11-02 14:51:55.908791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.115 [2024-11-02 14:51:55.908816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.115 qpair failed and we were unable to recover it. 00:36:04.115 [2024-11-02 14:51:55.908970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.115 [2024-11-02 14:51:55.908995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.115 qpair failed and we were unable to recover it. 00:36:04.115 [2024-11-02 14:51:55.909125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.115 [2024-11-02 14:51:55.909150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.115 qpair failed and we were unable to recover it. 00:36:04.115 [2024-11-02 14:51:55.909311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.115 [2024-11-02 14:51:55.909351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.115 qpair failed and we were unable to recover it. 00:36:04.115 [2024-11-02 14:51:55.909512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.115 [2024-11-02 14:51:55.909541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.115 qpair failed and we were unable to recover it. 00:36:04.115 [2024-11-02 14:51:55.909669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.115 [2024-11-02 14:51:55.909695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.115 qpair failed and we were unable to recover it. 
00:36:04.115 [2024-11-02 14:51:55.909827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.115 [2024-11-02 14:51:55.909854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420
00:36:04.115 qpair failed and we were unable to recover it.
00:36:04.115 [2024-11-02 14:51:55.909970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.115 [2024-11-02 14:51:55.909996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420
00:36:04.115 qpair failed and we were unable to recover it.
00:36:04.115 [2024-11-02 14:51:55.910130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.115 [2024-11-02 14:51:55.910155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420
00:36:04.115 qpair failed and we were unable to recover it.
00:36:04.115 [2024-11-02 14:51:55.910302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.115 [2024-11-02 14:51:55.910329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420
00:36:04.115 qpair failed and we were unable to recover it.
00:36:04.115 [2024-11-02 14:51:55.910490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.115 [2024-11-02 14:51:55.910515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420
00:36:04.115 qpair failed and we were unable to recover it.
00:36:04.115 [2024-11-02 14:51:55.910635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.115 [2024-11-02 14:51:55.910660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420
00:36:04.115 qpair failed and we were unable to recover it.
00:36:04.115 [2024-11-02 14:51:55.910820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.115 [2024-11-02 14:51:55.910849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420
00:36:04.115 qpair failed and we were unable to recover it.
00:36:04.115 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:36:04.115 [2024-11-02 14:51:55.910996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.115 [2024-11-02 14:51:55.911023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420
00:36:04.115 qpair failed and we were unable to recover it.
00:36:04.115 [2024-11-02 14:51:55.911150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.115 [2024-11-02 14:51:55.911176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420
00:36:04.115 qpair failed and we were unable to recover it.
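The trap registered above (from nvmf/common.sh) makes the cleanup path run even if the test is interrupted: on SIGINT, SIGTERM, or normal exit it invokes process_shm against the target app's shared-memory ID and then nvmftestfini, which is the helper used to tear the nvmf test environment back down. The same guard pattern in isolation, with a generic cleanup body standing in for those helpers:

  cleanup() {
      # collect diagnostics and shut down the target application here
      echo "cleanup ran" >&2
  }
  trap 'cleanup' SIGINT SIGTERM EXIT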
00:36:04.115 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:36:04.115 [2024-11-02 14:51:55.911332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.115 [2024-11-02 14:51:55.911359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420
00:36:04.115 qpair failed and we were unable to recover it.
00:36:04.115 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:04.115 [2024-11-02 14:51:55.911487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.115 [2024-11-02 14:51:55.911514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420
00:36:04.115 qpair failed and we were unable to recover it.
00:36:04.115 [2024-11-02 14:51:55.911652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.115 [2024-11-02 14:51:55.911680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420
00:36:04.115 qpair failed and we were unable to recover it.
00:36:04.115 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:04.115 [2024-11-02 14:51:55.911796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.115 [2024-11-02 14:51:55.911823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420
00:36:04.115 qpair failed and we were unable to recover it.
00:36:04.115 [2024-11-02 14:51:55.911956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.115 [2024-11-02 14:51:55.911981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420
00:36:04.115 qpair failed and we were unable to recover it.
00:36:04.115 [2024-11-02 14:51:55.912090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.115 [2024-11-02 14:51:55.912116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420
00:36:04.115 qpair failed and we were unable to recover it.
00:36:04.115 [2024-11-02 14:51:55.912268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.115 [2024-11-02 14:51:55.912296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420
00:36:04.115 qpair failed and we were unable to recover it.
00:36:04.115 [2024-11-02 14:51:55.912428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.115 [2024-11-02 14:51:55.912454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420
00:36:04.115 qpair failed and we were unable to recover it.
00:36:04.115 [2024-11-02 14:51:55.912595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.115 [2024-11-02 14:51:55.912621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420
00:36:04.115 qpair failed and we were unable to recover it.
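The traced command from host/target_disconnect.sh line 19 creates the RAM-backed block device the test subsystem will export: bdev_malloc_create takes the total size and block size as positional arguments (a 64 MB bdev with 512-byte blocks here) and names it Malloc0. The same RPC can be issued directly against the target's RPC socket; the socket path below is the SPDK default and is an assumption, since the test itself goes through the rpc_cmd helper:

  ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 64 512 -b Malloc0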
00:36:04.115 [2024-11-02 14:51:55.912772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.115 [2024-11-02 14:51:55.912796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.115 qpair failed and we were unable to recover it. 00:36:04.115 [2024-11-02 14:51:55.912908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.115 [2024-11-02 14:51:55.912933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.115 qpair failed and we were unable to recover it. 00:36:04.115 [2024-11-02 14:51:55.913048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.115 [2024-11-02 14:51:55.913072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.115 qpair failed and we were unable to recover it. 00:36:04.115 [2024-11-02 14:51:55.913187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.115 [2024-11-02 14:51:55.913212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.115 qpair failed and we were unable to recover it. 00:36:04.115 [2024-11-02 14:51:55.913341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.115 [2024-11-02 14:51:55.913367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.115 qpair failed and we were unable to recover it. 00:36:04.115 [2024-11-02 14:51:55.913492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.115 [2024-11-02 14:51:55.913517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.115 qpair failed and we were unable to recover it. 00:36:04.116 [2024-11-02 14:51:55.913634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.116 [2024-11-02 14:51:55.913659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.116 qpair failed and we were unable to recover it. 00:36:04.116 [2024-11-02 14:51:55.913789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.116 [2024-11-02 14:51:55.913814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.116 qpair failed and we were unable to recover it. 00:36:04.116 [2024-11-02 14:51:55.913940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.116 [2024-11-02 14:51:55.913965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.116 qpair failed and we were unable to recover it. 00:36:04.116 [2024-11-02 14:51:55.914079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.116 [2024-11-02 14:51:55.914103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.116 qpair failed and we were unable to recover it. 
00:36:04.116 [2024-11-02 14:51:55.914262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.116 [2024-11-02 14:51:55.914288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.116 qpair failed and we were unable to recover it. 00:36:04.116 [2024-11-02 14:51:55.914406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.116 [2024-11-02 14:51:55.914431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.116 qpair failed and we were unable to recover it. 00:36:04.116 [2024-11-02 14:51:55.914597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.116 [2024-11-02 14:51:55.914622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.116 qpair failed and we were unable to recover it. 00:36:04.116 [2024-11-02 14:51:55.914869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.116 [2024-11-02 14:51:55.914911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:04.116 qpair failed and we were unable to recover it. 00:36:04.116 [2024-11-02 14:51:55.915094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.116 [2024-11-02 14:51:55.915123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:04.116 qpair failed and we were unable to recover it. 00:36:04.116 [2024-11-02 14:51:55.915247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.116 [2024-11-02 14:51:55.915281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:04.116 qpair failed and we were unable to recover it. 00:36:04.116 [2024-11-02 14:51:55.915412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.116 [2024-11-02 14:51:55.915440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:04.116 qpair failed and we were unable to recover it. 00:36:04.116 [2024-11-02 14:51:55.915577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.116 [2024-11-02 14:51:55.915603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:04.116 qpair failed and we were unable to recover it. 00:36:04.116 [2024-11-02 14:51:55.915753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.116 [2024-11-02 14:51:55.915779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:04.116 qpair failed and we were unable to recover it. 00:36:04.116 [2024-11-02 14:51:55.915901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.116 [2024-11-02 14:51:55.915928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.116 qpair failed and we were unable to recover it. 
00:36:04.116 [2024-11-02 14:51:55.916132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.116 [2024-11-02 14:51:55.916171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.116 qpair failed and we were unable to recover it. 00:36:04.116 [2024-11-02 14:51:55.916302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.116 [2024-11-02 14:51:55.916335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.116 qpair failed and we were unable to recover it. 00:36:04.116 [2024-11-02 14:51:55.916451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.116 [2024-11-02 14:51:55.916477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.116 qpair failed and we were unable to recover it. 00:36:04.116 [2024-11-02 14:51:55.916608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.116 [2024-11-02 14:51:55.916634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.116 qpair failed and we were unable to recover it. 00:36:04.116 [2024-11-02 14:51:55.916748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.116 [2024-11-02 14:51:55.916774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.116 qpair failed and we were unable to recover it. 00:36:04.116 [2024-11-02 14:51:55.916917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.116 [2024-11-02 14:51:55.916943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.116 qpair failed and we were unable to recover it. 00:36:04.116 [2024-11-02 14:51:55.917058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.116 [2024-11-02 14:51:55.917083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.116 qpair failed and we were unable to recover it. 00:36:04.116 [2024-11-02 14:51:55.917210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.116 [2024-11-02 14:51:55.917236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.116 qpair failed and we were unable to recover it. 00:36:04.116 [2024-11-02 14:51:55.917404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.116 [2024-11-02 14:51:55.917431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.116 qpair failed and we were unable to recover it. 00:36:04.116 [2024-11-02 14:51:55.917555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.116 [2024-11-02 14:51:55.917582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.116 qpair failed and we were unable to recover it. 
00:36:04.116 [2024-11-02 14:51:55.917739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.116 [2024-11-02 14:51:55.917764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.116 qpair failed and we were unable to recover it. 00:36:04.116 [2024-11-02 14:51:55.917895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.116 [2024-11-02 14:51:55.917920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.116 qpair failed and we were unable to recover it. 00:36:04.116 [2024-11-02 14:51:55.918045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.116 [2024-11-02 14:51:55.918072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.116 qpair failed and we were unable to recover it. 00:36:04.116 [2024-11-02 14:51:55.918196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.116 [2024-11-02 14:51:55.918221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.116 qpair failed and we were unable to recover it. 00:36:04.116 [2024-11-02 14:51:55.918363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.116 [2024-11-02 14:51:55.918388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.116 qpair failed and we were unable to recover it. 00:36:04.116 [2024-11-02 14:51:55.918520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.116 [2024-11-02 14:51:55.918546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.116 qpair failed and we were unable to recover it. 00:36:04.116 [2024-11-02 14:51:55.918663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.116 [2024-11-02 14:51:55.918688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.116 qpair failed and we were unable to recover it. 00:36:04.116 [2024-11-02 14:51:55.918836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.116 [2024-11-02 14:51:55.918861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.116 qpair failed and we were unable to recover it. 00:36:04.116 [2024-11-02 14:51:55.918991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.116 [2024-11-02 14:51:55.919019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.116 qpair failed and we were unable to recover it. 00:36:04.116 [2024-11-02 14:51:55.919134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.116 [2024-11-02 14:51:55.919160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.116 qpair failed and we were unable to recover it. 
00:36:04.116 [2024-11-02 14:51:55.919317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.116 [2024-11-02 14:51:55.919357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:04.116 qpair failed and we were unable to recover it. 00:36:04.116 [2024-11-02 14:51:55.919516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.116 [2024-11-02 14:51:55.919543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:04.116 qpair failed and we were unable to recover it. 00:36:04.116 [2024-11-02 14:51:55.919663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.116 [2024-11-02 14:51:55.919690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:04.116 qpair failed and we were unable to recover it. 00:36:04.116 [2024-11-02 14:51:55.919807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.116 [2024-11-02 14:51:55.919833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:04.116 qpair failed and we were unable to recover it. 00:36:04.116 [2024-11-02 14:51:55.919994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.117 [2024-11-02 14:51:55.920021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.117 qpair failed and we were unable to recover it. 00:36:04.117 [2024-11-02 14:51:55.920169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.117 [2024-11-02 14:51:55.920194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.117 qpair failed and we were unable to recover it. 00:36:04.117 [2024-11-02 14:51:55.920313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.117 [2024-11-02 14:51:55.920338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.117 qpair failed and we were unable to recover it. 00:36:04.117 [2024-11-02 14:51:55.920465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.117 [2024-11-02 14:51:55.920491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.117 qpair failed and we were unable to recover it. 00:36:04.117 [2024-11-02 14:51:55.920683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.117 [2024-11-02 14:51:55.920709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.117 qpair failed and we were unable to recover it. 00:36:04.117 [2024-11-02 14:51:55.920862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.117 [2024-11-02 14:51:55.920886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.117 qpair failed and we were unable to recover it. 
00:36:04.117 [2024-11-02 14:51:55.921019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.117 [2024-11-02 14:51:55.921045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.117 qpair failed and we were unable to recover it. 00:36:04.117 [2024-11-02 14:51:55.921162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.117 [2024-11-02 14:51:55.921188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.117 qpair failed and we were unable to recover it. 00:36:04.117 [2024-11-02 14:51:55.921305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.117 [2024-11-02 14:51:55.921335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.117 qpair failed and we were unable to recover it. 00:36:04.117 [2024-11-02 14:51:55.921484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.117 [2024-11-02 14:51:55.921515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.117 qpair failed and we were unable to recover it. 00:36:04.117 [2024-11-02 14:51:55.921631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.117 [2024-11-02 14:51:55.921656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.117 qpair failed and we were unable to recover it. 00:36:04.117 [2024-11-02 14:51:55.921775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.117 [2024-11-02 14:51:55.921800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.117 qpair failed and we were unable to recover it. 00:36:04.117 [2024-11-02 14:51:55.921911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.117 [2024-11-02 14:51:55.921936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.117 qpair failed and we were unable to recover it. 00:36:04.117 [2024-11-02 14:51:55.922083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.117 [2024-11-02 14:51:55.922108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.117 qpair failed and we were unable to recover it. 00:36:04.117 [2024-11-02 14:51:55.922225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.117 [2024-11-02 14:51:55.922250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.117 qpair failed and we were unable to recover it. 00:36:04.117 [2024-11-02 14:51:55.922387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.117 [2024-11-02 14:51:55.922412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.117 qpair failed and we were unable to recover it. 
00:36:04.117 [2024-11-02 14:51:55.922567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.117 [2024-11-02 14:51:55.922592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.117 qpair failed and we were unable to recover it. 00:36:04.117 [2024-11-02 14:51:55.922715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.117 [2024-11-02 14:51:55.922740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.117 qpair failed and we were unable to recover it. 00:36:04.117 [2024-11-02 14:51:55.922916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.117 [2024-11-02 14:51:55.922940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.117 qpair failed and we were unable to recover it. 00:36:04.117 [2024-11-02 14:51:55.923063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.117 [2024-11-02 14:51:55.923089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.117 qpair failed and we were unable to recover it. 00:36:04.117 [2024-11-02 14:51:55.923207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.117 [2024-11-02 14:51:55.923232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.117 qpair failed and we were unable to recover it. 00:36:04.117 [2024-11-02 14:51:55.923404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.117 [2024-11-02 14:51:55.923429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.117 qpair failed and we were unable to recover it. 00:36:04.117 [2024-11-02 14:51:55.923553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.117 [2024-11-02 14:51:55.923579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x648340 with addr=10.0.0.2, port=4420 00:36:04.117 qpair failed and we were unable to recover it. 00:36:04.117 [2024-11-02 14:51:55.923715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.117 [2024-11-02 14:51:55.923754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.117 qpair failed and we were unable to recover it. 00:36:04.117 [2024-11-02 14:51:55.923914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.117 [2024-11-02 14:51:55.923941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.117 qpair failed and we were unable to recover it. 00:36:04.117 [2024-11-02 14:51:55.924055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.117 [2024-11-02 14:51:55.924081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.117 qpair failed and we were unable to recover it. 
00:36:04.117 [2024-11-02 14:51:55.924210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.117 [2024-11-02 14:51:55.924235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.117 qpair failed and we were unable to recover it. 00:36:04.117 [2024-11-02 14:51:55.924390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.117 [2024-11-02 14:51:55.924432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:04.117 qpair failed and we were unable to recover it. 00:36:04.117 [2024-11-02 14:51:55.924589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.117 [2024-11-02 14:51:55.924616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:04.117 qpair failed and we were unable to recover it. 00:36:04.117 [2024-11-02 14:51:55.924740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.117 [2024-11-02 14:51:55.924766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:04.117 qpair failed and we were unable to recover it. 00:36:04.117 [2024-11-02 14:51:55.924944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.117 [2024-11-02 14:51:55.924970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:04.117 qpair failed and we were unable to recover it. 00:36:04.117 [2024-11-02 14:51:55.925171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.117 [2024-11-02 14:51:55.925197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:04.117 qpair failed and we were unable to recover it. 00:36:04.117 [2024-11-02 14:51:55.925333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.117 [2024-11-02 14:51:55.925360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:04.117 qpair failed and we were unable to recover it. 00:36:04.117 [2024-11-02 14:51:55.925479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.117 [2024-11-02 14:51:55.925506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:04.117 qpair failed and we were unable to recover it. 00:36:04.117 [2024-11-02 14:51:55.925650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.117 [2024-11-02 14:51:55.925676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:04.117 qpair failed and we were unable to recover it. 00:36:04.117 [2024-11-02 14:51:55.925795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.117 [2024-11-02 14:51:55.925821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:04.117 qpair failed and we were unable to recover it. 
00:36:04.117 [2024-11-02 14:51:55.925936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.117 [2024-11-02 14:51:55.925969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:04.117 qpair failed and we were unable to recover it. 00:36:04.117 [2024-11-02 14:51:55.926092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.117 [2024-11-02 14:51:55.926119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:04.117 qpair failed and we were unable to recover it. 00:36:04.117 [2024-11-02 14:51:55.926247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.117 [2024-11-02 14:51:55.926279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:04.117 qpair failed and we were unable to recover it. 00:36:04.117 [2024-11-02 14:51:55.926435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.118 [2024-11-02 14:51:55.926462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:04.118 qpair failed and we were unable to recover it. 00:36:04.118 [2024-11-02 14:51:55.926580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.118 [2024-11-02 14:51:55.926606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:04.118 qpair failed and we were unable to recover it. 00:36:04.118 [2024-11-02 14:51:55.926754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.118 [2024-11-02 14:51:55.926779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:04.118 qpair failed and we were unable to recover it. 00:36:04.118 [2024-11-02 14:51:55.926899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.118 [2024-11-02 14:51:55.926925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:04.118 qpair failed and we were unable to recover it. 00:36:04.118 [2024-11-02 14:51:55.927060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.118 [2024-11-02 14:51:55.927086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:04.118 qpair failed and we were unable to recover it. 00:36:04.118 [2024-11-02 14:51:55.927205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.118 [2024-11-02 14:51:55.927231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:04.118 qpair failed and we were unable to recover it. 00:36:04.118 [2024-11-02 14:51:55.927368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.118 [2024-11-02 14:51:55.927394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:04.118 qpair failed and we were unable to recover it. 
00:36:04.118 [2024-11-02 14:51:55.927555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.118 [2024-11-02 14:51:55.927581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:04.118 qpair failed and we were unable to recover it. 00:36:04.118 [2024-11-02 14:51:55.927695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.118 [2024-11-02 14:51:55.927721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:04.118 qpair failed and we were unable to recover it. 00:36:04.118 [2024-11-02 14:51:55.927852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.118 [2024-11-02 14:51:55.927878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:04.118 qpair failed and we were unable to recover it. 00:36:04.118 [2024-11-02 14:51:55.928021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.118 [2024-11-02 14:51:55.928047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:04.118 qpair failed and we were unable to recover it. 00:36:04.118 [2024-11-02 14:51:55.928175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.118 [2024-11-02 14:51:55.928201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:04.118 qpair failed and we were unable to recover it. 00:36:04.118 [2024-11-02 14:51:55.928344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.118 [2024-11-02 14:51:55.928384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.118 qpair failed and we were unable to recover it. 00:36:04.118 [2024-11-02 14:51:55.928510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.118 [2024-11-02 14:51:55.928537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.118 qpair failed and we were unable to recover it. 00:36:04.118 [2024-11-02 14:51:55.928695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.118 [2024-11-02 14:51:55.928722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.118 qpair failed and we were unable to recover it. 00:36:04.118 [2024-11-02 14:51:55.928842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.118 [2024-11-02 14:51:55.928869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.118 qpair failed and we were unable to recover it. 00:36:04.118 [2024-11-02 14:51:55.928997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.118 [2024-11-02 14:51:55.929023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.118 qpair failed and we were unable to recover it. 
00:36:04.118 [2024-11-02 14:51:55.929218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.118 [2024-11-02 14:51:55.929244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.118 qpair failed and we were unable to recover it. 00:36:04.118 [2024-11-02 14:51:55.929394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.118 [2024-11-02 14:51:55.929422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:04.118 qpair failed and we were unable to recover it. 00:36:04.118 [2024-11-02 14:51:55.929540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.118 [2024-11-02 14:51:55.929566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:04.118 qpair failed and we were unable to recover it. 00:36:04.118 [2024-11-02 14:51:55.929683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.118 [2024-11-02 14:51:55.929711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:04.118 qpair failed and we were unable to recover it. 00:36:04.118 [2024-11-02 14:51:55.929861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.118 [2024-11-02 14:51:55.929886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:04.118 qpair failed and we were unable to recover it. 00:36:04.118 [2024-11-02 14:51:55.930004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.118 [2024-11-02 14:51:55.930031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:04.118 qpair failed and we were unable to recover it. 00:36:04.118 [2024-11-02 14:51:55.930185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.118 [2024-11-02 14:51:55.930211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:04.118 qpair failed and we were unable to recover it. 00:36:04.118 [2024-11-02 14:51:55.930387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.118 [2024-11-02 14:51:55.930434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.118 qpair failed and we were unable to recover it. 00:36:04.118 [2024-11-02 14:51:55.930567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.118 [2024-11-02 14:51:55.930595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.118 qpair failed and we were unable to recover it. 00:36:04.118 [2024-11-02 14:51:55.930725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.118 [2024-11-02 14:51:55.930753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.118 qpair failed and we were unable to recover it. 
00:36:04.118 [2024-11-02 14:51:55.930887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.118 [2024-11-02 14:51:55.930914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.118 qpair failed and we were unable to recover it. 00:36:04.118 [2024-11-02 14:51:55.931031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.118 [2024-11-02 14:51:55.931056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.118 qpair failed and we were unable to recover it. 00:36:04.118 [2024-11-02 14:51:55.931213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.118 [2024-11-02 14:51:55.931239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.118 qpair failed and we were unable to recover it. 00:36:04.118 [2024-11-02 14:51:55.931380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.118 [2024-11-02 14:51:55.931407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.118 qpair failed and we were unable to recover it. 00:36:04.118 [2024-11-02 14:51:55.931553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.118 [2024-11-02 14:51:55.931578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.118 qpair failed and we were unable to recover it. 00:36:04.118 [2024-11-02 14:51:55.931724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.118 [2024-11-02 14:51:55.931749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.118 qpair failed and we were unable to recover it. 00:36:04.118 [2024-11-02 14:51:55.931944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.118 [2024-11-02 14:51:55.931971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:04.118 qpair failed and we were unable to recover it. 00:36:04.118 [2024-11-02 14:51:55.932092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.118 [2024-11-02 14:51:55.932120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:04.118 qpair failed and we were unable to recover it. 00:36:04.118 [2024-11-02 14:51:55.932232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.118 [2024-11-02 14:51:55.932265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:04.118 qpair failed and we were unable to recover it. 00:36:04.118 [2024-11-02 14:51:55.932397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.118 [2024-11-02 14:51:55.932423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:04.118 qpair failed and we were unable to recover it. 
00:36:04.118 [2024-11-02 14:51:55.932544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.118 [2024-11-02 14:51:55.932574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:04.118 qpair failed and we were unable to recover it. 00:36:04.118 [2024-11-02 14:51:55.932710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.118 [2024-11-02 14:51:55.932736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:04.119 qpair failed and we were unable to recover it. 00:36:04.119 [2024-11-02 14:51:55.932889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.119 [2024-11-02 14:51:55.932914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:04.119 qpair failed and we were unable to recover it. 00:36:04.119 [2024-11-02 14:51:55.933054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.119 [2024-11-02 14:51:55.933081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:04.119 qpair failed and we were unable to recover it. 00:36:04.119 [2024-11-02 14:51:55.933229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.119 [2024-11-02 14:51:55.933263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:04.119 qpair failed and we were unable to recover it. 00:36:04.119 [2024-11-02 14:51:55.933401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.119 [2024-11-02 14:51:55.933427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:04.119 qpair failed and we were unable to recover it. 00:36:04.119 [2024-11-02 14:51:55.933574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.119 [2024-11-02 14:51:55.933600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:04.119 qpair failed and we were unable to recover it. 00:36:04.119 [2024-11-02 14:51:55.933752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.119 [2024-11-02 14:51:55.933778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:04.119 qpair failed and we were unable to recover it. 00:36:04.119 [2024-11-02 14:51:55.933926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.119 [2024-11-02 14:51:55.933952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:04.119 qpair failed and we were unable to recover it. 00:36:04.119 [2024-11-02 14:51:55.934101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.119 [2024-11-02 14:51:55.934127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:04.119 qpair failed and we were unable to recover it. 
00:36:04.119 [2024-11-02 14:51:55.934254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.119 [2024-11-02 14:51:55.934288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:04.119 qpair failed and we were unable to recover it. 00:36:04.119 [2024-11-02 14:51:55.934440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.119 [2024-11-02 14:51:55.934466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:04.119 qpair failed and we were unable to recover it. 00:36:04.119 [2024-11-02 14:51:55.934578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.119 [2024-11-02 14:51:55.934604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:04.119 qpair failed and we were unable to recover it. 00:36:04.119 [2024-11-02 14:51:55.934734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.119 [2024-11-02 14:51:55.934760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:04.119 qpair failed and we were unable to recover it. 00:36:04.119 [2024-11-02 14:51:55.934897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.119 [2024-11-02 14:51:55.934923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:04.119 qpair failed and we were unable to recover it. 00:36:04.119 [2024-11-02 14:51:55.935075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.119 [2024-11-02 14:51:55.935101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:04.119 qpair failed and we were unable to recover it. 00:36:04.119 [2024-11-02 14:51:55.935218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.119 [2024-11-02 14:51:55.935245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:04.119 qpair failed and we were unable to recover it. 00:36:04.119 [2024-11-02 14:51:55.935397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.119 [2024-11-02 14:51:55.935423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:04.119 qpair failed and we were unable to recover it. 00:36:04.119 [2024-11-02 14:51:55.935545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.119 [2024-11-02 14:51:55.935571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:04.119 qpair failed and we were unable to recover it. 00:36:04.119 [2024-11-02 14:51:55.935691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.119 [2024-11-02 14:51:55.935717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:04.119 qpair failed and we were unable to recover it. 
00:36:04.119 [2024-11-02 14:51:55.935867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.119 [2024-11-02 14:51:55.935893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420
00:36:04.119 qpair failed and we were unable to recover it.
00:36:04.119 [2024-11-02 14:51:55.936039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.119 [2024-11-02 14:51:55.936064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420
00:36:04.119 qpair failed and we were unable to recover it.
00:36:04.119 [2024-11-02 14:51:55.936180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.119 [2024-11-02 14:51:55.936206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420
00:36:04.119 qpair failed and we were unable to recover it.
00:36:04.119 [2024-11-02 14:51:55.936333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.119 [2024-11-02 14:51:55.936362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420
00:36:04.119 qpair failed and we were unable to recover it.
00:36:04.119 [2024-11-02 14:51:55.936492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.119 [2024-11-02 14:51:55.936520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420
00:36:04.119 qpair failed and we were unable to recover it.
00:36:04.119 Malloc0
00:36:04.119 [2024-11-02 14:51:55.936644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.119 [2024-11-02 14:51:55.936672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420
00:36:04.119 qpair failed and we were unable to recover it.
00:36:04.119 [2024-11-02 14:51:55.936796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.119 [2024-11-02 14:51:55.936822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420
00:36:04.119 qpair failed and we were unable to recover it.
00:36:04.119 [2024-11-02 14:51:55.936972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.119 [2024-11-02 14:51:55.937013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420
00:36:04.119 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:04.119 qpair failed and we were unable to recover it.
00:36:04.119 [2024-11-02 14:51:55.937167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.119 [2024-11-02 14:51:55.937196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420
00:36:04.119 qpair failed and we were unable to recover it.
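The bare "Malloc0" line buried in the connection noise above is the output of the earlier bdev_malloc_create RPC (the name of the bdev that was created), and the [[ 0 == 0 ]] trace from autotest_common.sh appears to be the rpc_cmd helper confirming the RPC exited with status 0. If the created bdev ever needs to be inspected by hand, a single lookup RPC does it (illustrative, not part of this run):

  rpc_cmd bdev_get_bdevs -b Malloc0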
00:36:04.119 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:36:04.119 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:04.119 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:04.119 [2024-11-02 14:51:55.937331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.119 [2024-11-02 14:51:55.937360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420
00:36:04.119 qpair failed and we were unable to recover it.
[the connect()/qpair-failure pair keeps repeating for every retry between 14:51:55.937 and 14:51:55.940, cycling through tqpair=0x7f54c0000b90, 0x7f54c8000b90 and 0x7f54bc000b90, all against addr=10.0.0.2, port=4420]
00:36:04.120 [2024-11-02 14:51:55.940203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.120 [2024-11-02 14:51:55.940242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420
00:36:04.120 qpair failed and we were unable to recover it.
00:36:04.120 [2024-11-02 14:51:55.940347] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
[the connect()/qpair-failure pair keeps repeating for every retry between 14:51:55.940 and 14:51:55.947, cycling through tqpair=0x7f54c0000b90, 0x7f54c8000b90 and 0x7f54bc000b90, all against addr=10.0.0.2, port=4420]
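For context, the failing connections above come from the SPDK NVMe/TCP initiator used inside this test case, retrying against the listener that the target is still bringing up. Purely as an illustration (the test does not use the kernel initiator), an equivalent connection once the listener at 10.0.0.2:4420 exists could be sketched with nvme-cli, assuming the subsystem NQN shown in the trace below:

  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1   # hypothetical kernel-initiator equivalent
  nvme list                                                               # once connected, the namespace backed by Malloc0 should be listed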
00:36:04.121 [2024-11-02 14:51:55.948157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.121 [2024-11-02 14:51:55.948197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420
00:36:04.121 qpair failed and we were unable to recover it.
00:36:04.121 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:04.121 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:36:04.121 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:04.121 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[the connect()/qpair-failure pair keeps repeating for every retry between 14:51:55.948 and 14:51:55.955, cycling through tqpair=0x7f54bc000b90, 0x7f54c8000b90 and 0x7f54c0000b90, all against addr=10.0.0.2, port=4420]
00:36:04.122 [2024-11-02 14:51:55.955939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.122 [2024-11-02 14:51:55.955965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420
00:36:04.122 qpair failed and we were unable to recover it.
00:36:04.123 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:04.123 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:36:04.123 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:04.123 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[the connect()/qpair-failure pair keeps repeating for every retry between 14:51:55.956 and 14:51:55.963, cycling through tqpair=0x7f54c8000b90, 0x7f54bc000b90 and 0x7f54c0000b90, all against addr=10.0.0.2, port=4420]
00:36:04.124 [2024-11-02 14:51:55.963628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.124 [2024-11-02 14:51:55.963653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.124 qpair failed and we were unable to recover it. 00:36:04.124 [2024-11-02 14:51:55.963779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.124 [2024-11-02 14:51:55.963804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c8000b90 with addr=10.0.0.2, port=4420 00:36:04.124 qpair failed and we were unable to recover it. 00:36:04.124 [2024-11-02 14:51:55.963957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.124 [2024-11-02 14:51:55.963984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.124 qpair failed and we were unable to recover it. 00:36:04.124 [2024-11-02 14:51:55.964108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.124 [2024-11-02 14:51:55.964134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.124 qpair failed and we were unable to recover it. 00:36:04.124 [2024-11-02 14:51:55.964284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.124 [2024-11-02 14:51:55.964310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.124 qpair failed and we were unable to recover it. 00:36:04.124 [2024-11-02 14:51:55.964430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.124 [2024-11-02 14:51:55.964455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.124 qpair failed and we were unable to recover it. 00:36:04.124 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:04.124 [2024-11-02 14:51:55.964588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.124 [2024-11-02 14:51:55.964614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.124 qpair failed and we were unable to recover it. 00:36:04.124 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:04.124 [2024-11-02 14:51:55.964733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.124 [2024-11-02 14:51:55.964759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.124 qpair failed and we were unable to recover it. 
00:36:04.124 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:04.124 [2024-11-02 14:51:55.964932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.124 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:04.124 [2024-11-02 14:51:55.964958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.124 qpair failed and we were unable to recover it. 00:36:04.124 [2024-11-02 14:51:55.965078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.124 [2024-11-02 14:51:55.965105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.124 qpair failed and we were unable to recover it. 00:36:04.124 [2024-11-02 14:51:55.965235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.124 [2024-11-02 14:51:55.965272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.124 qpair failed and we were unable to recover it. 00:36:04.124 [2024-11-02 14:51:55.965505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.124 [2024-11-02 14:51:55.965531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.124 qpair failed and we were unable to recover it. 00:36:04.124 [2024-11-02 14:51:55.965658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.124 [2024-11-02 14:51:55.965684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.124 qpair failed and we were unable to recover it. 00:36:04.124 [2024-11-02 14:51:55.965805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.124 [2024-11-02 14:51:55.965832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.124 qpair failed and we were unable to recover it. 00:36:04.124 [2024-11-02 14:51:55.965998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.124 [2024-11-02 14:51:55.966029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.124 qpair failed and we were unable to recover it. 00:36:04.124 [2024-11-02 14:51:55.966165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.124 [2024-11-02 14:51:55.966204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:04.124 qpair failed and we were unable to recover it. 00:36:04.124 [2024-11-02 14:51:55.966342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.124 [2024-11-02 14:51:55.966370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:04.124 qpair failed and we were unable to recover it. 
00:36:04.124 [2024-11-02 14:51:55.966497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.124 [2024-11-02 14:51:55.966524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:04.124 qpair failed and we were unable to recover it. 00:36:04.124 [2024-11-02 14:51:55.966672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.124 [2024-11-02 14:51:55.966699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:04.124 qpair failed and we were unable to recover it. 00:36:04.124 [2024-11-02 14:51:55.966846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.124 [2024-11-02 14:51:55.966873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:04.124 qpair failed and we were unable to recover it. 00:36:04.124 [2024-11-02 14:51:55.966986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.124 [2024-11-02 14:51:55.967012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:04.124 qpair failed and we were unable to recover it. 00:36:04.124 [2024-11-02 14:51:55.967160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.124 [2024-11-02 14:51:55.967187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.124 qpair failed and we were unable to recover it. 00:36:04.124 [2024-11-02 14:51:55.967312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.124 [2024-11-02 14:51:55.967338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.124 qpair failed and we were unable to recover it. 00:36:04.124 [2024-11-02 14:51:55.967455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.124 [2024-11-02 14:51:55.967482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.125 qpair failed and we were unable to recover it. 00:36:04.125 [2024-11-02 14:51:55.967630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.125 [2024-11-02 14:51:55.967657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.125 qpair failed and we were unable to recover it. 00:36:04.125 [2024-11-02 14:51:55.967768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.125 [2024-11-02 14:51:55.967794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.125 qpair failed and we were unable to recover it. 00:36:04.125 [2024-11-02 14:51:55.967947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.125 [2024-11-02 14:51:55.967972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54c0000b90 with addr=10.0.0.2, port=4420 00:36:04.125 qpair failed and we were unable to recover it. 
00:36:04.125 [2024-11-02 14:51:55.968098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.125 [2024-11-02 14:51:55.968126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:04.125 qpair failed and we were unable to recover it. 00:36:04.125 [2024-11-02 14:51:55.968293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.125 [2024-11-02 14:51:55.968321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:04.125 qpair failed and we were unable to recover it. 00:36:04.125 [2024-11-02 14:51:55.968446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.125 [2024-11-02 14:51:55.968473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54bc000b90 with addr=10.0.0.2, port=4420 00:36:04.125 qpair failed and we were unable to recover it. 00:36:04.125 [2024-11-02 14:51:55.968608] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:04.125 [2024-11-02 14:51:55.971333] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.125 [2024-11-02 14:51:55.971500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.125 [2024-11-02 14:51:55.971529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.125 [2024-11-02 14:51:55.971545] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.125 [2024-11-02 14:51:55.971558] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.125 [2024-11-02 14:51:55.971593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.125 qpair failed and we were unable to recover it. 
00:36:04.125 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:04.125 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:04.125 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:04.125 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:04.125 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:04.125 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1535906 00:36:04.125 [2024-11-02 14:51:55.981026] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.125 [2024-11-02 14:51:55.981151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.125 [2024-11-02 14:51:55.981179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.125 [2024-11-02 14:51:55.981193] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.125 [2024-11-02 14:51:55.981206] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.125 [2024-11-02 14:51:55.981237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.125 qpair failed and we were unable to recover it. 00:36:04.125 [2024-11-02 14:51:55.991087] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.125 [2024-11-02 14:51:55.991215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.125 [2024-11-02 14:51:55.991241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.125 [2024-11-02 14:51:55.991265] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.125 [2024-11-02 14:51:55.991288] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.125 [2024-11-02 14:51:55.991320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.125 qpair failed and we were unable to recover it. 
00:36:04.125 [2024-11-02 14:51:56.001154] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.125 [2024-11-02 14:51:56.001293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.125 [2024-11-02 14:51:56.001320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.125 [2024-11-02 14:51:56.001334] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.125 [2024-11-02 14:51:56.001346] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.125 [2024-11-02 14:51:56.001377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.125 qpair failed and we were unable to recover it. 00:36:04.125 [2024-11-02 14:51:56.011036] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.125 [2024-11-02 14:51:56.011180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.125 [2024-11-02 14:51:56.011207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.125 [2024-11-02 14:51:56.011221] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.125 [2024-11-02 14:51:56.011234] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.125 [2024-11-02 14:51:56.011272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.125 qpair failed and we were unable to recover it. 00:36:04.125 [2024-11-02 14:51:56.021017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.125 [2024-11-02 14:51:56.021146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.125 [2024-11-02 14:51:56.021172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.125 [2024-11-02 14:51:56.021186] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.125 [2024-11-02 14:51:56.021199] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.125 [2024-11-02 14:51:56.021229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.125 qpair failed and we were unable to recover it. 
00:36:04.125 [2024-11-02 14:51:56.031074] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.125 [2024-11-02 14:51:56.031214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.125 [2024-11-02 14:51:56.031245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.125 [2024-11-02 14:51:56.031277] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.125 [2024-11-02 14:51:56.031292] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.125 [2024-11-02 14:51:56.031332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.125 qpair failed and we were unable to recover it. 00:36:04.125 [2024-11-02 14:51:56.041084] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.125 [2024-11-02 14:51:56.041213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.125 [2024-11-02 14:51:56.041239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.125 [2024-11-02 14:51:56.041254] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.125 [2024-11-02 14:51:56.041277] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.125 [2024-11-02 14:51:56.041308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.125 qpair failed and we were unable to recover it. 00:36:04.125 [2024-11-02 14:51:56.051141] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.125 [2024-11-02 14:51:56.051278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.125 [2024-11-02 14:51:56.051305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.125 [2024-11-02 14:51:56.051320] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.125 [2024-11-02 14:51:56.051333] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.125 [2024-11-02 14:51:56.051362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.125 qpair failed and we were unable to recover it. 
00:36:04.125 [2024-11-02 14:51:56.061174] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.125 [2024-11-02 14:51:56.061308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.125 [2024-11-02 14:51:56.061334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.125 [2024-11-02 14:51:56.061347] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.125 [2024-11-02 14:51:56.061360] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.125 [2024-11-02 14:51:56.061390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.125 qpair failed and we were unable to recover it. 00:36:04.125 [2024-11-02 14:51:56.071213] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.125 [2024-11-02 14:51:56.071344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.126 [2024-11-02 14:51:56.071371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.126 [2024-11-02 14:51:56.071385] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.126 [2024-11-02 14:51:56.071398] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.126 [2024-11-02 14:51:56.071427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.126 qpair failed and we were unable to recover it. 00:36:04.126 [2024-11-02 14:51:56.081208] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.126 [2024-11-02 14:51:56.081358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.126 [2024-11-02 14:51:56.081384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.126 [2024-11-02 14:51:56.081405] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.126 [2024-11-02 14:51:56.081419] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.126 [2024-11-02 14:51:56.081450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.126 qpair failed and we were unable to recover it. 
00:36:04.126 [2024-11-02 14:51:56.091246] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.126 [2024-11-02 14:51:56.091376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.126 [2024-11-02 14:51:56.091402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.126 [2024-11-02 14:51:56.091416] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.126 [2024-11-02 14:51:56.091428] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.126 [2024-11-02 14:51:56.091461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.126 qpair failed and we were unable to recover it. 00:36:04.126 [2024-11-02 14:51:56.101333] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.126 [2024-11-02 14:51:56.101456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.126 [2024-11-02 14:51:56.101483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.126 [2024-11-02 14:51:56.101497] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.126 [2024-11-02 14:51:56.101509] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.126 [2024-11-02 14:51:56.101540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.126 qpair failed and we were unable to recover it. 00:36:04.126 [2024-11-02 14:51:56.111287] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.126 [2024-11-02 14:51:56.111411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.126 [2024-11-02 14:51:56.111438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.126 [2024-11-02 14:51:56.111452] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.126 [2024-11-02 14:51:56.111465] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.126 [2024-11-02 14:51:56.111496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.126 qpair failed and we were unable to recover it. 
00:36:04.126 [2024-11-02 14:51:56.121420] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.126 [2024-11-02 14:51:56.121581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.126 [2024-11-02 14:51:56.121607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.126 [2024-11-02 14:51:56.121621] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.126 [2024-11-02 14:51:56.121634] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.126 [2024-11-02 14:51:56.121664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.126 qpair failed and we were unable to recover it. 00:36:04.126 [2024-11-02 14:51:56.131369] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.126 [2024-11-02 14:51:56.131529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.126 [2024-11-02 14:51:56.131557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.126 [2024-11-02 14:51:56.131571] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.126 [2024-11-02 14:51:56.131588] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.126 [2024-11-02 14:51:56.131620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.126 qpair failed and we were unable to recover it. 00:36:04.385 [2024-11-02 14:51:56.141453] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.385 [2024-11-02 14:51:56.141580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.385 [2024-11-02 14:51:56.141606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.385 [2024-11-02 14:51:56.141621] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.385 [2024-11-02 14:51:56.141634] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.385 [2024-11-02 14:51:56.141670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.385 qpair failed and we were unable to recover it. 
00:36:04.385 [2024-11-02 14:51:56.151399] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.385 [2024-11-02 14:51:56.151527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.385 [2024-11-02 14:51:56.151553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.385 [2024-11-02 14:51:56.151567] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.385 [2024-11-02 14:51:56.151580] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.385 [2024-11-02 14:51:56.151610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.385 qpair failed and we were unable to recover it. 00:36:04.385 [2024-11-02 14:51:56.161425] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.385 [2024-11-02 14:51:56.161569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.385 [2024-11-02 14:51:56.161595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.385 [2024-11-02 14:51:56.161609] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.385 [2024-11-02 14:51:56.161621] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.385 [2024-11-02 14:51:56.161651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.385 qpair failed and we were unable to recover it. 00:36:04.385 [2024-11-02 14:51:56.171460] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.385 [2024-11-02 14:51:56.171594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.385 [2024-11-02 14:51:56.171621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.385 [2024-11-02 14:51:56.171641] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.385 [2024-11-02 14:51:56.171655] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.385 [2024-11-02 14:51:56.171684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.385 qpair failed and we were unable to recover it. 
00:36:04.385 [2024-11-02 14:51:56.181481] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.385 [2024-11-02 14:51:56.181607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.385 [2024-11-02 14:51:56.181632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.385 [2024-11-02 14:51:56.181646] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.385 [2024-11-02 14:51:56.181659] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.385 [2024-11-02 14:51:56.181689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.385 qpair failed and we were unable to recover it. 00:36:04.385 [2024-11-02 14:51:56.191496] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.385 [2024-11-02 14:51:56.191609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.385 [2024-11-02 14:51:56.191635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.386 [2024-11-02 14:51:56.191649] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.386 [2024-11-02 14:51:56.191663] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.386 [2024-11-02 14:51:56.191706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.386 qpair failed and we were unable to recover it. 00:36:04.386 [2024-11-02 14:51:56.201547] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.386 [2024-11-02 14:51:56.201676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.386 [2024-11-02 14:51:56.201702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.386 [2024-11-02 14:51:56.201716] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.386 [2024-11-02 14:51:56.201730] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.386 [2024-11-02 14:51:56.201760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.386 qpair failed and we were unable to recover it. 
00:36:04.386 [2024-11-02 14:51:56.211586] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.386 [2024-11-02 14:51:56.211709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.386 [2024-11-02 14:51:56.211735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.386 [2024-11-02 14:51:56.211749] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.386 [2024-11-02 14:51:56.211762] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.386 [2024-11-02 14:51:56.211792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.386 qpair failed and we were unable to recover it. 00:36:04.386 [2024-11-02 14:51:56.221577] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.386 [2024-11-02 14:51:56.221732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.386 [2024-11-02 14:51:56.221759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.386 [2024-11-02 14:51:56.221773] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.386 [2024-11-02 14:51:56.221786] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.386 [2024-11-02 14:51:56.221816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.386 qpair failed and we were unable to recover it. 00:36:04.386 [2024-11-02 14:51:56.231624] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.386 [2024-11-02 14:51:56.231748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.386 [2024-11-02 14:51:56.231775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.386 [2024-11-02 14:51:56.231789] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.386 [2024-11-02 14:51:56.231802] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.386 [2024-11-02 14:51:56.231832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.386 qpair failed and we were unable to recover it. 
00:36:04.386 [2024-11-02 14:51:56.241737] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.386 [2024-11-02 14:51:56.241879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.386 [2024-11-02 14:51:56.241905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.386 [2024-11-02 14:51:56.241919] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.386 [2024-11-02 14:51:56.241932] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.386 [2024-11-02 14:51:56.241963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.386 qpair failed and we were unable to recover it. 00:36:04.386 [2024-11-02 14:51:56.251683] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.386 [2024-11-02 14:51:56.251800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.386 [2024-11-02 14:51:56.251826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.386 [2024-11-02 14:51:56.251840] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.386 [2024-11-02 14:51:56.251853] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.386 [2024-11-02 14:51:56.251883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.386 qpair failed and we were unable to recover it. 00:36:04.386 [2024-11-02 14:51:56.261698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.386 [2024-11-02 14:51:56.261828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.386 [2024-11-02 14:51:56.261859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.386 [2024-11-02 14:51:56.261874] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.386 [2024-11-02 14:51:56.261886] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.386 [2024-11-02 14:51:56.261916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.386 qpair failed and we were unable to recover it. 
00:36:04.386 [2024-11-02 14:51:56.271745] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.386 [2024-11-02 14:51:56.271874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.386 [2024-11-02 14:51:56.271900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.386 [2024-11-02 14:51:56.271914] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.386 [2024-11-02 14:51:56.271927] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.386 [2024-11-02 14:51:56.271957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.386 qpair failed and we were unable to recover it. 00:36:04.386 [2024-11-02 14:51:56.281766] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.386 [2024-11-02 14:51:56.281890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.386 [2024-11-02 14:51:56.281915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.386 [2024-11-02 14:51:56.281929] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.386 [2024-11-02 14:51:56.281942] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.386 [2024-11-02 14:51:56.281971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.386 qpair failed and we were unable to recover it. 00:36:04.386 [2024-11-02 14:51:56.291840] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.386 [2024-11-02 14:51:56.291979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.386 [2024-11-02 14:51:56.292005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.386 [2024-11-02 14:51:56.292019] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.386 [2024-11-02 14:51:56.292032] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.386 [2024-11-02 14:51:56.292061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.386 qpair failed and we were unable to recover it. 
00:36:04.386 [2024-11-02 14:51:56.301864] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.386 [2024-11-02 14:51:56.302021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.386 [2024-11-02 14:51:56.302048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.386 [2024-11-02 14:51:56.302062] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.386 [2024-11-02 14:51:56.302075] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.386 [2024-11-02 14:51:56.302113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.386 qpair failed and we were unable to recover it. 00:36:04.386 [2024-11-02 14:51:56.311849] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.386 [2024-11-02 14:51:56.312002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.386 [2024-11-02 14:51:56.312029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.386 [2024-11-02 14:51:56.312043] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.386 [2024-11-02 14:51:56.312056] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.386 [2024-11-02 14:51:56.312086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.386 qpair failed and we were unable to recover it. 00:36:04.386 [2024-11-02 14:51:56.321923] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.386 [2024-11-02 14:51:56.322087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.386 [2024-11-02 14:51:56.322114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.386 [2024-11-02 14:51:56.322129] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.386 [2024-11-02 14:51:56.322143] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.386 [2024-11-02 14:51:56.322186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.386 qpair failed and we were unable to recover it. 
00:36:04.386 [2024-11-02 14:51:56.332037] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.387 [2024-11-02 14:51:56.332168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.387 [2024-11-02 14:51:56.332195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.387 [2024-11-02 14:51:56.332209] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.387 [2024-11-02 14:51:56.332223] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.387 [2024-11-02 14:51:56.332253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.387 qpair failed and we were unable to recover it. 00:36:04.387 [2024-11-02 14:51:56.341971] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.387 [2024-11-02 14:51:56.342093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.387 [2024-11-02 14:51:56.342119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.387 [2024-11-02 14:51:56.342133] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.387 [2024-11-02 14:51:56.342147] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.387 [2024-11-02 14:51:56.342178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.387 qpair failed and we were unable to recover it. 00:36:04.387 [2024-11-02 14:51:56.351952] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.387 [2024-11-02 14:51:56.352074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.387 [2024-11-02 14:51:56.352106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.387 [2024-11-02 14:51:56.352121] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.387 [2024-11-02 14:51:56.352134] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.387 [2024-11-02 14:51:56.352163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.387 qpair failed and we were unable to recover it. 
00:36:04.387 [2024-11-02 14:51:56.362003] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.387 [2024-11-02 14:51:56.362128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.387 [2024-11-02 14:51:56.362154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.387 [2024-11-02 14:51:56.362168] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.387 [2024-11-02 14:51:56.362181] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.387 [2024-11-02 14:51:56.362211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.387 qpair failed and we were unable to recover it. 00:36:04.387 [2024-11-02 14:51:56.372022] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.387 [2024-11-02 14:51:56.372152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.387 [2024-11-02 14:51:56.372178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.387 [2024-11-02 14:51:56.372192] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.387 [2024-11-02 14:51:56.372205] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.387 [2024-11-02 14:51:56.372235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.387 qpair failed and we were unable to recover it. 00:36:04.387 [2024-11-02 14:51:56.382052] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.387 [2024-11-02 14:51:56.382166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.387 [2024-11-02 14:51:56.382192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.387 [2024-11-02 14:51:56.382206] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.387 [2024-11-02 14:51:56.382219] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.387 [2024-11-02 14:51:56.382267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.387 qpair failed and we were unable to recover it. 
00:36:04.387 [2024-11-02 14:51:56.392163] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.387 [2024-11-02 14:51:56.392281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.387 [2024-11-02 14:51:56.392307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.387 [2024-11-02 14:51:56.392322] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.387 [2024-11-02 14:51:56.392335] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.387 [2024-11-02 14:51:56.392371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.387 qpair failed and we were unable to recover it. 00:36:04.387 [2024-11-02 14:51:56.402150] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.387 [2024-11-02 14:51:56.402291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.387 [2024-11-02 14:51:56.402318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.387 [2024-11-02 14:51:56.402332] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.387 [2024-11-02 14:51:56.402345] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.387 [2024-11-02 14:51:56.402375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.387 qpair failed and we were unable to recover it. 00:36:04.387 [2024-11-02 14:51:56.412144] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.387 [2024-11-02 14:51:56.412273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.387 [2024-11-02 14:51:56.412300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.387 [2024-11-02 14:51:56.412314] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.387 [2024-11-02 14:51:56.412327] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.387 [2024-11-02 14:51:56.412358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.387 qpair failed and we were unable to recover it. 
00:36:04.387 [2024-11-02 14:51:56.422216] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.387 [2024-11-02 14:51:56.422356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.387 [2024-11-02 14:51:56.422385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.387 [2024-11-02 14:51:56.422405] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.387 [2024-11-02 14:51:56.422419] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.387 [2024-11-02 14:51:56.422451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.387 qpair failed and we were unable to recover it. 00:36:04.387 [2024-11-02 14:51:56.432267] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.387 [2024-11-02 14:51:56.432398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.387 [2024-11-02 14:51:56.432424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.387 [2024-11-02 14:51:56.432439] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.387 [2024-11-02 14:51:56.432453] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.387 [2024-11-02 14:51:56.432484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.387 qpair failed and we were unable to recover it. 00:36:04.646 [2024-11-02 14:51:56.442286] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.646 [2024-11-02 14:51:56.442441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.646 [2024-11-02 14:51:56.442467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.646 [2024-11-02 14:51:56.442481] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.646 [2024-11-02 14:51:56.442495] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.646 [2024-11-02 14:51:56.442525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.646 qpair failed and we were unable to recover it. 
00:36:04.646 [2024-11-02 14:51:56.452321] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.646 [2024-11-02 14:51:56.452450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.646 [2024-11-02 14:51:56.452475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.646 [2024-11-02 14:51:56.452489] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.646 [2024-11-02 14:51:56.452502] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.646 [2024-11-02 14:51:56.452533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.646 qpair failed and we were unable to recover it. 00:36:04.646 [2024-11-02 14:51:56.462304] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.646 [2024-11-02 14:51:56.462426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.646 [2024-11-02 14:51:56.462453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.646 [2024-11-02 14:51:56.462467] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.647 [2024-11-02 14:51:56.462480] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.647 [2024-11-02 14:51:56.462509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.647 qpair failed and we were unable to recover it. 00:36:04.647 [2024-11-02 14:51:56.472334] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.647 [2024-11-02 14:51:56.472460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.647 [2024-11-02 14:51:56.472485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.647 [2024-11-02 14:51:56.472499] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.647 [2024-11-02 14:51:56.472513] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.647 [2024-11-02 14:51:56.472544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.647 qpair failed and we were unable to recover it. 
00:36:04.647 [2024-11-02 14:51:56.482365] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.647 [2024-11-02 14:51:56.482495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.647 [2024-11-02 14:51:56.482521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.647 [2024-11-02 14:51:56.482535] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.647 [2024-11-02 14:51:56.482554] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.647 [2024-11-02 14:51:56.482587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.647 qpair failed and we were unable to recover it. 00:36:04.647 [2024-11-02 14:51:56.492362] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.647 [2024-11-02 14:51:56.492483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.647 [2024-11-02 14:51:56.492509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.647 [2024-11-02 14:51:56.492524] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.647 [2024-11-02 14:51:56.492537] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.647 [2024-11-02 14:51:56.492567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.647 qpair failed and we were unable to recover it. 00:36:04.647 [2024-11-02 14:51:56.502429] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.647 [2024-11-02 14:51:56.502583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.647 [2024-11-02 14:51:56.502610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.647 [2024-11-02 14:51:56.502624] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.647 [2024-11-02 14:51:56.502637] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.647 [2024-11-02 14:51:56.502680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.647 qpair failed and we were unable to recover it. 
00:36:04.647 [2024-11-02 14:51:56.512434] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.647 [2024-11-02 14:51:56.512584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.647 [2024-11-02 14:51:56.512610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.647 [2024-11-02 14:51:56.512623] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.647 [2024-11-02 14:51:56.512636] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.647 [2024-11-02 14:51:56.512667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.647 qpair failed and we were unable to recover it. 00:36:04.647 [2024-11-02 14:51:56.522473] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.647 [2024-11-02 14:51:56.522607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.647 [2024-11-02 14:51:56.522633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.647 [2024-11-02 14:51:56.522647] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.647 [2024-11-02 14:51:56.522660] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.647 [2024-11-02 14:51:56.522691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.647 qpair failed and we were unable to recover it. 00:36:04.647 [2024-11-02 14:51:56.532485] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.647 [2024-11-02 14:51:56.532614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.647 [2024-11-02 14:51:56.532641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.647 [2024-11-02 14:51:56.532655] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.647 [2024-11-02 14:51:56.532668] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.647 [2024-11-02 14:51:56.532698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.647 qpair failed and we were unable to recover it. 
00:36:04.647 [2024-11-02 14:51:56.542533] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.647 [2024-11-02 14:51:56.542659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.647 [2024-11-02 14:51:56.542684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.647 [2024-11-02 14:51:56.542699] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.647 [2024-11-02 14:51:56.542711] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.647 [2024-11-02 14:51:56.542739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.647 qpair failed and we were unable to recover it. 00:36:04.647 [2024-11-02 14:51:56.552569] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.647 [2024-11-02 14:51:56.552699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.647 [2024-11-02 14:51:56.552725] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.647 [2024-11-02 14:51:56.552739] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.647 [2024-11-02 14:51:56.552752] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.647 [2024-11-02 14:51:56.552782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.647 qpair failed and we were unable to recover it. 00:36:04.647 [2024-11-02 14:51:56.562701] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.647 [2024-11-02 14:51:56.562829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.647 [2024-11-02 14:51:56.562854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.647 [2024-11-02 14:51:56.562867] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.647 [2024-11-02 14:51:56.562880] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.647 [2024-11-02 14:51:56.562911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.647 qpair failed and we were unable to recover it. 
00:36:04.647 [2024-11-02 14:51:56.572704] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.647 [2024-11-02 14:51:56.572833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.647 [2024-11-02 14:51:56.572859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.647 [2024-11-02 14:51:56.572879] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.647 [2024-11-02 14:51:56.572892] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.647 [2024-11-02 14:51:56.572922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.647 qpair failed and we were unable to recover it. 00:36:04.647 [2024-11-02 14:51:56.582643] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.647 [2024-11-02 14:51:56.582770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.647 [2024-11-02 14:51:56.582795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.647 [2024-11-02 14:51:56.582809] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.647 [2024-11-02 14:51:56.582820] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.647 [2024-11-02 14:51:56.582849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.647 qpair failed and we were unable to recover it. 00:36:04.647 [2024-11-02 14:51:56.592643] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.647 [2024-11-02 14:51:56.592768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.647 [2024-11-02 14:51:56.592794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.647 [2024-11-02 14:51:56.592808] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.647 [2024-11-02 14:51:56.592820] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.647 [2024-11-02 14:51:56.592851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.647 qpair failed and we were unable to recover it. 
00:36:04.647 [2024-11-02 14:51:56.602698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.648 [2024-11-02 14:51:56.602832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.648 [2024-11-02 14:51:56.602858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.648 [2024-11-02 14:51:56.602872] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.648 [2024-11-02 14:51:56.602885] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.648 [2024-11-02 14:51:56.602914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.648 qpair failed and we were unable to recover it. 00:36:04.648 [2024-11-02 14:51:56.612710] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.648 [2024-11-02 14:51:56.612825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.648 [2024-11-02 14:51:56.612851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.648 [2024-11-02 14:51:56.612864] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.648 [2024-11-02 14:51:56.612878] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.648 [2024-11-02 14:51:56.612907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.648 qpair failed and we were unable to recover it. 00:36:04.648 [2024-11-02 14:51:56.622747] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.648 [2024-11-02 14:51:56.622870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.648 [2024-11-02 14:51:56.622896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.648 [2024-11-02 14:51:56.622910] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.648 [2024-11-02 14:51:56.622924] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.648 [2024-11-02 14:51:56.622953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.648 qpair failed and we were unable to recover it. 
00:36:04.648 [2024-11-02 14:51:56.632762] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.648 [2024-11-02 14:51:56.632884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.648 [2024-11-02 14:51:56.632910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.648 [2024-11-02 14:51:56.632924] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.648 [2024-11-02 14:51:56.632936] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.648 [2024-11-02 14:51:56.632967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.648 qpair failed and we were unable to recover it. 00:36:04.648 [2024-11-02 14:51:56.642792] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.648 [2024-11-02 14:51:56.642932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.648 [2024-11-02 14:51:56.642958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.648 [2024-11-02 14:51:56.642972] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.648 [2024-11-02 14:51:56.642985] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.648 [2024-11-02 14:51:56.643014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.648 qpair failed and we were unable to recover it. 00:36:04.648 [2024-11-02 14:51:56.652803] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.648 [2024-11-02 14:51:56.652924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.648 [2024-11-02 14:51:56.652950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.648 [2024-11-02 14:51:56.652964] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.648 [2024-11-02 14:51:56.652977] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.648 [2024-11-02 14:51:56.653008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.648 qpair failed and we were unable to recover it. 
00:36:04.648 [2024-11-02 14:51:56.662859] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.648 [2024-11-02 14:51:56.663031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.648 [2024-11-02 14:51:56.663057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.648 [2024-11-02 14:51:56.663081] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.648 [2024-11-02 14:51:56.663095] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.648 [2024-11-02 14:51:56.663125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.648 qpair failed and we were unable to recover it. 00:36:04.648 [2024-11-02 14:51:56.672911] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.648 [2024-11-02 14:51:56.673075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.648 [2024-11-02 14:51:56.673100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.648 [2024-11-02 14:51:56.673115] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.648 [2024-11-02 14:51:56.673127] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.648 [2024-11-02 14:51:56.673158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.648 qpair failed and we were unable to recover it. 00:36:04.648 [2024-11-02 14:51:56.682919] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.648 [2024-11-02 14:51:56.683045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.648 [2024-11-02 14:51:56.683072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.648 [2024-11-02 14:51:56.683086] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.648 [2024-11-02 14:51:56.683100] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.648 [2024-11-02 14:51:56.683130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.648 qpair failed and we were unable to recover it. 
00:36:04.648 [2024-11-02 14:51:56.692912] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.648 [2024-11-02 14:51:56.693036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.648 [2024-11-02 14:51:56.693063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.648 [2024-11-02 14:51:56.693077] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.648 [2024-11-02 14:51:56.693091] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.648 [2024-11-02 14:51:56.693120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.648 qpair failed and we were unable to recover it. 00:36:04.907 [2024-11-02 14:51:56.702978] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.907 [2024-11-02 14:51:56.703109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.907 [2024-11-02 14:51:56.703135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.907 [2024-11-02 14:51:56.703149] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.907 [2024-11-02 14:51:56.703162] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.907 [2024-11-02 14:51:56.703193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.907 qpair failed and we were unable to recover it. 00:36:04.907 [2024-11-02 14:51:56.712975] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.907 [2024-11-02 14:51:56.713098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.907 [2024-11-02 14:51:56.713124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.907 [2024-11-02 14:51:56.713137] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.907 [2024-11-02 14:51:56.713151] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.907 [2024-11-02 14:51:56.713182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.907 qpair failed and we were unable to recover it. 
00:36:04.907 [2024-11-02 14:51:56.723022] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.907 [2024-11-02 14:51:56.723154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.907 [2024-11-02 14:51:56.723180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.907 [2024-11-02 14:51:56.723194] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.907 [2024-11-02 14:51:56.723206] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.907 [2024-11-02 14:51:56.723236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.907 qpair failed and we were unable to recover it. 00:36:04.907 [2024-11-02 14:51:56.733026] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.907 [2024-11-02 14:51:56.733149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.907 [2024-11-02 14:51:56.733175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.907 [2024-11-02 14:51:56.733189] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.907 [2024-11-02 14:51:56.733202] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.907 [2024-11-02 14:51:56.733232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.907 qpair failed and we were unable to recover it. 00:36:04.907 [2024-11-02 14:51:56.743142] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.907 [2024-11-02 14:51:56.743272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.907 [2024-11-02 14:51:56.743298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.907 [2024-11-02 14:51:56.743312] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.907 [2024-11-02 14:51:56.743325] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.907 [2024-11-02 14:51:56.743356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.907 qpair failed and we were unable to recover it. 
00:36:04.908 [2024-11-02 14:51:56.753088] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.908 [2024-11-02 14:51:56.753204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.908 [2024-11-02 14:51:56.753235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.908 [2024-11-02 14:51:56.753250] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.908 [2024-11-02 14:51:56.753275] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.908 [2024-11-02 14:51:56.753307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.908 qpair failed and we were unable to recover it. 00:36:04.908 [2024-11-02 14:51:56.763273] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.908 [2024-11-02 14:51:56.763470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.908 [2024-11-02 14:51:56.763496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.908 [2024-11-02 14:51:56.763510] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.908 [2024-11-02 14:51:56.763523] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.908 [2024-11-02 14:51:56.763553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.908 qpair failed and we were unable to recover it. 00:36:04.908 [2024-11-02 14:51:56.773204] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.908 [2024-11-02 14:51:56.773342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.908 [2024-11-02 14:51:56.773368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.908 [2024-11-02 14:51:56.773382] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.908 [2024-11-02 14:51:56.773395] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.908 [2024-11-02 14:51:56.773427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.908 qpair failed and we were unable to recover it. 
00:36:04.908 [2024-11-02 14:51:56.783248] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.908 [2024-11-02 14:51:56.783381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.908 [2024-11-02 14:51:56.783409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.908 [2024-11-02 14:51:56.783423] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.908 [2024-11-02 14:51:56.783436] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.908 [2024-11-02 14:51:56.783468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.908 qpair failed and we were unable to recover it. 00:36:04.908 [2024-11-02 14:51:56.793360] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.908 [2024-11-02 14:51:56.793488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.908 [2024-11-02 14:51:56.793515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.908 [2024-11-02 14:51:56.793530] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.908 [2024-11-02 14:51:56.793542] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.908 [2024-11-02 14:51:56.793581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.908 qpair failed and we were unable to recover it. 00:36:04.908 [2024-11-02 14:51:56.803269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.908 [2024-11-02 14:51:56.803404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.908 [2024-11-02 14:51:56.803430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.908 [2024-11-02 14:51:56.803444] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.908 [2024-11-02 14:51:56.803456] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.908 [2024-11-02 14:51:56.803487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.908 qpair failed and we were unable to recover it. 
00:36:04.908 [2024-11-02 14:51:56.813288] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.908 [2024-11-02 14:51:56.813416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.908 [2024-11-02 14:51:56.813442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.908 [2024-11-02 14:51:56.813456] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.908 [2024-11-02 14:51:56.813469] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.908 [2024-11-02 14:51:56.813501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.908 qpair failed and we were unable to recover it. 00:36:04.908 [2024-11-02 14:51:56.823315] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.908 [2024-11-02 14:51:56.823482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.908 [2024-11-02 14:51:56.823508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.908 [2024-11-02 14:51:56.823522] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.908 [2024-11-02 14:51:56.823534] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.908 [2024-11-02 14:51:56.823564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.908 qpair failed and we were unable to recover it. 00:36:04.908 [2024-11-02 14:51:56.833338] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.908 [2024-11-02 14:51:56.833456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.908 [2024-11-02 14:51:56.833482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.908 [2024-11-02 14:51:56.833497] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.908 [2024-11-02 14:51:56.833509] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.908 [2024-11-02 14:51:56.833538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.908 qpair failed and we were unable to recover it. 
00:36:04.908 [2024-11-02 14:51:56.843470] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.908 [2024-11-02 14:51:56.843603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.908 [2024-11-02 14:51:56.843635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.908 [2024-11-02 14:51:56.843655] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.908 [2024-11-02 14:51:56.843669] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.908 [2024-11-02 14:51:56.843699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.908 qpair failed and we were unable to recover it. 00:36:04.908 [2024-11-02 14:51:56.853390] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.908 [2024-11-02 14:51:56.853514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.908 [2024-11-02 14:51:56.853540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.908 [2024-11-02 14:51:56.853553] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.908 [2024-11-02 14:51:56.853566] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.908 [2024-11-02 14:51:56.853596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.908 qpair failed and we were unable to recover it. 00:36:04.908 [2024-11-02 14:51:56.863426] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.908 [2024-11-02 14:51:56.863553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.908 [2024-11-02 14:51:56.863579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.908 [2024-11-02 14:51:56.863593] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.908 [2024-11-02 14:51:56.863606] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.908 [2024-11-02 14:51:56.863639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.908 qpair failed and we were unable to recover it. 
00:36:04.908 [2024-11-02 14:51:56.873465] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.908 [2024-11-02 14:51:56.873581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.908 [2024-11-02 14:51:56.873607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.908 [2024-11-02 14:51:56.873621] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.908 [2024-11-02 14:51:56.873634] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.908 [2024-11-02 14:51:56.873665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.908 qpair failed and we were unable to recover it. 00:36:04.908 [2024-11-02 14:51:56.883605] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.908 [2024-11-02 14:51:56.883751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.908 [2024-11-02 14:51:56.883777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.908 [2024-11-02 14:51:56.883792] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.909 [2024-11-02 14:51:56.883805] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.909 [2024-11-02 14:51:56.883841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.909 qpair failed and we were unable to recover it. 00:36:04.909 [2024-11-02 14:51:56.893514] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.909 [2024-11-02 14:51:56.893646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.909 [2024-11-02 14:51:56.893672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.909 [2024-11-02 14:51:56.893686] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.909 [2024-11-02 14:51:56.893699] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.909 [2024-11-02 14:51:56.893729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.909 qpair failed and we were unable to recover it. 
00:36:04.909 [2024-11-02 14:51:56.903592] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.909 [2024-11-02 14:51:56.903721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.909 [2024-11-02 14:51:56.903747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.909 [2024-11-02 14:51:56.903761] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.909 [2024-11-02 14:51:56.903774] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.909 [2024-11-02 14:51:56.903804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.909 qpair failed and we were unable to recover it. 00:36:04.909 [2024-11-02 14:51:56.913585] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.909 [2024-11-02 14:51:56.913719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.909 [2024-11-02 14:51:56.913744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.909 [2024-11-02 14:51:56.913759] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.909 [2024-11-02 14:51:56.913772] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.909 [2024-11-02 14:51:56.913801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.909 qpair failed and we were unable to recover it. 00:36:04.909 [2024-11-02 14:51:56.923628] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.909 [2024-11-02 14:51:56.923766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.909 [2024-11-02 14:51:56.923792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.909 [2024-11-02 14:51:56.923806] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.909 [2024-11-02 14:51:56.923819] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.909 [2024-11-02 14:51:56.923862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.909 qpair failed and we were unable to recover it. 
00:36:04.909 [2024-11-02 14:51:56.933646] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.909 [2024-11-02 14:51:56.933771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.909 [2024-11-02 14:51:56.933802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.909 [2024-11-02 14:51:56.933817] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.909 [2024-11-02 14:51:56.933830] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.909 [2024-11-02 14:51:56.933859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.909 qpair failed and we were unable to recover it. 00:36:04.909 [2024-11-02 14:51:56.943697] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.909 [2024-11-02 14:51:56.943821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.909 [2024-11-02 14:51:56.943847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.909 [2024-11-02 14:51:56.943862] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.909 [2024-11-02 14:51:56.943875] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.909 [2024-11-02 14:51:56.943904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.909 qpair failed and we were unable to recover it. 00:36:04.909 [2024-11-02 14:51:56.953686] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.909 [2024-11-02 14:51:56.953815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.909 [2024-11-02 14:51:56.953841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.909 [2024-11-02 14:51:56.953855] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.909 [2024-11-02 14:51:56.953867] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:04.909 [2024-11-02 14:51:56.953897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.909 qpair failed and we were unable to recover it. 
00:36:05.168 [2024-11-02 14:51:56.963710] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.168 [2024-11-02 14:51:56.963836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.168 [2024-11-02 14:51:56.963862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.168 [2024-11-02 14:51:56.963876] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.168 [2024-11-02 14:51:56.963889] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.168 [2024-11-02 14:51:56.963919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.168 qpair failed and we were unable to recover it. 00:36:05.168 [2024-11-02 14:51:56.973744] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.168 [2024-11-02 14:51:56.973881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.168 [2024-11-02 14:51:56.973908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.168 [2024-11-02 14:51:56.973921] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.168 [2024-11-02 14:51:56.973939] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.168 [2024-11-02 14:51:56.973983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.168 qpair failed and we were unable to recover it. 00:36:05.168 [2024-11-02 14:51:56.983811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.168 [2024-11-02 14:51:56.983933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.168 [2024-11-02 14:51:56.983959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.168 [2024-11-02 14:51:56.983974] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.168 [2024-11-02 14:51:56.983987] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.168 [2024-11-02 14:51:56.984016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.168 qpair failed and we were unable to recover it. 
00:36:05.168 [2024-11-02 14:51:56.993858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.168 [2024-11-02 14:51:56.993980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.168 [2024-11-02 14:51:56.994005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.168 [2024-11-02 14:51:56.994020] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.168 [2024-11-02 14:51:56.994033] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.168 [2024-11-02 14:51:56.994063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.168 qpair failed and we were unable to recover it. 00:36:05.168 [2024-11-02 14:51:57.003910] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.168 [2024-11-02 14:51:57.004037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.168 [2024-11-02 14:51:57.004063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.168 [2024-11-02 14:51:57.004077] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.168 [2024-11-02 14:51:57.004090] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.168 [2024-11-02 14:51:57.004119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.168 qpair failed and we were unable to recover it. 00:36:05.168 [2024-11-02 14:51:57.013840] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.168 [2024-11-02 14:51:57.013963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.168 [2024-11-02 14:51:57.013989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.168 [2024-11-02 14:51:57.014003] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.168 [2024-11-02 14:51:57.014016] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.168 [2024-11-02 14:51:57.014045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.168 qpair failed and we were unable to recover it. 
00:36:05.168 [2024-11-02 14:51:57.023899] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.168 [2024-11-02 14:51:57.024040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.168 [2024-11-02 14:51:57.024066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.168 [2024-11-02 14:51:57.024081] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.168 [2024-11-02 14:51:57.024094] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.168 [2024-11-02 14:51:57.024125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.168 qpair failed and we were unable to recover it. 00:36:05.168 [2024-11-02 14:51:57.033926] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.168 [2024-11-02 14:51:57.034069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.168 [2024-11-02 14:51:57.034098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.168 [2024-11-02 14:51:57.034113] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.168 [2024-11-02 14:51:57.034126] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.168 [2024-11-02 14:51:57.034157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.168 qpair failed and we were unable to recover it. 00:36:05.168 [2024-11-02 14:51:57.043959] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.168 [2024-11-02 14:51:57.044091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.168 [2024-11-02 14:51:57.044118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.168 [2024-11-02 14:51:57.044132] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.168 [2024-11-02 14:51:57.044145] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.168 [2024-11-02 14:51:57.044174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.168 qpair failed and we were unable to recover it. 
00:36:05.168 [2024-11-02 14:51:57.053967] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.168 [2024-11-02 14:51:57.054090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.169 [2024-11-02 14:51:57.054117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.169 [2024-11-02 14:51:57.054131] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.169 [2024-11-02 14:51:57.054144] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.169 [2024-11-02 14:51:57.054185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.169 qpair failed and we were unable to recover it. 00:36:05.169 [2024-11-02 14:51:57.063993] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.169 [2024-11-02 14:51:57.064120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.169 [2024-11-02 14:51:57.064146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.169 [2024-11-02 14:51:57.064160] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.169 [2024-11-02 14:51:57.064180] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.169 [2024-11-02 14:51:57.064210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.169 qpair failed and we were unable to recover it. 00:36:05.169 [2024-11-02 14:51:57.074005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.169 [2024-11-02 14:51:57.074128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.169 [2024-11-02 14:51:57.074154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.169 [2024-11-02 14:51:57.074168] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.169 [2024-11-02 14:51:57.074180] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.169 [2024-11-02 14:51:57.074210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.169 qpair failed and we were unable to recover it. 
00:36:05.169 [2024-11-02 14:51:57.084038] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.169 [2024-11-02 14:51:57.084168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.169 [2024-11-02 14:51:57.084195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.169 [2024-11-02 14:51:57.084209] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.169 [2024-11-02 14:51:57.084222] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.169 [2024-11-02 14:51:57.084251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.169 qpair failed and we were unable to recover it. 00:36:05.169 [2024-11-02 14:51:57.094076] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.169 [2024-11-02 14:51:57.094197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.169 [2024-11-02 14:51:57.094224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.169 [2024-11-02 14:51:57.094238] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.169 [2024-11-02 14:51:57.094251] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.169 [2024-11-02 14:51:57.094297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.169 qpair failed and we were unable to recover it. 00:36:05.169 [2024-11-02 14:51:57.104087] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.169 [2024-11-02 14:51:57.104203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.169 [2024-11-02 14:51:57.104230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.169 [2024-11-02 14:51:57.104243] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.169 [2024-11-02 14:51:57.104264] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.169 [2024-11-02 14:51:57.104296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.169 qpair failed and we were unable to recover it. 
00:36:05.169 [2024-11-02 14:51:57.114136] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.169 [2024-11-02 14:51:57.114268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.169 [2024-11-02 14:51:57.114295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.169 [2024-11-02 14:51:57.114309] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.169 [2024-11-02 14:51:57.114321] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.169 [2024-11-02 14:51:57.114351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.169 qpair failed and we were unable to recover it. 00:36:05.169 [2024-11-02 14:51:57.124160] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.169 [2024-11-02 14:51:57.124323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.169 [2024-11-02 14:51:57.124350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.169 [2024-11-02 14:51:57.124365] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.169 [2024-11-02 14:51:57.124377] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.169 [2024-11-02 14:51:57.124419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.169 qpair failed and we were unable to recover it. 00:36:05.169 [2024-11-02 14:51:57.134180] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.169 [2024-11-02 14:51:57.134306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.169 [2024-11-02 14:51:57.134332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.169 [2024-11-02 14:51:57.134346] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.169 [2024-11-02 14:51:57.134359] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.169 [2024-11-02 14:51:57.134389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.169 qpair failed and we were unable to recover it. 
00:36:05.169 [2024-11-02 14:51:57.144205] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.169 [2024-11-02 14:51:57.144331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.169 [2024-11-02 14:51:57.144358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.169 [2024-11-02 14:51:57.144372] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.169 [2024-11-02 14:51:57.144385] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.169 [2024-11-02 14:51:57.144414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.169 qpair failed and we were unable to recover it. 00:36:05.169 [2024-11-02 14:51:57.154216] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.169 [2024-11-02 14:51:57.154352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.169 [2024-11-02 14:51:57.154378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.169 [2024-11-02 14:51:57.154398] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.169 [2024-11-02 14:51:57.154412] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.169 [2024-11-02 14:51:57.154441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.169 qpair failed and we were unable to recover it. 00:36:05.169 [2024-11-02 14:51:57.164301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.169 [2024-11-02 14:51:57.164430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.169 [2024-11-02 14:51:57.164456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.169 [2024-11-02 14:51:57.164471] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.169 [2024-11-02 14:51:57.164483] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.169 [2024-11-02 14:51:57.164515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.169 qpair failed and we were unable to recover it. 
00:36:05.169 [2024-11-02 14:51:57.174300] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.169 [2024-11-02 14:51:57.174454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.169 [2024-11-02 14:51:57.174480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.169 [2024-11-02 14:51:57.174494] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.169 [2024-11-02 14:51:57.174506] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.169 [2024-11-02 14:51:57.174536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.169 qpair failed and we were unable to recover it. 00:36:05.169 [2024-11-02 14:51:57.184337] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.169 [2024-11-02 14:51:57.184462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.169 [2024-11-02 14:51:57.184489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.169 [2024-11-02 14:51:57.184508] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.169 [2024-11-02 14:51:57.184522] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.170 [2024-11-02 14:51:57.184553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.170 qpair failed and we were unable to recover it. 00:36:05.170 [2024-11-02 14:51:57.194366] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.170 [2024-11-02 14:51:57.194492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.170 [2024-11-02 14:51:57.194518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.170 [2024-11-02 14:51:57.194532] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.170 [2024-11-02 14:51:57.194545] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.170 [2024-11-02 14:51:57.194575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.170 qpair failed and we were unable to recover it. 
00:36:05.170 [2024-11-02 14:51:57.204389] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.170 [2024-11-02 14:51:57.204511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.170 [2024-11-02 14:51:57.204537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.170 [2024-11-02 14:51:57.204551] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.170 [2024-11-02 14:51:57.204564] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.170 [2024-11-02 14:51:57.204594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.170 qpair failed and we were unable to recover it. 00:36:05.170 [2024-11-02 14:51:57.214506] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.170 [2024-11-02 14:51:57.214640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.170 [2024-11-02 14:51:57.214665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.170 [2024-11-02 14:51:57.214679] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.170 [2024-11-02 14:51:57.214692] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.170 [2024-11-02 14:51:57.214722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.170 qpair failed and we were unable to recover it. 00:36:05.429 [2024-11-02 14:51:57.224528] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.429 [2024-11-02 14:51:57.224678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.429 [2024-11-02 14:51:57.224727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.429 [2024-11-02 14:51:57.224749] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.429 [2024-11-02 14:51:57.224765] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.429 [2024-11-02 14:51:57.224810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.429 qpair failed and we were unable to recover it. 
00:36:05.429 [2024-11-02 14:51:57.234476] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.429 [2024-11-02 14:51:57.234612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.429 [2024-11-02 14:51:57.234639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.429 [2024-11-02 14:51:57.234653] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.429 [2024-11-02 14:51:57.234665] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.429 [2024-11-02 14:51:57.234695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.429 qpair failed and we were unable to recover it. 00:36:05.429 [2024-11-02 14:51:57.244502] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.429 [2024-11-02 14:51:57.244624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.429 [2024-11-02 14:51:57.244658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.429 [2024-11-02 14:51:57.244673] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.429 [2024-11-02 14:51:57.244685] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.429 [2024-11-02 14:51:57.244714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.429 qpair failed and we were unable to recover it. 00:36:05.429 [2024-11-02 14:51:57.254652] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.429 [2024-11-02 14:51:57.254777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.429 [2024-11-02 14:51:57.254803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.429 [2024-11-02 14:51:57.254817] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.429 [2024-11-02 14:51:57.254830] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.429 [2024-11-02 14:51:57.254860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.429 qpair failed and we were unable to recover it. 
00:36:05.429 [2024-11-02 14:51:57.264569] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.429 [2024-11-02 14:51:57.264691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.429 [2024-11-02 14:51:57.264716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.429 [2024-11-02 14:51:57.264730] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.429 [2024-11-02 14:51:57.264743] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.429 [2024-11-02 14:51:57.264774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.429 qpair failed and we were unable to recover it. 00:36:05.429 [2024-11-02 14:51:57.274588] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.429 [2024-11-02 14:51:57.274713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.429 [2024-11-02 14:51:57.274740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.429 [2024-11-02 14:51:57.274754] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.429 [2024-11-02 14:51:57.274766] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.429 [2024-11-02 14:51:57.274796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.429 qpair failed and we were unable to recover it. 00:36:05.429 [2024-11-02 14:51:57.284758] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.429 [2024-11-02 14:51:57.284895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.429 [2024-11-02 14:51:57.284921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.429 [2024-11-02 14:51:57.284935] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.429 [2024-11-02 14:51:57.284947] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.429 [2024-11-02 14:51:57.284976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.429 qpair failed and we were unable to recover it. 
00:36:05.429 [2024-11-02 14:51:57.294637] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.429 [2024-11-02 14:51:57.294755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.429 [2024-11-02 14:51:57.294781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.429 [2024-11-02 14:51:57.294794] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.429 [2024-11-02 14:51:57.294807] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.429 [2024-11-02 14:51:57.294836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.429 qpair failed and we were unable to recover it. 00:36:05.429 [2024-11-02 14:51:57.304672] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.429 [2024-11-02 14:51:57.304843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.429 [2024-11-02 14:51:57.304868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.429 [2024-11-02 14:51:57.304882] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.429 [2024-11-02 14:51:57.304895] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.429 [2024-11-02 14:51:57.304924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.429 qpair failed and we were unable to recover it. 00:36:05.429 [2024-11-02 14:51:57.314724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.429 [2024-11-02 14:51:57.314846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.429 [2024-11-02 14:51:57.314872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.429 [2024-11-02 14:51:57.314885] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.429 [2024-11-02 14:51:57.314898] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.429 [2024-11-02 14:51:57.314928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.429 qpair failed and we were unable to recover it. 
00:36:05.429 [2024-11-02 14:51:57.324764] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.429 [2024-11-02 14:51:57.324889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.429 [2024-11-02 14:51:57.324915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.429 [2024-11-02 14:51:57.324928] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.429 [2024-11-02 14:51:57.324941] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.429 [2024-11-02 14:51:57.324971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.429 qpair failed and we were unable to recover it. 00:36:05.429 [2024-11-02 14:51:57.334772] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.429 [2024-11-02 14:51:57.334893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.429 [2024-11-02 14:51:57.334924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.429 [2024-11-02 14:51:57.334939] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.429 [2024-11-02 14:51:57.334951] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.429 [2024-11-02 14:51:57.334982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.429 qpair failed and we were unable to recover it. 00:36:05.429 [2024-11-02 14:51:57.344806] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.429 [2024-11-02 14:51:57.344942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.429 [2024-11-02 14:51:57.344968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.429 [2024-11-02 14:51:57.344982] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.429 [2024-11-02 14:51:57.344994] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.430 [2024-11-02 14:51:57.345023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.430 qpair failed and we were unable to recover it. 
00:36:05.430 [2024-11-02 14:51:57.354818] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.430 [2024-11-02 14:51:57.354942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.430 [2024-11-02 14:51:57.354968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.430 [2024-11-02 14:51:57.354982] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.430 [2024-11-02 14:51:57.354995] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.430 [2024-11-02 14:51:57.355024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.430 qpair failed and we were unable to recover it. 00:36:05.430 [2024-11-02 14:51:57.364855] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.430 [2024-11-02 14:51:57.364986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.430 [2024-11-02 14:51:57.365012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.430 [2024-11-02 14:51:57.365026] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.430 [2024-11-02 14:51:57.365039] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.430 [2024-11-02 14:51:57.365068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.430 qpair failed and we were unable to recover it. 00:36:05.430 [2024-11-02 14:51:57.374957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.430 [2024-11-02 14:51:57.375096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.430 [2024-11-02 14:51:57.375122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.430 [2024-11-02 14:51:57.375135] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.430 [2024-11-02 14:51:57.375148] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.430 [2024-11-02 14:51:57.375184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.430 qpair failed and we were unable to recover it. 
00:36:05.430 [2024-11-02 14:51:57.384979] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.430 [2024-11-02 14:51:57.385102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.430 [2024-11-02 14:51:57.385128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.430 [2024-11-02 14:51:57.385141] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.430 [2024-11-02 14:51:57.385154] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.430 [2024-11-02 14:51:57.385184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.430 qpair failed and we were unable to recover it. 00:36:05.430 [2024-11-02 14:51:57.394924] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.430 [2024-11-02 14:51:57.395057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.430 [2024-11-02 14:51:57.395083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.430 [2024-11-02 14:51:57.395097] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.430 [2024-11-02 14:51:57.395110] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.430 [2024-11-02 14:51:57.395141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.430 qpair failed and we were unable to recover it. 00:36:05.430 [2024-11-02 14:51:57.405003] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.430 [2024-11-02 14:51:57.405137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.430 [2024-11-02 14:51:57.405162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.430 [2024-11-02 14:51:57.405176] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.430 [2024-11-02 14:51:57.405189] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.430 [2024-11-02 14:51:57.405218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.430 qpair failed and we were unable to recover it. 
00:36:05.430 [2024-11-02 14:51:57.414979] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.430 [2024-11-02 14:51:57.415097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.430 [2024-11-02 14:51:57.415123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.430 [2024-11-02 14:51:57.415136] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.430 [2024-11-02 14:51:57.415149] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.430 [2024-11-02 14:51:57.415180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.430 qpair failed and we were unable to recover it. 00:36:05.430 [2024-11-02 14:51:57.425006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.430 [2024-11-02 14:51:57.425123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.430 [2024-11-02 14:51:57.425155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.430 [2024-11-02 14:51:57.425170] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.430 [2024-11-02 14:51:57.425183] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.430 [2024-11-02 14:51:57.425212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.430 qpair failed and we were unable to recover it. 00:36:05.430 [2024-11-02 14:51:57.435052] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.430 [2024-11-02 14:51:57.435216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.430 [2024-11-02 14:51:57.435242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.430 [2024-11-02 14:51:57.435263] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.430 [2024-11-02 14:51:57.435278] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.430 [2024-11-02 14:51:57.435309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.430 qpair failed and we were unable to recover it. 
00:36:05.430 [2024-11-02 14:51:57.445087] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.430 [2024-11-02 14:51:57.445212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.430 [2024-11-02 14:51:57.445238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.430 [2024-11-02 14:51:57.445252] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.430 [2024-11-02 14:51:57.445276] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.430 [2024-11-02 14:51:57.445307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.430 qpair failed and we were unable to recover it. 00:36:05.430 [2024-11-02 14:51:57.455102] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.430 [2024-11-02 14:51:57.455245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.430 [2024-11-02 14:51:57.455279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.430 [2024-11-02 14:51:57.455294] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.430 [2024-11-02 14:51:57.455306] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.430 [2024-11-02 14:51:57.455336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.430 qpair failed and we were unable to recover it. 00:36:05.430 [2024-11-02 14:51:57.465179] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.430 [2024-11-02 14:51:57.465335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.430 [2024-11-02 14:51:57.465361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.430 [2024-11-02 14:51:57.465375] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.430 [2024-11-02 14:51:57.465394] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.430 [2024-11-02 14:51:57.465425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.430 qpair failed and we were unable to recover it. 
00:36:05.430 [2024-11-02 14:51:57.475153] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.430 [2024-11-02 14:51:57.475319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.430 [2024-11-02 14:51:57.475345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.430 [2024-11-02 14:51:57.475359] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.430 [2024-11-02 14:51:57.475372] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.430 [2024-11-02 14:51:57.475415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.430 qpair failed and we were unable to recover it. 00:36:05.690 [2024-11-02 14:51:57.485207] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.690 [2024-11-02 14:51:57.485355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.690 [2024-11-02 14:51:57.485381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.690 [2024-11-02 14:51:57.485395] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.690 [2024-11-02 14:51:57.485407] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.690 [2024-11-02 14:51:57.485450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.690 qpair failed and we were unable to recover it. 00:36:05.690 [2024-11-02 14:51:57.495208] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.690 [2024-11-02 14:51:57.495348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.690 [2024-11-02 14:51:57.495373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.690 [2024-11-02 14:51:57.495387] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.690 [2024-11-02 14:51:57.495400] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.690 [2024-11-02 14:51:57.495430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.690 qpair failed and we were unable to recover it. 
00:36:05.690 [2024-11-02 14:51:57.505272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.690 [2024-11-02 14:51:57.505396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.690 [2024-11-02 14:51:57.505422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.690 [2024-11-02 14:51:57.505436] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.690 [2024-11-02 14:51:57.505448] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.690 [2024-11-02 14:51:57.505478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.690 qpair failed and we were unable to recover it. 00:36:05.690 [2024-11-02 14:51:57.515264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.690 [2024-11-02 14:51:57.515418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.690 [2024-11-02 14:51:57.515446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.690 [2024-11-02 14:51:57.515461] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.690 [2024-11-02 14:51:57.515473] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.690 [2024-11-02 14:51:57.515505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.690 qpair failed and we were unable to recover it. 00:36:05.690 [2024-11-02 14:51:57.525347] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.690 [2024-11-02 14:51:57.525478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.690 [2024-11-02 14:51:57.525505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.690 [2024-11-02 14:51:57.525519] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.690 [2024-11-02 14:51:57.525531] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.690 [2024-11-02 14:51:57.525561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.690 qpair failed and we were unable to recover it. 
00:36:05.690 [2024-11-02 14:51:57.535398] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.690 [2024-11-02 14:51:57.535522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.690 [2024-11-02 14:51:57.535547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.690 [2024-11-02 14:51:57.535560] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.690 [2024-11-02 14:51:57.535573] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.690 [2024-11-02 14:51:57.535603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.690 qpair failed and we were unable to recover it. 00:36:05.690 [2024-11-02 14:51:57.545355] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.690 [2024-11-02 14:51:57.545469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.690 [2024-11-02 14:51:57.545495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.690 [2024-11-02 14:51:57.545508] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.690 [2024-11-02 14:51:57.545520] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.690 [2024-11-02 14:51:57.545549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.690 qpair failed and we were unable to recover it. 00:36:05.690 [2024-11-02 14:51:57.555485] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.690 [2024-11-02 14:51:57.555623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.690 [2024-11-02 14:51:57.555648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.690 [2024-11-02 14:51:57.555662] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.690 [2024-11-02 14:51:57.555681] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.690 [2024-11-02 14:51:57.555711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.690 qpair failed and we were unable to recover it. 
00:36:05.690 [2024-11-02 14:51:57.565417] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.690 [2024-11-02 14:51:57.565548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.690 [2024-11-02 14:51:57.565576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.690 [2024-11-02 14:51:57.565593] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.690 [2024-11-02 14:51:57.565607] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.690 [2024-11-02 14:51:57.565637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.690 qpair failed and we were unable to recover it. 00:36:05.690 [2024-11-02 14:51:57.575424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.690 [2024-11-02 14:51:57.575547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.690 [2024-11-02 14:51:57.575574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.690 [2024-11-02 14:51:57.575588] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.690 [2024-11-02 14:51:57.575601] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.690 [2024-11-02 14:51:57.575642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.690 qpair failed and we were unable to recover it. 00:36:05.690 [2024-11-02 14:51:57.585464] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.690 [2024-11-02 14:51:57.585584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.691 [2024-11-02 14:51:57.585609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.691 [2024-11-02 14:51:57.585622] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.691 [2024-11-02 14:51:57.585634] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.691 [2024-11-02 14:51:57.585663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.691 qpair failed and we were unable to recover it. 
00:36:05.691 [2024-11-02 14:51:57.595470] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.691 [2024-11-02 14:51:57.595593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.691 [2024-11-02 14:51:57.595619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.691 [2024-11-02 14:51:57.595633] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.691 [2024-11-02 14:51:57.595645] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.691 [2024-11-02 14:51:57.595674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.691 qpair failed and we were unable to recover it. 00:36:05.691 [2024-11-02 14:51:57.605541] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.691 [2024-11-02 14:51:57.605677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.691 [2024-11-02 14:51:57.605704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.691 [2024-11-02 14:51:57.605723] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.691 [2024-11-02 14:51:57.605738] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.691 [2024-11-02 14:51:57.605768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.691 qpair failed and we were unable to recover it. 00:36:05.691 [2024-11-02 14:51:57.615559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.691 [2024-11-02 14:51:57.615688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.691 [2024-11-02 14:51:57.615715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.691 [2024-11-02 14:51:57.615729] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.691 [2024-11-02 14:51:57.615742] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.691 [2024-11-02 14:51:57.615773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.691 qpair failed and we were unable to recover it. 
00:36:05.691 [2024-11-02 14:51:57.625589] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.691 [2024-11-02 14:51:57.625709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.691 [2024-11-02 14:51:57.625735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.691 [2024-11-02 14:51:57.625749] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.691 [2024-11-02 14:51:57.625761] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.691 [2024-11-02 14:51:57.625804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.691 qpair failed and we were unable to recover it. 00:36:05.691 [2024-11-02 14:51:57.635673] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.691 [2024-11-02 14:51:57.635796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.691 [2024-11-02 14:51:57.635823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.691 [2024-11-02 14:51:57.635836] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.691 [2024-11-02 14:51:57.635849] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.691 [2024-11-02 14:51:57.635878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.691 qpair failed and we were unable to recover it. 00:36:05.691 [2024-11-02 14:51:57.645632] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.691 [2024-11-02 14:51:57.645755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.691 [2024-11-02 14:51:57.645781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.691 [2024-11-02 14:51:57.645801] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.691 [2024-11-02 14:51:57.645814] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.691 [2024-11-02 14:51:57.645844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.691 qpair failed and we were unable to recover it. 
00:36:05.691 [2024-11-02 14:51:57.655699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.691 [2024-11-02 14:51:57.655867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.691 [2024-11-02 14:51:57.655892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.691 [2024-11-02 14:51:57.655906] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.691 [2024-11-02 14:51:57.655919] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.691 [2024-11-02 14:51:57.655948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.691 qpair failed and we were unable to recover it. 00:36:05.691 [2024-11-02 14:51:57.665720] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.691 [2024-11-02 14:51:57.665845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.691 [2024-11-02 14:51:57.665870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.691 [2024-11-02 14:51:57.665884] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.691 [2024-11-02 14:51:57.665898] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.691 [2024-11-02 14:51:57.665928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.691 qpair failed and we were unable to recover it. 00:36:05.691 [2024-11-02 14:51:57.675719] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.691 [2024-11-02 14:51:57.675850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.691 [2024-11-02 14:51:57.675876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.691 [2024-11-02 14:51:57.675890] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.691 [2024-11-02 14:51:57.675903] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.691 [2024-11-02 14:51:57.675931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.691 qpair failed and we were unable to recover it. 
00:36:05.691 [2024-11-02 14:51:57.685802] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.691 [2024-11-02 14:51:57.685935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.691 [2024-11-02 14:51:57.685961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.691 [2024-11-02 14:51:57.685974] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.691 [2024-11-02 14:51:57.685987] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.691 [2024-11-02 14:51:57.686017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.691 qpair failed and we were unable to recover it. 00:36:05.691 [2024-11-02 14:51:57.695778] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.691 [2024-11-02 14:51:57.695914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.691 [2024-11-02 14:51:57.695940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.691 [2024-11-02 14:51:57.695954] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.691 [2024-11-02 14:51:57.695966] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.691 [2024-11-02 14:51:57.695995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.691 qpair failed and we were unable to recover it. 00:36:05.691 [2024-11-02 14:51:57.705816] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.691 [2024-11-02 14:51:57.705950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.691 [2024-11-02 14:51:57.705975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.691 [2024-11-02 14:51:57.705989] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.691 [2024-11-02 14:51:57.706001] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.691 [2024-11-02 14:51:57.706031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.691 qpair failed and we were unable to recover it. 
00:36:05.691 [2024-11-02 14:51:57.715840] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.691 [2024-11-02 14:51:57.715987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.691 [2024-11-02 14:51:57.716013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.691 [2024-11-02 14:51:57.716026] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.691 [2024-11-02 14:51:57.716039] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.692 [2024-11-02 14:51:57.716069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.692 qpair failed and we were unable to recover it. 00:36:05.692 [2024-11-02 14:51:57.725876] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.692 [2024-11-02 14:51:57.726043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.692 [2024-11-02 14:51:57.726069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.692 [2024-11-02 14:51:57.726082] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.692 [2024-11-02 14:51:57.726093] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.692 [2024-11-02 14:51:57.726124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.692 qpair failed and we were unable to recover it. 00:36:05.692 [2024-11-02 14:51:57.735899] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.692 [2024-11-02 14:51:57.736018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.692 [2024-11-02 14:51:57.736045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.692 [2024-11-02 14:51:57.736065] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.692 [2024-11-02 14:51:57.736079] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.692 [2024-11-02 14:51:57.736121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.692 qpair failed and we were unable to recover it. 
00:36:05.951 [2024-11-02 14:51:57.745932] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.951 [2024-11-02 14:51:57.746103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.951 [2024-11-02 14:51:57.746129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.951 [2024-11-02 14:51:57.746144] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.951 [2024-11-02 14:51:57.746157] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.951 [2024-11-02 14:51:57.746188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.951 qpair failed and we were unable to recover it. 00:36:05.951 [2024-11-02 14:51:57.755944] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.951 [2024-11-02 14:51:57.756072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.951 [2024-11-02 14:51:57.756098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.951 [2024-11-02 14:51:57.756113] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.951 [2024-11-02 14:51:57.756126] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.951 [2024-11-02 14:51:57.756155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.951 qpair failed and we were unable to recover it. 00:36:05.951 [2024-11-02 14:51:57.766007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.951 [2024-11-02 14:51:57.766140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.951 [2024-11-02 14:51:57.766165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.951 [2024-11-02 14:51:57.766179] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.951 [2024-11-02 14:51:57.766192] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.951 [2024-11-02 14:51:57.766224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.951 qpair failed and we were unable to recover it. 
00:36:05.951 [2024-11-02 14:51:57.776008] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.951 [2024-11-02 14:51:57.776177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.951 [2024-11-02 14:51:57.776203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.951 [2024-11-02 14:51:57.776217] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.951 [2024-11-02 14:51:57.776229] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.951 [2024-11-02 14:51:57.776273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.951 qpair failed and we were unable to recover it. 00:36:05.951 [2024-11-02 14:51:57.786043] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.951 [2024-11-02 14:51:57.786164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.951 [2024-11-02 14:51:57.786190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.951 [2024-11-02 14:51:57.786204] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.951 [2024-11-02 14:51:57.786216] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.951 [2024-11-02 14:51:57.786246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.951 qpair failed and we were unable to recover it. 00:36:05.951 [2024-11-02 14:51:57.796032] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.951 [2024-11-02 14:51:57.796197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.951 [2024-11-02 14:51:57.796223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.951 [2024-11-02 14:51:57.796237] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.951 [2024-11-02 14:51:57.796249] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.951 [2024-11-02 14:51:57.796290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.951 qpair failed and we were unable to recover it. 
00:36:05.951 [2024-11-02 14:51:57.806087] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.951 [2024-11-02 14:51:57.806216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.951 [2024-11-02 14:51:57.806241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.951 [2024-11-02 14:51:57.806264] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.951 [2024-11-02 14:51:57.806281] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.951 [2024-11-02 14:51:57.806323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.951 qpair failed and we were unable to recover it. 00:36:05.951 [2024-11-02 14:51:57.816103] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.951 [2024-11-02 14:51:57.816277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.951 [2024-11-02 14:51:57.816303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.951 [2024-11-02 14:51:57.816317] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.951 [2024-11-02 14:51:57.816330] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.951 [2024-11-02 14:51:57.816372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.951 qpair failed and we were unable to recover it. 00:36:05.951 [2024-11-02 14:51:57.826198] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.951 [2024-11-02 14:51:57.826322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.951 [2024-11-02 14:51:57.826354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.951 [2024-11-02 14:51:57.826369] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.951 [2024-11-02 14:51:57.826382] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.951 [2024-11-02 14:51:57.826412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.951 qpair failed and we were unable to recover it. 
00:36:05.951 [2024-11-02 14:51:57.836135] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.951 [2024-11-02 14:51:57.836320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.951 [2024-11-02 14:51:57.836347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.951 [2024-11-02 14:51:57.836360] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.951 [2024-11-02 14:51:57.836372] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.951 [2024-11-02 14:51:57.836402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.951 qpair failed and we were unable to recover it. 00:36:05.951 [2024-11-02 14:51:57.846189] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.951 [2024-11-02 14:51:57.846331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.951 [2024-11-02 14:51:57.846357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.951 [2024-11-02 14:51:57.846371] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.951 [2024-11-02 14:51:57.846385] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.952 [2024-11-02 14:51:57.846415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.952 qpair failed and we were unable to recover it. 00:36:05.952 [2024-11-02 14:51:57.856210] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.952 [2024-11-02 14:51:57.856379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.952 [2024-11-02 14:51:57.856405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.952 [2024-11-02 14:51:57.856420] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.952 [2024-11-02 14:51:57.856432] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.952 [2024-11-02 14:51:57.856462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.952 qpair failed and we were unable to recover it. 
00:36:05.952 [2024-11-02 14:51:57.866254] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.952 [2024-11-02 14:51:57.866399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.952 [2024-11-02 14:51:57.866425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.952 [2024-11-02 14:51:57.866442] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.952 [2024-11-02 14:51:57.866456] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.952 [2024-11-02 14:51:57.866492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.952 qpair failed and we were unable to recover it. 00:36:05.952 [2024-11-02 14:51:57.876253] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.952 [2024-11-02 14:51:57.876376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.952 [2024-11-02 14:51:57.876402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.952 [2024-11-02 14:51:57.876416] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.952 [2024-11-02 14:51:57.876429] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.952 [2024-11-02 14:51:57.876459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.952 qpair failed and we were unable to recover it. 00:36:05.952 [2024-11-02 14:51:57.886329] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.952 [2024-11-02 14:51:57.886454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.952 [2024-11-02 14:51:57.886481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.952 [2024-11-02 14:51:57.886495] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.952 [2024-11-02 14:51:57.886508] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.952 [2024-11-02 14:51:57.886552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.952 qpair failed and we were unable to recover it. 
00:36:05.952 [2024-11-02 14:51:57.896353] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.952 [2024-11-02 14:51:57.896479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.952 [2024-11-02 14:51:57.896505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.952 [2024-11-02 14:51:57.896520] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.952 [2024-11-02 14:51:57.896532] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.952 [2024-11-02 14:51:57.896561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.952 qpair failed and we were unable to recover it. 00:36:05.952 [2024-11-02 14:51:57.906359] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.952 [2024-11-02 14:51:57.906486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.952 [2024-11-02 14:51:57.906512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.952 [2024-11-02 14:51:57.906526] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.952 [2024-11-02 14:51:57.906538] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.952 [2024-11-02 14:51:57.906568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.952 qpair failed and we were unable to recover it. 00:36:05.952 [2024-11-02 14:51:57.916394] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.952 [2024-11-02 14:51:57.916520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.952 [2024-11-02 14:51:57.916551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.952 [2024-11-02 14:51:57.916565] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.952 [2024-11-02 14:51:57.916578] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.952 [2024-11-02 14:51:57.916606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.952 qpair failed and we were unable to recover it. 
00:36:05.952 [2024-11-02 14:51:57.926468] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.952 [2024-11-02 14:51:57.926599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.952 [2024-11-02 14:51:57.926624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.952 [2024-11-02 14:51:57.926638] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.952 [2024-11-02 14:51:57.926658] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.952 [2024-11-02 14:51:57.926690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.952 qpair failed and we were unable to recover it. 00:36:05.952 [2024-11-02 14:51:57.936442] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.952 [2024-11-02 14:51:57.936577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.952 [2024-11-02 14:51:57.936603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.952 [2024-11-02 14:51:57.936617] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.952 [2024-11-02 14:51:57.936631] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.952 [2024-11-02 14:51:57.936672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.952 qpair failed and we were unable to recover it. 00:36:05.952 [2024-11-02 14:51:57.946466] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.952 [2024-11-02 14:51:57.946587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.952 [2024-11-02 14:51:57.946613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.952 [2024-11-02 14:51:57.946638] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.952 [2024-11-02 14:51:57.946650] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.952 [2024-11-02 14:51:57.946679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.952 qpair failed and we were unable to recover it. 
00:36:05.952 [2024-11-02 14:51:57.956502] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.952 [2024-11-02 14:51:57.956623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.952 [2024-11-02 14:51:57.956650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.952 [2024-11-02 14:51:57.956664] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.952 [2024-11-02 14:51:57.956676] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.952 [2024-11-02 14:51:57.956714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.952 qpair failed and we were unable to recover it. 00:36:05.952 [2024-11-02 14:51:57.966566] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.952 [2024-11-02 14:51:57.966727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.952 [2024-11-02 14:51:57.966752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.952 [2024-11-02 14:51:57.966766] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.952 [2024-11-02 14:51:57.966778] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.952 [2024-11-02 14:51:57.966808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.952 qpair failed and we were unable to recover it. 00:36:05.952 [2024-11-02 14:51:57.976535] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.952 [2024-11-02 14:51:57.976676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.952 [2024-11-02 14:51:57.976702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.952 [2024-11-02 14:51:57.976716] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.952 [2024-11-02 14:51:57.976728] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.952 [2024-11-02 14:51:57.976757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.952 qpair failed and we were unable to recover it. 
00:36:05.952 [2024-11-02 14:51:57.986611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.953 [2024-11-02 14:51:57.986734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.953 [2024-11-02 14:51:57.986760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.953 [2024-11-02 14:51:57.986774] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.953 [2024-11-02 14:51:57.986787] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.953 [2024-11-02 14:51:57.986817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.953 qpair failed and we were unable to recover it. 00:36:05.953 [2024-11-02 14:51:57.996640] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.953 [2024-11-02 14:51:57.996771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.953 [2024-11-02 14:51:57.996797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.953 [2024-11-02 14:51:57.996810] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.953 [2024-11-02 14:51:57.996823] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:05.953 [2024-11-02 14:51:57.996852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.953 qpair failed and we were unable to recover it. 00:36:06.211 [2024-11-02 14:51:58.006666] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.212 [2024-11-02 14:51:58.006845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.212 [2024-11-02 14:51:58.006871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.212 [2024-11-02 14:51:58.006885] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.212 [2024-11-02 14:51:58.006898] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:06.212 [2024-11-02 14:51:58.006929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.212 qpair failed and we were unable to recover it. 
00:36:06.212 [2024-11-02 14:51:58.016686] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.212 [2024-11-02 14:51:58.016818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.212 [2024-11-02 14:51:58.016852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.212 [2024-11-02 14:51:58.016866] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.212 [2024-11-02 14:51:58.016879] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:06.212 [2024-11-02 14:51:58.016930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.212 qpair failed and we were unable to recover it. 00:36:06.212 [2024-11-02 14:51:58.026707] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.212 [2024-11-02 14:51:58.026832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.212 [2024-11-02 14:51:58.026858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.212 [2024-11-02 14:51:58.026871] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.212 [2024-11-02 14:51:58.026883] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:06.212 [2024-11-02 14:51:58.026912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.212 qpair failed and we were unable to recover it. 00:36:06.212 [2024-11-02 14:51:58.036778] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.212 [2024-11-02 14:51:58.036926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.212 [2024-11-02 14:51:58.036953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.212 [2024-11-02 14:51:58.036966] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.212 [2024-11-02 14:51:58.036979] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:06.212 [2024-11-02 14:51:58.037010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.212 qpair failed and we were unable to recover it. 
00:36:06.212 [2024-11-02 14:51:58.046767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.212 [2024-11-02 14:51:58.046897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.212 [2024-11-02 14:51:58.046923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.212 [2024-11-02 14:51:58.046937] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.212 [2024-11-02 14:51:58.046956] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:06.212 [2024-11-02 14:51:58.046998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.212 qpair failed and we were unable to recover it. 00:36:06.212 [2024-11-02 14:51:58.056803] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.212 [2024-11-02 14:51:58.056972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.212 [2024-11-02 14:51:58.056999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.212 [2024-11-02 14:51:58.057018] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.212 [2024-11-02 14:51:58.057031] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:06.212 [2024-11-02 14:51:58.057060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.212 qpair failed and we were unable to recover it. 00:36:06.212 [2024-11-02 14:51:58.066832] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.212 [2024-11-02 14:51:58.066951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.212 [2024-11-02 14:51:58.066977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.212 [2024-11-02 14:51:58.066991] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.212 [2024-11-02 14:51:58.067004] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:06.212 [2024-11-02 14:51:58.067045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.212 qpair failed and we were unable to recover it. 
00:36:06.212 [2024-11-02 14:51:58.076842] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.212 [2024-11-02 14:51:58.076982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.212 [2024-11-02 14:51:58.077007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.212 [2024-11-02 14:51:58.077022] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.212 [2024-11-02 14:51:58.077034] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90
00:36:06.212 [2024-11-02 14:51:58.077064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:36:06.212 qpair failed and we were unable to recover it.
00:36:06.212 [... the same six-line CONNECT failure sequence repeats for 68 further attempts between 14:51:58.086925 and 14:51:58.759042 (log timestamps 00:36:06.212 through 00:36:06.735), always against tqpair=0x7f54bc000b90 / qpair id 4 on traddr 10.0.0.2 trsvcid 4420, subnqn nqn.2016-06.io.spdk:cnode1, and every attempt ends with "qpair failed and we were unable to recover it." ...]
00:36:06.735 [2024-11-02 14:51:58.768860] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.735 [2024-11-02 14:51:58.769041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.735 [2024-11-02 14:51:58.769067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.735 [2024-11-02 14:51:58.769081] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.735 [2024-11-02 14:51:58.769093] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:06.735 [2024-11-02 14:51:58.769123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.735 qpair failed and we were unable to recover it. 00:36:06.735 [2024-11-02 14:51:58.779061] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.735 [2024-11-02 14:51:58.779206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.735 [2024-11-02 14:51:58.779231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.735 [2024-11-02 14:51:58.779245] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.735 [2024-11-02 14:51:58.779265] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:06.735 [2024-11-02 14:51:58.779297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.735 qpair failed and we were unable to recover it. 00:36:06.994 [2024-11-02 14:51:58.788922] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.994 [2024-11-02 14:51:58.789095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.994 [2024-11-02 14:51:58.789121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.994 [2024-11-02 14:51:58.789135] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.994 [2024-11-02 14:51:58.789147] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:06.994 [2024-11-02 14:51:58.789176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.994 qpair failed and we were unable to recover it. 
00:36:06.994 [2024-11-02 14:51:58.798908] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.994 [2024-11-02 14:51:58.799033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.994 [2024-11-02 14:51:58.799059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.994 [2024-11-02 14:51:58.799073] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.994 [2024-11-02 14:51:58.799086] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:06.994 [2024-11-02 14:51:58.799114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.994 qpair failed and we were unable to recover it. 00:36:06.994 [2024-11-02 14:51:58.808995] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.994 [2024-11-02 14:51:58.809164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.994 [2024-11-02 14:51:58.809195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.994 [2024-11-02 14:51:58.809209] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.994 [2024-11-02 14:51:58.809222] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:06.994 [2024-11-02 14:51:58.809252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.994 qpair failed and we were unable to recover it. 00:36:06.994 [2024-11-02 14:51:58.818975] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.994 [2024-11-02 14:51:58.819103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.994 [2024-11-02 14:51:58.819129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.994 [2024-11-02 14:51:58.819143] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.994 [2024-11-02 14:51:58.819155] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:06.994 [2024-11-02 14:51:58.819185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.994 qpair failed and we were unable to recover it. 
00:36:06.994 [2024-11-02 14:51:58.828966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.994 [2024-11-02 14:51:58.829086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.994 [2024-11-02 14:51:58.829112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.994 [2024-11-02 14:51:58.829125] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.994 [2024-11-02 14:51:58.829138] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:06.994 [2024-11-02 14:51:58.829168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.994 qpair failed and we were unable to recover it. 00:36:06.994 [2024-11-02 14:51:58.838997] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.994 [2024-11-02 14:51:58.839118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.994 [2024-11-02 14:51:58.839144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.994 [2024-11-02 14:51:58.839158] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.994 [2024-11-02 14:51:58.839171] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:06.994 [2024-11-02 14:51:58.839200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.994 qpair failed and we were unable to recover it. 00:36:06.994 [2024-11-02 14:51:58.849048] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.994 [2024-11-02 14:51:58.849218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.994 [2024-11-02 14:51:58.849244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.994 [2024-11-02 14:51:58.849266] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.994 [2024-11-02 14:51:58.849284] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:06.994 [2024-11-02 14:51:58.849314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.994 qpair failed and we were unable to recover it. 
00:36:06.994 [2024-11-02 14:51:58.859047] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.994 [2024-11-02 14:51:58.859191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.994 [2024-11-02 14:51:58.859219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.994 [2024-11-02 14:51:58.859233] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.994 [2024-11-02 14:51:58.859249] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:06.994 [2024-11-02 14:51:58.859294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.994 qpair failed and we were unable to recover it. 00:36:06.994 [2024-11-02 14:51:58.869158] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.994 [2024-11-02 14:51:58.869287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.994 [2024-11-02 14:51:58.869314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.994 [2024-11-02 14:51:58.869328] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.994 [2024-11-02 14:51:58.869340] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:06.994 [2024-11-02 14:51:58.869371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.995 qpair failed and we were unable to recover it. 00:36:06.995 [2024-11-02 14:51:58.879111] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.995 [2024-11-02 14:51:58.879239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.995 [2024-11-02 14:51:58.879278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.995 [2024-11-02 14:51:58.879294] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.995 [2024-11-02 14:51:58.879307] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:06.995 [2024-11-02 14:51:58.879339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.995 qpair failed and we were unable to recover it. 
00:36:06.995 [2024-11-02 14:51:58.889144] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.995 [2024-11-02 14:51:58.889280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.995 [2024-11-02 14:51:58.889307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.995 [2024-11-02 14:51:58.889321] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.995 [2024-11-02 14:51:58.889333] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:06.995 [2024-11-02 14:51:58.889363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.995 qpair failed and we were unable to recover it. 00:36:06.995 [2024-11-02 14:51:58.899191] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.995 [2024-11-02 14:51:58.899329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.995 [2024-11-02 14:51:58.899361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.995 [2024-11-02 14:51:58.899377] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.995 [2024-11-02 14:51:58.899390] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:06.995 [2024-11-02 14:51:58.899419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.995 qpair failed and we were unable to recover it. 00:36:06.995 [2024-11-02 14:51:58.909175] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.995 [2024-11-02 14:51:58.909302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.995 [2024-11-02 14:51:58.909328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.995 [2024-11-02 14:51:58.909341] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.995 [2024-11-02 14:51:58.909353] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:06.995 [2024-11-02 14:51:58.909383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.995 qpair failed and we were unable to recover it. 
00:36:06.995 [2024-11-02 14:51:58.919310] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.995 [2024-11-02 14:51:58.919432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.995 [2024-11-02 14:51:58.919457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.995 [2024-11-02 14:51:58.919471] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.995 [2024-11-02 14:51:58.919483] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:06.995 [2024-11-02 14:51:58.919513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.995 qpair failed and we were unable to recover it. 00:36:06.995 [2024-11-02 14:51:58.929326] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.995 [2024-11-02 14:51:58.929475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.995 [2024-11-02 14:51:58.929501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.995 [2024-11-02 14:51:58.929514] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.995 [2024-11-02 14:51:58.929527] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:06.995 [2024-11-02 14:51:58.929557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.995 qpair failed and we were unable to recover it. 00:36:06.995 [2024-11-02 14:51:58.939292] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.995 [2024-11-02 14:51:58.939422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.995 [2024-11-02 14:51:58.939448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.995 [2024-11-02 14:51:58.939462] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.995 [2024-11-02 14:51:58.939475] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:06.995 [2024-11-02 14:51:58.939511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.995 qpair failed and we were unable to recover it. 
00:36:06.995 [2024-11-02 14:51:58.949297] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.995 [2024-11-02 14:51:58.949458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.995 [2024-11-02 14:51:58.949485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.995 [2024-11-02 14:51:58.949499] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.995 [2024-11-02 14:51:58.949512] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:06.995 [2024-11-02 14:51:58.949543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.995 qpair failed and we were unable to recover it. 00:36:06.995 [2024-11-02 14:51:58.959374] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.995 [2024-11-02 14:51:58.959535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.995 [2024-11-02 14:51:58.959562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.995 [2024-11-02 14:51:58.959576] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.995 [2024-11-02 14:51:58.959589] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:06.995 [2024-11-02 14:51:58.959620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.995 qpair failed and we were unable to recover it. 00:36:06.995 [2024-11-02 14:51:58.969445] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.995 [2024-11-02 14:51:58.969567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.995 [2024-11-02 14:51:58.969592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.995 [2024-11-02 14:51:58.969606] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.995 [2024-11-02 14:51:58.969620] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:06.995 [2024-11-02 14:51:58.969649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.995 qpair failed and we were unable to recover it. 
00:36:06.995 [2024-11-02 14:51:58.979439] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.995 [2024-11-02 14:51:58.979579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.995 [2024-11-02 14:51:58.979605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.995 [2024-11-02 14:51:58.979618] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.995 [2024-11-02 14:51:58.979631] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:06.995 [2024-11-02 14:51:58.979660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.995 qpair failed and we were unable to recover it. 00:36:06.995 [2024-11-02 14:51:58.989390] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.995 [2024-11-02 14:51:58.989515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.995 [2024-11-02 14:51:58.989557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.995 [2024-11-02 14:51:58.989572] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.995 [2024-11-02 14:51:58.989585] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:06.995 [2024-11-02 14:51:58.989616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.995 qpair failed and we were unable to recover it. 00:36:06.995 [2024-11-02 14:51:58.999436] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.995 [2024-11-02 14:51:58.999558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.995 [2024-11-02 14:51:58.999585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.995 [2024-11-02 14:51:58.999604] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.995 [2024-11-02 14:51:58.999619] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:06.995 [2024-11-02 14:51:58.999649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.995 qpair failed and we were unable to recover it. 
00:36:06.995 [2024-11-02 14:51:59.009512] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.996 [2024-11-02 14:51:59.009642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.996 [2024-11-02 14:51:59.009668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.996 [2024-11-02 14:51:59.009681] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.996 [2024-11-02 14:51:59.009694] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:06.996 [2024-11-02 14:51:59.009724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.996 qpair failed and we were unable to recover it. 00:36:06.996 [2024-11-02 14:51:59.019511] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.996 [2024-11-02 14:51:59.019652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.996 [2024-11-02 14:51:59.019679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.996 [2024-11-02 14:51:59.019693] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.996 [2024-11-02 14:51:59.019705] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:06.996 [2024-11-02 14:51:59.019735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.996 qpair failed and we were unable to recover it. 00:36:06.996 [2024-11-02 14:51:59.029584] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.996 [2024-11-02 14:51:59.029706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.996 [2024-11-02 14:51:59.029731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.996 [2024-11-02 14:51:59.029745] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.996 [2024-11-02 14:51:59.029764] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:06.996 [2024-11-02 14:51:59.029793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.996 qpair failed and we were unable to recover it. 
00:36:06.996 [2024-11-02 14:51:59.039578] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.996 [2024-11-02 14:51:59.039716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.996 [2024-11-02 14:51:59.039742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.996 [2024-11-02 14:51:59.039756] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.996 [2024-11-02 14:51:59.039769] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:06.996 [2024-11-02 14:51:59.039798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.996 qpair failed and we were unable to recover it. 00:36:07.254 [2024-11-02 14:51:59.049608] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.254 [2024-11-02 14:51:59.049774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.254 [2024-11-02 14:51:59.049801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.254 [2024-11-02 14:51:59.049815] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.254 [2024-11-02 14:51:59.049832] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.254 [2024-11-02 14:51:59.049865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.254 qpair failed and we were unable to recover it. 00:36:07.255 [2024-11-02 14:51:59.059649] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.255 [2024-11-02 14:51:59.059775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.255 [2024-11-02 14:51:59.059802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.255 [2024-11-02 14:51:59.059816] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.255 [2024-11-02 14:51:59.059829] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.255 [2024-11-02 14:51:59.059859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.255 qpair failed and we were unable to recover it. 
00:36:07.255 [2024-11-02 14:51:59.069631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.255 [2024-11-02 14:51:59.069757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.255 [2024-11-02 14:51:59.069783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.255 [2024-11-02 14:51:59.069796] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.255 [2024-11-02 14:51:59.069809] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.255 [2024-11-02 14:51:59.069839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.255 qpair failed and we were unable to recover it. 00:36:07.255 [2024-11-02 14:51:59.079669] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.255 [2024-11-02 14:51:59.079804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.255 [2024-11-02 14:51:59.079830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.255 [2024-11-02 14:51:59.079844] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.255 [2024-11-02 14:51:59.079857] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.255 [2024-11-02 14:51:59.079887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.255 qpair failed and we were unable to recover it. 00:36:07.255 [2024-11-02 14:51:59.089672] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.255 [2024-11-02 14:51:59.089804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.255 [2024-11-02 14:51:59.089829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.255 [2024-11-02 14:51:59.089843] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.255 [2024-11-02 14:51:59.089855] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.255 [2024-11-02 14:51:59.089884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.255 qpair failed and we were unable to recover it. 
00:36:07.255 [2024-11-02 14:51:59.099765] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.255 [2024-11-02 14:51:59.099888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.255 [2024-11-02 14:51:59.099914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.255 [2024-11-02 14:51:59.099929] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.255 [2024-11-02 14:51:59.099942] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.255 [2024-11-02 14:51:59.099970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.255 qpair failed and we were unable to recover it. 00:36:07.255 [2024-11-02 14:51:59.109732] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.255 [2024-11-02 14:51:59.109861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.255 [2024-11-02 14:51:59.109887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.255 [2024-11-02 14:51:59.109901] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.255 [2024-11-02 14:51:59.109914] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.255 [2024-11-02 14:51:59.109942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.255 qpair failed and we were unable to recover it. 00:36:07.255 [2024-11-02 14:51:59.119748] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.255 [2024-11-02 14:51:59.119867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.255 [2024-11-02 14:51:59.119893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.255 [2024-11-02 14:51:59.119907] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.255 [2024-11-02 14:51:59.119925] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.255 [2024-11-02 14:51:59.119957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.255 qpair failed and we were unable to recover it. 
00:36:07.255 [2024-11-02 14:51:59.129883] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.255 [2024-11-02 14:51:59.130017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.255 [2024-11-02 14:51:59.130043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.255 [2024-11-02 14:51:59.130057] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.255 [2024-11-02 14:51:59.130070] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.255 [2024-11-02 14:51:59.130100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.255 qpair failed and we were unable to recover it. 00:36:07.255 [2024-11-02 14:51:59.139810] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.255 [2024-11-02 14:51:59.139936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.255 [2024-11-02 14:51:59.139965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.255 [2024-11-02 14:51:59.139979] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.255 [2024-11-02 14:51:59.139991] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.255 [2024-11-02 14:51:59.140020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.255 qpair failed and we were unable to recover it. 00:36:07.255 [2024-11-02 14:51:59.149895] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.255 [2024-11-02 14:51:59.150065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.255 [2024-11-02 14:51:59.150090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.255 [2024-11-02 14:51:59.150104] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.255 [2024-11-02 14:51:59.150117] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.255 [2024-11-02 14:51:59.150146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.255 qpair failed and we were unable to recover it. 
00:36:07.255 [2024-11-02 14:51:59.159874] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.255 [2024-11-02 14:51:59.159995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.255 [2024-11-02 14:51:59.160020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.255 [2024-11-02 14:51:59.160034] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.255 [2024-11-02 14:51:59.160047] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.255 [2024-11-02 14:51:59.160077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.255 qpair failed and we were unable to recover it. 00:36:07.255 [2024-11-02 14:51:59.169933] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.255 [2024-11-02 14:51:59.170061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.255 [2024-11-02 14:51:59.170088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.255 [2024-11-02 14:51:59.170101] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.255 [2024-11-02 14:51:59.170114] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.255 [2024-11-02 14:51:59.170143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.255 qpair failed and we were unable to recover it. 00:36:07.255 [2024-11-02 14:51:59.179937] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.255 [2024-11-02 14:51:59.180056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.255 [2024-11-02 14:51:59.180081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.255 [2024-11-02 14:51:59.180096] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.255 [2024-11-02 14:51:59.180108] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.255 [2024-11-02 14:51:59.180137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.255 qpair failed and we were unable to recover it. 
00:36:07.255 [2024-11-02 14:51:59.189935] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.256 [2024-11-02 14:51:59.190052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.256 [2024-11-02 14:51:59.190078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.256 [2024-11-02 14:51:59.190092] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.256 [2024-11-02 14:51:59.190104] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.256 [2024-11-02 14:51:59.190134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.256 qpair failed and we were unable to recover it. 00:36:07.256 [2024-11-02 14:51:59.199989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.256 [2024-11-02 14:51:59.200113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.256 [2024-11-02 14:51:59.200141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.256 [2024-11-02 14:51:59.200155] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.256 [2024-11-02 14:51:59.200171] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.256 [2024-11-02 14:51:59.200202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.256 qpair failed and we were unable to recover it. 00:36:07.256 [2024-11-02 14:51:59.210041] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.256 [2024-11-02 14:51:59.210202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.256 [2024-11-02 14:51:59.210228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.256 [2024-11-02 14:51:59.210263] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.256 [2024-11-02 14:51:59.210279] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.256 [2024-11-02 14:51:59.210309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.256 qpair failed and we were unable to recover it. 
00:36:07.256 [2024-11-02 14:51:59.220088] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.256 [2024-11-02 14:51:59.220268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.256 [2024-11-02 14:51:59.220295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.256 [2024-11-02 14:51:59.220308] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.256 [2024-11-02 14:51:59.220321] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.256 [2024-11-02 14:51:59.220351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.256 qpair failed and we were unable to recover it. 00:36:07.256 [2024-11-02 14:51:59.230091] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.256 [2024-11-02 14:51:59.230211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.256 [2024-11-02 14:51:59.230236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.256 [2024-11-02 14:51:59.230250] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.256 [2024-11-02 14:51:59.230274] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.256 [2024-11-02 14:51:59.230307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.256 qpair failed and we were unable to recover it. 00:36:07.256 [2024-11-02 14:51:59.240082] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.256 [2024-11-02 14:51:59.240194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.256 [2024-11-02 14:51:59.240221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.256 [2024-11-02 14:51:59.240234] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.256 [2024-11-02 14:51:59.240247] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.256 [2024-11-02 14:51:59.240284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.256 qpair failed and we were unable to recover it. 
00:36:07.256 [2024-11-02 14:51:59.250140] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.256 [2024-11-02 14:51:59.250274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.256 [2024-11-02 14:51:59.250300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.256 [2024-11-02 14:51:59.250314] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.256 [2024-11-02 14:51:59.250326] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.256 [2024-11-02 14:51:59.250355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.256 qpair failed and we were unable to recover it. 00:36:07.256 [2024-11-02 14:51:59.260213] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.256 [2024-11-02 14:51:59.260350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.256 [2024-11-02 14:51:59.260381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.256 [2024-11-02 14:51:59.260398] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.256 [2024-11-02 14:51:59.260412] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.256 [2024-11-02 14:51:59.260449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.256 qpair failed and we were unable to recover it. 00:36:07.256 [2024-11-02 14:51:59.270183] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.256 [2024-11-02 14:51:59.270355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.256 [2024-11-02 14:51:59.270383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.256 [2024-11-02 14:51:59.270398] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.256 [2024-11-02 14:51:59.270415] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.256 [2024-11-02 14:51:59.270446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.256 qpair failed and we were unable to recover it. 
00:36:07.256 [2024-11-02 14:51:59.280221] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.256 [2024-11-02 14:51:59.280350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.256 [2024-11-02 14:51:59.280377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.256 [2024-11-02 14:51:59.280391] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.256 [2024-11-02 14:51:59.280404] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.256 [2024-11-02 14:51:59.280434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.256 qpair failed and we were unable to recover it. 00:36:07.256 [2024-11-02 14:51:59.290287] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.256 [2024-11-02 14:51:59.290416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.256 [2024-11-02 14:51:59.290441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.256 [2024-11-02 14:51:59.290455] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.256 [2024-11-02 14:51:59.290468] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.256 [2024-11-02 14:51:59.290498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.256 qpair failed and we were unable to recover it. 00:36:07.256 [2024-11-02 14:51:59.300284] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.256 [2024-11-02 14:51:59.300471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.256 [2024-11-02 14:51:59.300498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.256 [2024-11-02 14:51:59.300519] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.256 [2024-11-02 14:51:59.300532] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.256 [2024-11-02 14:51:59.300561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.256 qpair failed and we were unable to recover it. 
00:36:07.515 [2024-11-02 14:51:59.310317] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.515 [2024-11-02 14:51:59.310462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.515 [2024-11-02 14:51:59.310488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.515 [2024-11-02 14:51:59.310502] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.515 [2024-11-02 14:51:59.310513] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.515 [2024-11-02 14:51:59.310542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.515 qpair failed and we were unable to recover it. 00:36:07.515 [2024-11-02 14:51:59.320347] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.515 [2024-11-02 14:51:59.320465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.515 [2024-11-02 14:51:59.320491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.515 [2024-11-02 14:51:59.320505] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.515 [2024-11-02 14:51:59.320517] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.515 [2024-11-02 14:51:59.320558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.515 qpair failed and we were unable to recover it. 00:36:07.515 [2024-11-02 14:51:59.330372] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.515 [2024-11-02 14:51:59.330532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.515 [2024-11-02 14:51:59.330557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.515 [2024-11-02 14:51:59.330571] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.515 [2024-11-02 14:51:59.330584] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.515 [2024-11-02 14:51:59.330614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.515 qpair failed and we were unable to recover it. 
00:36:07.515 [2024-11-02 14:51:59.340430] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.516 [2024-11-02 14:51:59.340548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.516 [2024-11-02 14:51:59.340573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.516 [2024-11-02 14:51:59.340587] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.516 [2024-11-02 14:51:59.340600] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.516 [2024-11-02 14:51:59.340629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.516 qpair failed and we were unable to recover it. 00:36:07.516 [2024-11-02 14:51:59.350444] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.516 [2024-11-02 14:51:59.350577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.516 [2024-11-02 14:51:59.350603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.516 [2024-11-02 14:51:59.350616] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.516 [2024-11-02 14:51:59.350629] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.516 [2024-11-02 14:51:59.350658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.516 qpair failed and we were unable to recover it. 00:36:07.516 [2024-11-02 14:51:59.360472] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.516 [2024-11-02 14:51:59.360598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.516 [2024-11-02 14:51:59.360624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.516 [2024-11-02 14:51:59.360638] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.516 [2024-11-02 14:51:59.360650] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.516 [2024-11-02 14:51:59.360679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.516 qpair failed and we were unable to recover it. 
00:36:07.516 [2024-11-02 14:51:59.370596] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.516 [2024-11-02 14:51:59.370726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.516 [2024-11-02 14:51:59.370752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.516 [2024-11-02 14:51:59.370766] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.516 [2024-11-02 14:51:59.370778] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.516 [2024-11-02 14:51:59.370807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.516 qpair failed and we were unable to recover it. 00:36:07.516 [2024-11-02 14:51:59.380544] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.516 [2024-11-02 14:51:59.380734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.516 [2024-11-02 14:51:59.380763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.516 [2024-11-02 14:51:59.380779] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.516 [2024-11-02 14:51:59.380792] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.516 [2024-11-02 14:51:59.380822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.516 qpair failed and we were unable to recover it. 00:36:07.516 [2024-11-02 14:51:59.390519] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.516 [2024-11-02 14:51:59.390643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.516 [2024-11-02 14:51:59.390675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.516 [2024-11-02 14:51:59.390690] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.516 [2024-11-02 14:51:59.390703] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.516 [2024-11-02 14:51:59.390732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.516 qpair failed and we were unable to recover it. 
00:36:07.516 [2024-11-02 14:51:59.400557] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.516 [2024-11-02 14:51:59.400672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.516 [2024-11-02 14:51:59.400698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.516 [2024-11-02 14:51:59.400711] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.516 [2024-11-02 14:51:59.400724] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.516 [2024-11-02 14:51:59.400753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.516 qpair failed and we were unable to recover it. 00:36:07.516 [2024-11-02 14:51:59.410602] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.516 [2024-11-02 14:51:59.410725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.516 [2024-11-02 14:51:59.410751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.516 [2024-11-02 14:51:59.410764] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.516 [2024-11-02 14:51:59.410777] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.516 [2024-11-02 14:51:59.410807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.516 qpair failed and we were unable to recover it. 00:36:07.516 [2024-11-02 14:51:59.420611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.516 [2024-11-02 14:51:59.420730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.516 [2024-11-02 14:51:59.420755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.516 [2024-11-02 14:51:59.420768] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.516 [2024-11-02 14:51:59.420780] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.516 [2024-11-02 14:51:59.420811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.516 qpair failed and we were unable to recover it. 
00:36:07.516 [2024-11-02 14:51:59.430659] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.516 [2024-11-02 14:51:59.430783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.516 [2024-11-02 14:51:59.430809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.516 [2024-11-02 14:51:59.430822] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.516 [2024-11-02 14:51:59.430835] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.516 [2024-11-02 14:51:59.430870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.516 qpair failed and we were unable to recover it. 00:36:07.516 [2024-11-02 14:51:59.440647] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.516 [2024-11-02 14:51:59.440768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.516 [2024-11-02 14:51:59.440794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.516 [2024-11-02 14:51:59.440807] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.516 [2024-11-02 14:51:59.440820] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.516 [2024-11-02 14:51:59.440849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.516 qpair failed and we were unable to recover it. 00:36:07.516 [2024-11-02 14:51:59.450755] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.516 [2024-11-02 14:51:59.450912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.516 [2024-11-02 14:51:59.450937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.516 [2024-11-02 14:51:59.450951] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.516 [2024-11-02 14:51:59.450964] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.516 [2024-11-02 14:51:59.450994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.516 qpair failed and we were unable to recover it. 
00:36:07.516 [2024-11-02 14:51:59.460725] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.516 [2024-11-02 14:51:59.460852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.516 [2024-11-02 14:51:59.460877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.516 [2024-11-02 14:51:59.460891] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.516 [2024-11-02 14:51:59.460904] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.516 [2024-11-02 14:51:59.460933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.516 qpair failed and we were unable to recover it. 00:36:07.516 [2024-11-02 14:51:59.470768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.516 [2024-11-02 14:51:59.470892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.516 [2024-11-02 14:51:59.470919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.516 [2024-11-02 14:51:59.470933] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.516 [2024-11-02 14:51:59.470945] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.517 [2024-11-02 14:51:59.470975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.517 qpair failed and we were unable to recover it. 00:36:07.517 [2024-11-02 14:51:59.480761] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.517 [2024-11-02 14:51:59.480882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.517 [2024-11-02 14:51:59.480916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.517 [2024-11-02 14:51:59.480931] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.517 [2024-11-02 14:51:59.480942] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.517 [2024-11-02 14:51:59.480972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.517 qpair failed and we were unable to recover it. 
00:36:07.517 [2024-11-02 14:51:59.490961] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.517 [2024-11-02 14:51:59.491109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.517 [2024-11-02 14:51:59.491135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.517 [2024-11-02 14:51:59.491149] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.517 [2024-11-02 14:51:59.491161] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.517 [2024-11-02 14:51:59.491190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.517 qpair failed and we were unable to recover it. 00:36:07.517 [2024-11-02 14:51:59.500886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.517 [2024-11-02 14:51:59.501025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.517 [2024-11-02 14:51:59.501051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.517 [2024-11-02 14:51:59.501065] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.517 [2024-11-02 14:51:59.501077] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.517 [2024-11-02 14:51:59.501106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.517 qpair failed and we were unable to recover it. 00:36:07.517 [2024-11-02 14:51:59.510882] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.517 [2024-11-02 14:51:59.511011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.517 [2024-11-02 14:51:59.511037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.517 [2024-11-02 14:51:59.511051] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.517 [2024-11-02 14:51:59.511064] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.517 [2024-11-02 14:51:59.511093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.517 qpair failed and we were unable to recover it. 
00:36:07.517 [2024-11-02 14:51:59.520912] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.517 [2024-11-02 14:51:59.521042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.517 [2024-11-02 14:51:59.521068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.517 [2024-11-02 14:51:59.521082] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.517 [2024-11-02 14:51:59.521101] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.517 [2024-11-02 14:51:59.521133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.517 qpair failed and we were unable to recover it. 00:36:07.517 [2024-11-02 14:51:59.530967] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.517 [2024-11-02 14:51:59.531101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.517 [2024-11-02 14:51:59.531127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.517 [2024-11-02 14:51:59.531141] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.517 [2024-11-02 14:51:59.531154] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.517 [2024-11-02 14:51:59.531184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.517 qpair failed and we were unable to recover it. 00:36:07.517 [2024-11-02 14:51:59.541010] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.517 [2024-11-02 14:51:59.541139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.517 [2024-11-02 14:51:59.541165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.517 [2024-11-02 14:51:59.541179] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.517 [2024-11-02 14:51:59.541192] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.517 [2024-11-02 14:51:59.541221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.517 qpair failed and we were unable to recover it. 
00:36:07.517 [2024-11-02 14:51:59.550981] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.517 [2024-11-02 14:51:59.551096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.517 [2024-11-02 14:51:59.551122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.517 [2024-11-02 14:51:59.551136] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.517 [2024-11-02 14:51:59.551147] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.517 [2024-11-02 14:51:59.551176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.517 qpair failed and we were unable to recover it. 00:36:07.517 [2024-11-02 14:51:59.561125] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.517 [2024-11-02 14:51:59.561252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.517 [2024-11-02 14:51:59.561285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.517 [2024-11-02 14:51:59.561299] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.517 [2024-11-02 14:51:59.561312] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.517 [2024-11-02 14:51:59.561342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.517 qpair failed and we were unable to recover it. 00:36:07.775 [2024-11-02 14:51:59.571057] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.775 [2024-11-02 14:51:59.571196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.775 [2024-11-02 14:51:59.571222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.775 [2024-11-02 14:51:59.571236] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.775 [2024-11-02 14:51:59.571248] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.775 [2024-11-02 14:51:59.571287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.775 qpair failed and we were unable to recover it. 
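Annotation: the recurring "sct 1, sc 130" values are the status-code-type and status-code fields of the NVMe completion returned for the failed CONNECT. A minimal, hypothetical completion callback that inspects the same fields could look like the sketch below; io_complete_cb is an assumed name, not part of the test.

/* Illustrative sketch only: a command completion callback reading the
 * status fields reported in the log above (sct 1, sc 130). */
#include <stdio.h>
#include "spdk/nvme.h"

static void
io_complete_cb(void *ctx, const struct spdk_nvme_cpl *cpl)
{
	(void)ctx;
	if (spdk_nvme_cpl_is_error(cpl)) {
		/* For the rejected Fabrics CONNECT above, sct is 1 and sc is
		 * 130 (0x82), matching the target's "Unknown controller ID"
		 * rejection. */
		fprintf(stderr, "command failed: sct %u, sc %u\n",
			(unsigned)cpl->status.sct, (unsigned)cpl->status.sc);
	}
}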
00:36:07.775 [2024-11-02 14:51:59.581097] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.775 [2024-11-02 14:51:59.581213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.775 [2024-11-02 14:51:59.581238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.775 [2024-11-02 14:51:59.581251] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.775 [2024-11-02 14:51:59.581277] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.775 [2024-11-02 14:51:59.581309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.775 qpair failed and we were unable to recover it. 00:36:07.775 [2024-11-02 14:51:59.591134] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.775 [2024-11-02 14:51:59.591270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.775 [2024-11-02 14:51:59.591295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.775 [2024-11-02 14:51:59.591308] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.775 [2024-11-02 14:51:59.591320] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.775 [2024-11-02 14:51:59.591348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.775 qpair failed and we were unable to recover it. 00:36:07.775 [2024-11-02 14:51:59.601161] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.775 [2024-11-02 14:51:59.601292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.775 [2024-11-02 14:51:59.601319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.775 [2024-11-02 14:51:59.601332] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.775 [2024-11-02 14:51:59.601345] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.775 [2024-11-02 14:51:59.601375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.775 qpair failed and we were unable to recover it. 
00:36:07.775 [2024-11-02 14:51:59.611266] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.775 [2024-11-02 14:51:59.611403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.775 [2024-11-02 14:51:59.611429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.775 [2024-11-02 14:51:59.611443] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.775 [2024-11-02 14:51:59.611461] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.775 [2024-11-02 14:51:59.611490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.775 qpair failed and we were unable to recover it. 00:36:07.775 [2024-11-02 14:51:59.621309] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.775 [2024-11-02 14:51:59.621444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.775 [2024-11-02 14:51:59.621471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.775 [2024-11-02 14:51:59.621484] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.775 [2024-11-02 14:51:59.621498] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.775 [2024-11-02 14:51:59.621527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.775 qpair failed and we were unable to recover it. 00:36:07.776 [2024-11-02 14:51:59.631222] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.776 [2024-11-02 14:51:59.631354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.776 [2024-11-02 14:51:59.631381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.776 [2024-11-02 14:51:59.631394] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.776 [2024-11-02 14:51:59.631407] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.776 [2024-11-02 14:51:59.631436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.776 qpair failed and we were unable to recover it. 
00:36:07.776 [2024-11-02 14:51:59.641311] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.776 [2024-11-02 14:51:59.641431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.776 [2024-11-02 14:51:59.641457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.776 [2024-11-02 14:51:59.641471] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.776 [2024-11-02 14:51:59.641483] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.776 [2024-11-02 14:51:59.641525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.776 qpair failed and we were unable to recover it. 00:36:07.776 [2024-11-02 14:51:59.651331] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.776 [2024-11-02 14:51:59.651456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.776 [2024-11-02 14:51:59.651482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.776 [2024-11-02 14:51:59.651495] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.776 [2024-11-02 14:51:59.651508] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.776 [2024-11-02 14:51:59.651538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.776 qpair failed and we were unable to recover it. 00:36:07.776 [2024-11-02 14:51:59.661366] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.776 [2024-11-02 14:51:59.661528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.776 [2024-11-02 14:51:59.661554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.776 [2024-11-02 14:51:59.661567] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.776 [2024-11-02 14:51:59.661580] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.776 [2024-11-02 14:51:59.661612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.776 qpair failed and we were unable to recover it. 
00:36:07.776 [2024-11-02 14:51:59.671418] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.776 [2024-11-02 14:51:59.671544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.776 [2024-11-02 14:51:59.671569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.776 [2024-11-02 14:51:59.671584] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.776 [2024-11-02 14:51:59.671596] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.776 [2024-11-02 14:51:59.671626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.776 qpair failed and we were unable to recover it. 00:36:07.776 [2024-11-02 14:51:59.681396] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.776 [2024-11-02 14:51:59.681563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.776 [2024-11-02 14:51:59.681589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.776 [2024-11-02 14:51:59.681603] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.776 [2024-11-02 14:51:59.681615] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.776 [2024-11-02 14:51:59.681645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.776 qpair failed and we were unable to recover it. 00:36:07.776 [2024-11-02 14:51:59.691430] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.776 [2024-11-02 14:51:59.691578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.776 [2024-11-02 14:51:59.691604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.776 [2024-11-02 14:51:59.691618] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.776 [2024-11-02 14:51:59.691631] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.776 [2024-11-02 14:51:59.691661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.776 qpair failed and we were unable to recover it. 
00:36:07.776 [2024-11-02 14:51:59.701446] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.776 [2024-11-02 14:51:59.701567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.776 [2024-11-02 14:51:59.701592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.776 [2024-11-02 14:51:59.701611] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.776 [2024-11-02 14:51:59.701625] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.776 [2024-11-02 14:51:59.701654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.776 qpair failed and we were unable to recover it. 00:36:07.776 [2024-11-02 14:51:59.711541] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.776 [2024-11-02 14:51:59.711662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.776 [2024-11-02 14:51:59.711688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.776 [2024-11-02 14:51:59.711702] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.776 [2024-11-02 14:51:59.711714] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.776 [2024-11-02 14:51:59.711744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.776 qpair failed and we were unable to recover it. 00:36:07.776 [2024-11-02 14:51:59.721533] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.776 [2024-11-02 14:51:59.721700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.776 [2024-11-02 14:51:59.721725] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.776 [2024-11-02 14:51:59.721739] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.776 [2024-11-02 14:51:59.721751] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.776 [2024-11-02 14:51:59.721781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.776 qpair failed and we were unable to recover it. 
00:36:07.776 [2024-11-02 14:51:59.731577] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.776 [2024-11-02 14:51:59.731702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.776 [2024-11-02 14:51:59.731727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.776 [2024-11-02 14:51:59.731740] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.776 [2024-11-02 14:51:59.731753] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.776 [2024-11-02 14:51:59.731782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.776 qpair failed and we were unable to recover it. 00:36:07.776 [2024-11-02 14:51:59.741634] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.776 [2024-11-02 14:51:59.741805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.776 [2024-11-02 14:51:59.741832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.776 [2024-11-02 14:51:59.741851] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.776 [2024-11-02 14:51:59.741865] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.776 [2024-11-02 14:51:59.741897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.776 qpair failed and we were unable to recover it. 00:36:07.776 [2024-11-02 14:51:59.751610] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.776 [2024-11-02 14:51:59.751730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.776 [2024-11-02 14:51:59.751756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.776 [2024-11-02 14:51:59.751770] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.776 [2024-11-02 14:51:59.751783] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.776 [2024-11-02 14:51:59.751812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.776 qpair failed and we were unable to recover it. 
00:36:07.776 [2024-11-02 14:51:59.761647] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.776 [2024-11-02 14:51:59.761773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.776 [2024-11-02 14:51:59.761799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.776 [2024-11-02 14:51:59.761813] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.776 [2024-11-02 14:51:59.761825] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.776 [2024-11-02 14:51:59.761855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.776 qpair failed and we were unable to recover it. 00:36:07.776 [2024-11-02 14:51:59.771692] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.776 [2024-11-02 14:51:59.771849] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.776 [2024-11-02 14:51:59.771875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.776 [2024-11-02 14:51:59.771889] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.776 [2024-11-02 14:51:59.771901] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.776 [2024-11-02 14:51:59.771943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.776 qpair failed and we were unable to recover it. 00:36:07.776 [2024-11-02 14:51:59.781683] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.776 [2024-11-02 14:51:59.781805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.776 [2024-11-02 14:51:59.781831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.776 [2024-11-02 14:51:59.781845] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.776 [2024-11-02 14:51:59.781858] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.776 [2024-11-02 14:51:59.781887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.776 qpair failed and we were unable to recover it. 
00:36:07.776 [2024-11-02 14:51:59.791699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.776 [2024-11-02 14:51:59.791814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.776 [2024-11-02 14:51:59.791840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.776 [2024-11-02 14:51:59.791860] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.776 [2024-11-02 14:51:59.791873] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.776 [2024-11-02 14:51:59.791902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.776 qpair failed and we were unable to recover it. 00:36:07.776 [2024-11-02 14:51:59.801748] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.776 [2024-11-02 14:51:59.801912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.776 [2024-11-02 14:51:59.801937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.776 [2024-11-02 14:51:59.801950] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.776 [2024-11-02 14:51:59.801963] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.776 [2024-11-02 14:51:59.801992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.776 qpair failed and we were unable to recover it. 00:36:07.776 [2024-11-02 14:51:59.811810] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.776 [2024-11-02 14:51:59.811945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.776 [2024-11-02 14:51:59.811970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.776 [2024-11-02 14:51:59.811984] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.776 [2024-11-02 14:51:59.811997] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.776 [2024-11-02 14:51:59.812026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.776 qpair failed and we were unable to recover it. 
00:36:07.776 [2024-11-02 14:51:59.821809] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.776 [2024-11-02 14:51:59.821942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.776 [2024-11-02 14:51:59.821968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.776 [2024-11-02 14:51:59.821981] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.776 [2024-11-02 14:51:59.821995] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:07.776 [2024-11-02 14:51:59.822024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.776 qpair failed and we were unable to recover it. 00:36:08.035 [2024-11-02 14:51:59.831833] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.035 [2024-11-02 14:51:59.831951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.035 [2024-11-02 14:51:59.831977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.035 [2024-11-02 14:51:59.831991] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.035 [2024-11-02 14:51:59.832002] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.035 [2024-11-02 14:51:59.832044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.035 qpair failed and we were unable to recover it. 00:36:08.035 [2024-11-02 14:51:59.841884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.035 [2024-11-02 14:51:59.842008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.035 [2024-11-02 14:51:59.842033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.035 [2024-11-02 14:51:59.842046] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.035 [2024-11-02 14:51:59.842059] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.035 [2024-11-02 14:51:59.842089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.035 qpair failed and we were unable to recover it. 
00:36:08.035 [2024-11-02 14:51:59.851929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.035 [2024-11-02 14:51:59.852096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.035 [2024-11-02 14:51:59.852122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.035 [2024-11-02 14:51:59.852136] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.035 [2024-11-02 14:51:59.852149] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.035 [2024-11-02 14:51:59.852180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.035 qpair failed and we were unable to recover it. 00:36:08.035 [2024-11-02 14:51:59.861907] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.035 [2024-11-02 14:51:59.862024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.035 [2024-11-02 14:51:59.862049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.035 [2024-11-02 14:51:59.862063] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.035 [2024-11-02 14:51:59.862075] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.035 [2024-11-02 14:51:59.862106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.035 qpair failed and we were unable to recover it. 00:36:08.035 [2024-11-02 14:51:59.871971] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.035 [2024-11-02 14:51:59.872096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.035 [2024-11-02 14:51:59.872123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.036 [2024-11-02 14:51:59.872136] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.036 [2024-11-02 14:51:59.872149] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.036 [2024-11-02 14:51:59.872179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.036 qpair failed and we were unable to recover it. 
00:36:08.036 [2024-11-02 14:51:59.882002] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.036 [2024-11-02 14:51:59.882125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.036 [2024-11-02 14:51:59.882159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.036 [2024-11-02 14:51:59.882177] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.036 [2024-11-02 14:51:59.882191] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.036 [2024-11-02 14:51:59.882222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.036 qpair failed and we were unable to recover it. 00:36:08.036 [2024-11-02 14:51:59.892110] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.036 [2024-11-02 14:51:59.892238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.036 [2024-11-02 14:51:59.892272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.036 [2024-11-02 14:51:59.892288] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.036 [2024-11-02 14:51:59.892300] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.036 [2024-11-02 14:51:59.892331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.036 qpair failed and we were unable to recover it. 00:36:08.036 [2024-11-02 14:51:59.902050] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.036 [2024-11-02 14:51:59.902196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.036 [2024-11-02 14:51:59.902226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.036 [2024-11-02 14:51:59.902242] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.036 [2024-11-02 14:51:59.902266] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.036 [2024-11-02 14:51:59.902303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.036 qpair failed and we were unable to recover it. 
00:36:08.036 [2024-11-02 14:51:59.912064] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.036 [2024-11-02 14:51:59.912190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.036 [2024-11-02 14:51:59.912217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.036 [2024-11-02 14:51:59.912231] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.036 [2024-11-02 14:51:59.912243] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.036 [2024-11-02 14:51:59.912283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.036 qpair failed and we were unable to recover it. 00:36:08.036 [2024-11-02 14:51:59.922092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.036 [2024-11-02 14:51:59.922215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.036 [2024-11-02 14:51:59.922241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.036 [2024-11-02 14:51:59.922262] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.036 [2024-11-02 14:51:59.922280] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.036 [2024-11-02 14:51:59.922316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.036 qpair failed and we were unable to recover it. 00:36:08.036 [2024-11-02 14:51:59.932165] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.036 [2024-11-02 14:51:59.932302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.036 [2024-11-02 14:51:59.932328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.036 [2024-11-02 14:51:59.932342] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.036 [2024-11-02 14:51:59.932355] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.036 [2024-11-02 14:51:59.932384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.036 qpair failed and we were unable to recover it. 
00:36:08.036 [2024-11-02 14:51:59.942191] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.036 [2024-11-02 14:51:59.942316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.036 [2024-11-02 14:51:59.942342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.036 [2024-11-02 14:51:59.942356] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.036 [2024-11-02 14:51:59.942369] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.036 [2024-11-02 14:51:59.942398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.036 qpair failed and we were unable to recover it. 00:36:08.036 [2024-11-02 14:51:59.952212] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.036 [2024-11-02 14:51:59.952341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.036 [2024-11-02 14:51:59.952368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.036 [2024-11-02 14:51:59.952382] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.036 [2024-11-02 14:51:59.952395] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.036 [2024-11-02 14:51:59.952424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.036 qpair failed and we were unable to recover it. 00:36:08.036 [2024-11-02 14:51:59.962252] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.036 [2024-11-02 14:51:59.962391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.036 [2024-11-02 14:51:59.962417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.036 [2024-11-02 14:51:59.962430] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.036 [2024-11-02 14:51:59.962444] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.036 [2024-11-02 14:51:59.962473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.036 qpair failed and we were unable to recover it. 
00:36:08.036 [2024-11-02 14:51:59.972252] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.036 [2024-11-02 14:51:59.972392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.036 [2024-11-02 14:51:59.972422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.036 [2024-11-02 14:51:59.972437] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.036 [2024-11-02 14:51:59.972450] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.036 [2024-11-02 14:51:59.972479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.036 qpair failed and we were unable to recover it. 00:36:08.036 [2024-11-02 14:51:59.982301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.036 [2024-11-02 14:51:59.982461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.036 [2024-11-02 14:51:59.982487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.036 [2024-11-02 14:51:59.982501] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.036 [2024-11-02 14:51:59.982513] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.036 [2024-11-02 14:51:59.982543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.036 qpair failed and we were unable to recover it. 00:36:08.036 [2024-11-02 14:51:59.992307] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.036 [2024-11-02 14:51:59.992466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.036 [2024-11-02 14:51:59.992492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.036 [2024-11-02 14:51:59.992506] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.036 [2024-11-02 14:51:59.992518] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.037 [2024-11-02 14:51:59.992561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.037 qpair failed and we were unable to recover it. 
00:36:08.037 [2024-11-02 14:52:00.002354] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.037 [2024-11-02 14:52:00.002478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.037 [2024-11-02 14:52:00.002504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.037 [2024-11-02 14:52:00.002518] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.037 [2024-11-02 14:52:00.002530] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.037 [2024-11-02 14:52:00.002560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.037 qpair failed and we were unable to recover it. 00:36:08.037 [2024-11-02 14:52:00.012400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.037 [2024-11-02 14:52:00.012533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.037 [2024-11-02 14:52:00.012561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.037 [2024-11-02 14:52:00.012576] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.037 [2024-11-02 14:52:00.012589] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.037 [2024-11-02 14:52:00.012627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.037 qpair failed and we were unable to recover it. 00:36:08.037 [2024-11-02 14:52:00.022437] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.037 [2024-11-02 14:52:00.022561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.037 [2024-11-02 14:52:00.022589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.037 [2024-11-02 14:52:00.022603] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.037 [2024-11-02 14:52:00.022619] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.037 [2024-11-02 14:52:00.022661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.037 qpair failed and we were unable to recover it. 
00:36:08.037 [2024-11-02 14:52:00.032458] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.037 [2024-11-02 14:52:00.032588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.037 [2024-11-02 14:52:00.032618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.037 [2024-11-02 14:52:00.032632] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.037 [2024-11-02 14:52:00.032646] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.037 [2024-11-02 14:52:00.032677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.037 qpair failed and we were unable to recover it. 00:36:08.037 [2024-11-02 14:52:00.042542] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.037 [2024-11-02 14:52:00.042710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.037 [2024-11-02 14:52:00.042737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.037 [2024-11-02 14:52:00.042752] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.037 [2024-11-02 14:52:00.042765] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.037 [2024-11-02 14:52:00.042795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.037 qpair failed and we were unable to recover it. 00:36:08.037 [2024-11-02 14:52:00.052603] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.037 [2024-11-02 14:52:00.052743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.037 [2024-11-02 14:52:00.052769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.037 [2024-11-02 14:52:00.052783] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.037 [2024-11-02 14:52:00.052796] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.037 [2024-11-02 14:52:00.052826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.037 qpair failed and we were unable to recover it. 
00:36:08.037 [2024-11-02 14:52:00.062554] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.037 [2024-11-02 14:52:00.062680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.037 [2024-11-02 14:52:00.062713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.037 [2024-11-02 14:52:00.062728] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.037 [2024-11-02 14:52:00.062740] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.037 [2024-11-02 14:52:00.062770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.037 qpair failed and we were unable to recover it. 00:36:08.037 [2024-11-02 14:52:00.072581] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.037 [2024-11-02 14:52:00.072705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.037 [2024-11-02 14:52:00.072732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.037 [2024-11-02 14:52:00.072745] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.037 [2024-11-02 14:52:00.072758] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.037 [2024-11-02 14:52:00.072787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.037 qpair failed and we were unable to recover it. 00:36:08.037 [2024-11-02 14:52:00.082667] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.037 [2024-11-02 14:52:00.082796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.037 [2024-11-02 14:52:00.082823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.037 [2024-11-02 14:52:00.082836] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.037 [2024-11-02 14:52:00.082849] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.037 [2024-11-02 14:52:00.082879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.037 qpair failed and we were unable to recover it. 
00:36:08.296 [2024-11-02 14:52:00.092623] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.296 [2024-11-02 14:52:00.092752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.296 [2024-11-02 14:52:00.092779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.296 [2024-11-02 14:52:00.092793] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.296 [2024-11-02 14:52:00.092805] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.296 [2024-11-02 14:52:00.092848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.296 qpair failed and we were unable to recover it. 00:36:08.296 [2024-11-02 14:52:00.102717] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.296 [2024-11-02 14:52:00.102852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.296 [2024-11-02 14:52:00.102879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.296 [2024-11-02 14:52:00.102893] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.296 [2024-11-02 14:52:00.102911] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.296 [2024-11-02 14:52:00.102943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.296 qpair failed and we were unable to recover it. 00:36:08.296 [2024-11-02 14:52:00.112704] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.296 [2024-11-02 14:52:00.112868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.296 [2024-11-02 14:52:00.112896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.296 [2024-11-02 14:52:00.112916] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.296 [2024-11-02 14:52:00.112929] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.296 [2024-11-02 14:52:00.112959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.296 qpair failed and we were unable to recover it. 
00:36:08.296 [2024-11-02 14:52:00.122678] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.296 [2024-11-02 14:52:00.122813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.296 [2024-11-02 14:52:00.122839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.296 [2024-11-02 14:52:00.122853] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.296 [2024-11-02 14:52:00.122867] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.296 [2024-11-02 14:52:00.122896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.296 qpair failed and we were unable to recover it. 00:36:08.296 [2024-11-02 14:52:00.132767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.296 [2024-11-02 14:52:00.132937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.296 [2024-11-02 14:52:00.132964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.296 [2024-11-02 14:52:00.132978] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.296 [2024-11-02 14:52:00.132990] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.296 [2024-11-02 14:52:00.133020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.296 qpair failed and we were unable to recover it. 00:36:08.296 [2024-11-02 14:52:00.142740] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.296 [2024-11-02 14:52:00.142866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.296 [2024-11-02 14:52:00.142893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.296 [2024-11-02 14:52:00.142907] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.296 [2024-11-02 14:52:00.142919] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.296 [2024-11-02 14:52:00.142949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.296 qpair failed and we were unable to recover it. 
00:36:08.296 [2024-11-02 14:52:00.152798] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.296 [2024-11-02 14:52:00.152919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.296 [2024-11-02 14:52:00.152945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.296 [2024-11-02 14:52:00.152959] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.297 [2024-11-02 14:52:00.152972] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.297 [2024-11-02 14:52:00.153004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.297 qpair failed and we were unable to recover it. 00:36:08.297 [2024-11-02 14:52:00.162822] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.297 [2024-11-02 14:52:00.162966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.297 [2024-11-02 14:52:00.162992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.297 [2024-11-02 14:52:00.163009] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.297 [2024-11-02 14:52:00.163022] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.297 [2024-11-02 14:52:00.163052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.297 qpair failed and we were unable to recover it. 00:36:08.297 [2024-11-02 14:52:00.172940] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.297 [2024-11-02 14:52:00.173068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.297 [2024-11-02 14:52:00.173094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.297 [2024-11-02 14:52:00.173108] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.297 [2024-11-02 14:52:00.173121] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.297 [2024-11-02 14:52:00.173150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.297 qpair failed and we were unable to recover it. 
00:36:08.297 [2024-11-02 14:52:00.182903] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.297 [2024-11-02 14:52:00.183033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.297 [2024-11-02 14:52:00.183062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.297 [2024-11-02 14:52:00.183077] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.297 [2024-11-02 14:52:00.183090] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.297 [2024-11-02 14:52:00.183119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.297 qpair failed and we were unable to recover it. 00:36:08.297 [2024-11-02 14:52:00.192873] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.297 [2024-11-02 14:52:00.192995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.297 [2024-11-02 14:52:00.193022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.297 [2024-11-02 14:52:00.193036] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.297 [2024-11-02 14:52:00.193054] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.297 [2024-11-02 14:52:00.193085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.297 qpair failed and we were unable to recover it. 00:36:08.297 [2024-11-02 14:52:00.202928] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.297 [2024-11-02 14:52:00.203053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.297 [2024-11-02 14:52:00.203079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.297 [2024-11-02 14:52:00.203094] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.297 [2024-11-02 14:52:00.203107] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.297 [2024-11-02 14:52:00.203136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.297 qpair failed and we were unable to recover it. 
00:36:08.297 [2024-11-02 14:52:00.212983] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.297 [2024-11-02 14:52:00.213148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.297 [2024-11-02 14:52:00.213174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.297 [2024-11-02 14:52:00.213188] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.297 [2024-11-02 14:52:00.213201] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.297 [2024-11-02 14:52:00.213230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.297 qpair failed and we were unable to recover it. 00:36:08.297 [2024-11-02 14:52:00.223010] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.297 [2024-11-02 14:52:00.223176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.297 [2024-11-02 14:52:00.223204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.297 [2024-11-02 14:52:00.223223] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.297 [2024-11-02 14:52:00.223237] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.297 [2024-11-02 14:52:00.223280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.297 qpair failed and we were unable to recover it. 00:36:08.297 [2024-11-02 14:52:00.233089] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.297 [2024-11-02 14:52:00.233224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.297 [2024-11-02 14:52:00.233250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.297 [2024-11-02 14:52:00.233276] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.297 [2024-11-02 14:52:00.233290] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.297 [2024-11-02 14:52:00.233321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.297 qpair failed and we were unable to recover it. 
00:36:08.297 [2024-11-02 14:52:00.243173] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.297 [2024-11-02 14:52:00.243310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.297 [2024-11-02 14:52:00.243340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.297 [2024-11-02 14:52:00.243355] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.297 [2024-11-02 14:52:00.243368] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.297 [2024-11-02 14:52:00.243401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.297 qpair failed and we were unable to recover it. 00:36:08.297 [2024-11-02 14:52:00.253070] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.297 [2024-11-02 14:52:00.253200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.297 [2024-11-02 14:52:00.253226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.297 [2024-11-02 14:52:00.253239] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.297 [2024-11-02 14:52:00.253252] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.297 [2024-11-02 14:52:00.253293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.297 qpair failed and we were unable to recover it. 00:36:08.297 [2024-11-02 14:52:00.263095] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.297 [2024-11-02 14:52:00.263220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.297 [2024-11-02 14:52:00.263246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.297 [2024-11-02 14:52:00.263270] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.297 [2024-11-02 14:52:00.263285] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.297 [2024-11-02 14:52:00.263314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.297 qpair failed and we were unable to recover it. 
00:36:08.297 [2024-11-02 14:52:00.273105] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.297 [2024-11-02 14:52:00.273233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.297 [2024-11-02 14:52:00.273269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.297 [2024-11-02 14:52:00.273289] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.297 [2024-11-02 14:52:00.273302] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.297 [2024-11-02 14:52:00.273333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.297 qpair failed and we were unable to recover it. 00:36:08.297 [2024-11-02 14:52:00.283141] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.297 [2024-11-02 14:52:00.283292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.297 [2024-11-02 14:52:00.283319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.297 [2024-11-02 14:52:00.283343] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.297 [2024-11-02 14:52:00.283357] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.297 [2024-11-02 14:52:00.283386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.297 qpair failed and we were unable to recover it. 00:36:08.298 [2024-11-02 14:52:00.293277] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.298 [2024-11-02 14:52:00.293414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.298 [2024-11-02 14:52:00.293439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.298 [2024-11-02 14:52:00.293452] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.298 [2024-11-02 14:52:00.293466] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.298 [2024-11-02 14:52:00.293495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.298 qpair failed and we were unable to recover it. 
00:36:08.298 [2024-11-02 14:52:00.303198] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.298 [2024-11-02 14:52:00.303330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.298 [2024-11-02 14:52:00.303357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.298 [2024-11-02 14:52:00.303370] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.298 [2024-11-02 14:52:00.303383] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.298 [2024-11-02 14:52:00.303413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.298 qpair failed and we were unable to recover it. 00:36:08.298 [2024-11-02 14:52:00.313310] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.298 [2024-11-02 14:52:00.313437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.298 [2024-11-02 14:52:00.313463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.298 [2024-11-02 14:52:00.313477] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.298 [2024-11-02 14:52:00.313489] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.298 [2024-11-02 14:52:00.313521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.298 qpair failed and we were unable to recover it. 00:36:08.298 [2024-11-02 14:52:00.323236] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.298 [2024-11-02 14:52:00.323360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.298 [2024-11-02 14:52:00.323386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.298 [2024-11-02 14:52:00.323400] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.298 [2024-11-02 14:52:00.323411] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.298 [2024-11-02 14:52:00.323439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.298 qpair failed and we were unable to recover it. 
00:36:08.298 [2024-11-02 14:52:00.333330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.298 [2024-11-02 14:52:00.333462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.298 [2024-11-02 14:52:00.333488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.298 [2024-11-02 14:52:00.333501] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.298 [2024-11-02 14:52:00.333514] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.298 [2024-11-02 14:52:00.333543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.298 qpair failed and we were unable to recover it. 00:36:08.298 [2024-11-02 14:52:00.343342] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.298 [2024-11-02 14:52:00.343516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.298 [2024-11-02 14:52:00.343543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.298 [2024-11-02 14:52:00.343556] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.298 [2024-11-02 14:52:00.343569] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.298 [2024-11-02 14:52:00.343600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.298 qpair failed and we were unable to recover it. 00:36:08.557 [2024-11-02 14:52:00.353339] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.557 [2024-11-02 14:52:00.353460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.557 [2024-11-02 14:52:00.353486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.557 [2024-11-02 14:52:00.353499] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.557 [2024-11-02 14:52:00.353512] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.557 [2024-11-02 14:52:00.353542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.557 qpair failed and we were unable to recover it. 
00:36:08.557 [2024-11-02 14:52:00.363364] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.557 [2024-11-02 14:52:00.363478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.557 [2024-11-02 14:52:00.363503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.557 [2024-11-02 14:52:00.363517] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.557 [2024-11-02 14:52:00.363529] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.557 [2024-11-02 14:52:00.363558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.557 qpair failed and we were unable to recover it. 00:36:08.557 [2024-11-02 14:52:00.373400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.557 [2024-11-02 14:52:00.373530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.557 [2024-11-02 14:52:00.373560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.557 [2024-11-02 14:52:00.373575] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.557 [2024-11-02 14:52:00.373588] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.557 [2024-11-02 14:52:00.373617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.557 qpair failed and we were unable to recover it. 00:36:08.557 [2024-11-02 14:52:00.383442] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.557 [2024-11-02 14:52:00.383565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.557 [2024-11-02 14:52:00.383592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.557 [2024-11-02 14:52:00.383606] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.557 [2024-11-02 14:52:00.383619] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.557 [2024-11-02 14:52:00.383648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.557 qpair failed and we were unable to recover it. 
00:36:08.557 [2024-11-02 14:52:00.393468] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.557 [2024-11-02 14:52:00.393605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.557 [2024-11-02 14:52:00.393630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.557 [2024-11-02 14:52:00.393644] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.557 [2024-11-02 14:52:00.393656] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.557 [2024-11-02 14:52:00.393687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.557 qpair failed and we were unable to recover it. 00:36:08.557 [2024-11-02 14:52:00.403482] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.557 [2024-11-02 14:52:00.403605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.557 [2024-11-02 14:52:00.403631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.557 [2024-11-02 14:52:00.403644] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.557 [2024-11-02 14:52:00.403657] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.557 [2024-11-02 14:52:00.403686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.557 qpair failed and we were unable to recover it. 00:36:08.557 [2024-11-02 14:52:00.413645] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.557 [2024-11-02 14:52:00.413835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.557 [2024-11-02 14:52:00.413861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.557 [2024-11-02 14:52:00.413874] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.557 [2024-11-02 14:52:00.413887] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.557 [2024-11-02 14:52:00.413922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.557 qpair failed and we were unable to recover it. 
00:36:08.557 [2024-11-02 14:52:00.423634] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.557 [2024-11-02 14:52:00.423759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.557 [2024-11-02 14:52:00.423785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.557 [2024-11-02 14:52:00.423799] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.557 [2024-11-02 14:52:00.423811] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.557 [2024-11-02 14:52:00.423840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.557 qpair failed and we were unable to recover it. 00:36:08.557 [2024-11-02 14:52:00.433561] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.557 [2024-11-02 14:52:00.433680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.557 [2024-11-02 14:52:00.433706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.557 [2024-11-02 14:52:00.433720] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.557 [2024-11-02 14:52:00.433732] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.557 [2024-11-02 14:52:00.433761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.557 qpair failed and we were unable to recover it. 00:36:08.557 [2024-11-02 14:52:00.443597] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.557 [2024-11-02 14:52:00.443716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.557 [2024-11-02 14:52:00.443742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.557 [2024-11-02 14:52:00.443756] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.558 [2024-11-02 14:52:00.443769] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.558 [2024-11-02 14:52:00.443798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.558 qpair failed and we were unable to recover it. 
00:36:08.558 [2024-11-02 14:52:00.453637] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.558 [2024-11-02 14:52:00.453781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.558 [2024-11-02 14:52:00.453809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.558 [2024-11-02 14:52:00.453824] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.558 [2024-11-02 14:52:00.453837] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.558 [2024-11-02 14:52:00.453867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.558 qpair failed and we were unable to recover it. 00:36:08.558 [2024-11-02 14:52:00.463651] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.558 [2024-11-02 14:52:00.463772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.558 [2024-11-02 14:52:00.463804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.558 [2024-11-02 14:52:00.463819] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.558 [2024-11-02 14:52:00.463831] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.558 [2024-11-02 14:52:00.463862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.558 qpair failed and we were unable to recover it. 00:36:08.558 [2024-11-02 14:52:00.473699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.558 [2024-11-02 14:52:00.473854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.558 [2024-11-02 14:52:00.473879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.558 [2024-11-02 14:52:00.473893] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.558 [2024-11-02 14:52:00.473906] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.558 [2024-11-02 14:52:00.473936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.558 qpair failed and we were unable to recover it. 
00:36:08.558 [2024-11-02 14:52:00.483788] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.558 [2024-11-02 14:52:00.483911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.558 [2024-11-02 14:52:00.483937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.558 [2024-11-02 14:52:00.483951] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.558 [2024-11-02 14:52:00.483964] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.558 [2024-11-02 14:52:00.483997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.558 qpair failed and we were unable to recover it. 00:36:08.558 [2024-11-02 14:52:00.493750] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.558 [2024-11-02 14:52:00.493889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.558 [2024-11-02 14:52:00.493915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.558 [2024-11-02 14:52:00.493929] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.558 [2024-11-02 14:52:00.493941] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.558 [2024-11-02 14:52:00.493973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.558 qpair failed and we were unable to recover it. 00:36:08.558 [2024-11-02 14:52:00.503762] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.558 [2024-11-02 14:52:00.503884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.558 [2024-11-02 14:52:00.503910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.558 [2024-11-02 14:52:00.503923] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.558 [2024-11-02 14:52:00.503936] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.558 [2024-11-02 14:52:00.503971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.558 qpair failed and we were unable to recover it. 
00:36:08.558 [2024-11-02 14:52:00.513815] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.558 [2024-11-02 14:52:00.513939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.558 [2024-11-02 14:52:00.513965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.558 [2024-11-02 14:52:00.513979] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.558 [2024-11-02 14:52:00.513995] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.558 [2024-11-02 14:52:00.514038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.558 qpair failed and we were unable to recover it. 00:36:08.558 [2024-11-02 14:52:00.523801] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.558 [2024-11-02 14:52:00.523951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.558 [2024-11-02 14:52:00.523977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.558 [2024-11-02 14:52:00.523991] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.558 [2024-11-02 14:52:00.524004] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.558 [2024-11-02 14:52:00.524034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.558 qpair failed and we were unable to recover it. 00:36:08.558 [2024-11-02 14:52:00.533839] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.558 [2024-11-02 14:52:00.533973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.558 [2024-11-02 14:52:00.533998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.558 [2024-11-02 14:52:00.534012] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.558 [2024-11-02 14:52:00.534025] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.558 [2024-11-02 14:52:00.534054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.558 qpair failed and we were unable to recover it. 
00:36:08.558 [2024-11-02 14:52:00.543952] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.558 [2024-11-02 14:52:00.544086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.558 [2024-11-02 14:52:00.544112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.558 [2024-11-02 14:52:00.544126] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.558 [2024-11-02 14:52:00.544139] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.558 [2024-11-02 14:52:00.544168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.558 qpair failed and we were unable to recover it. 00:36:08.558 [2024-11-02 14:52:00.553880] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.558 [2024-11-02 14:52:00.554004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.558 [2024-11-02 14:52:00.554035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.558 [2024-11-02 14:52:00.554050] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.558 [2024-11-02 14:52:00.554061] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.558 [2024-11-02 14:52:00.554090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.558 qpair failed and we were unable to recover it. 00:36:08.558 [2024-11-02 14:52:00.563983] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.558 [2024-11-02 14:52:00.564144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.558 [2024-11-02 14:52:00.564170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.558 [2024-11-02 14:52:00.564183] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.558 [2024-11-02 14:52:00.564195] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.558 [2024-11-02 14:52:00.564224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.558 qpair failed and we were unable to recover it. 
00:36:08.558 [2024-11-02 14:52:00.573961] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.558 [2024-11-02 14:52:00.574092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.558 [2024-11-02 14:52:00.574117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.558 [2024-11-02 14:52:00.574131] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.558 [2024-11-02 14:52:00.574144] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.558 [2024-11-02 14:52:00.574175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.558 qpair failed and we were unable to recover it. 00:36:08.558 [2024-11-02 14:52:00.584057] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.559 [2024-11-02 14:52:00.584174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.559 [2024-11-02 14:52:00.584200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.559 [2024-11-02 14:52:00.584213] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.559 [2024-11-02 14:52:00.584226] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.559 [2024-11-02 14:52:00.584264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.559 qpair failed and we were unable to recover it. 00:36:08.559 [2024-11-02 14:52:00.593993] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.559 [2024-11-02 14:52:00.594120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.559 [2024-11-02 14:52:00.594145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.559 [2024-11-02 14:52:00.594158] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.559 [2024-11-02 14:52:00.594175] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.559 [2024-11-02 14:52:00.594204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.559 qpair failed and we were unable to recover it. 
00:36:08.559 [2024-11-02 14:52:00.604036] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.559 [2024-11-02 14:52:00.604150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.559 [2024-11-02 14:52:00.604176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.559 [2024-11-02 14:52:00.604190] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.559 [2024-11-02 14:52:00.604203] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.559 [2024-11-02 14:52:00.604231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.559 qpair failed and we were unable to recover it. 00:36:08.818 [2024-11-02 14:52:00.614080] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.818 [2024-11-02 14:52:00.614204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.818 [2024-11-02 14:52:00.614229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.818 [2024-11-02 14:52:00.614243] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.818 [2024-11-02 14:52:00.614261] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.818 [2024-11-02 14:52:00.614294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.818 qpair failed and we were unable to recover it. 00:36:08.818 [2024-11-02 14:52:00.624097] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.818 [2024-11-02 14:52:00.624225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.818 [2024-11-02 14:52:00.624251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.818 [2024-11-02 14:52:00.624276] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.818 [2024-11-02 14:52:00.624289] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.818 [2024-11-02 14:52:00.624318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.818 qpair failed and we were unable to recover it. 
00:36:08.818 [2024-11-02 14:52:00.634142] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.818 [2024-11-02 14:52:00.634283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.818 [2024-11-02 14:52:00.634308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.818 [2024-11-02 14:52:00.634322] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.818 [2024-11-02 14:52:00.634334] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.818 [2024-11-02 14:52:00.634364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.818 qpair failed and we were unable to recover it. 00:36:08.818 [2024-11-02 14:52:00.644150] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.818 [2024-11-02 14:52:00.644287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.818 [2024-11-02 14:52:00.644313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.818 [2024-11-02 14:52:00.644327] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.818 [2024-11-02 14:52:00.644340] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.818 [2024-11-02 14:52:00.644370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.818 qpair failed and we were unable to recover it. 00:36:08.818 [2024-11-02 14:52:00.654281] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.818 [2024-11-02 14:52:00.654415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.818 [2024-11-02 14:52:00.654439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.818 [2024-11-02 14:52:00.654453] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.818 [2024-11-02 14:52:00.654466] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.818 [2024-11-02 14:52:00.654495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.818 qpair failed and we were unable to recover it. 
00:36:08.818 [2024-11-02 14:52:00.664204] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.818 [2024-11-02 14:52:00.664338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.818 [2024-11-02 14:52:00.664364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.818 [2024-11-02 14:52:00.664378] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.818 [2024-11-02 14:52:00.664391] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.818 [2024-11-02 14:52:00.664420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.818 qpair failed and we were unable to recover it. 00:36:08.818 [2024-11-02 14:52:00.674262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.818 [2024-11-02 14:52:00.674387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.818 [2024-11-02 14:52:00.674412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.818 [2024-11-02 14:52:00.674426] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.818 [2024-11-02 14:52:00.674439] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.818 [2024-11-02 14:52:00.674469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.818 qpair failed and we were unable to recover it. 00:36:08.818 [2024-11-02 14:52:00.684314] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.818 [2024-11-02 14:52:00.684484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.818 [2024-11-02 14:52:00.684510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.818 [2024-11-02 14:52:00.684523] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.818 [2024-11-02 14:52:00.684541] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.818 [2024-11-02 14:52:00.684574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.818 qpair failed and we were unable to recover it. 
00:36:08.818 [2024-11-02 14:52:00.694322] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.818 [2024-11-02 14:52:00.694447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.818 [2024-11-02 14:52:00.694473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.818 [2024-11-02 14:52:00.694486] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.818 [2024-11-02 14:52:00.694499] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.818 [2024-11-02 14:52:00.694529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.818 qpair failed and we were unable to recover it. 00:36:08.818 [2024-11-02 14:52:00.704383] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.818 [2024-11-02 14:52:00.704518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.818 [2024-11-02 14:52:00.704544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.818 [2024-11-02 14:52:00.704558] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.818 [2024-11-02 14:52:00.704570] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.818 [2024-11-02 14:52:00.704600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.818 qpair failed and we were unable to recover it. 00:36:08.818 [2024-11-02 14:52:00.714359] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.818 [2024-11-02 14:52:00.714528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.818 [2024-11-02 14:52:00.714554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.818 [2024-11-02 14:52:00.714568] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.818 [2024-11-02 14:52:00.714580] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.818 [2024-11-02 14:52:00.714609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.818 qpair failed and we were unable to recover it. 
00:36:08.818 [2024-11-02 14:52:00.724372] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.818 [2024-11-02 14:52:00.724488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.818 [2024-11-02 14:52:00.724514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.818 [2024-11-02 14:52:00.724528] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.818 [2024-11-02 14:52:00.724540] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.818 [2024-11-02 14:52:00.724570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.818 qpair failed and we were unable to recover it. 00:36:08.818 [2024-11-02 14:52:00.734430] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.818 [2024-11-02 14:52:00.734562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.818 [2024-11-02 14:52:00.734587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.818 [2024-11-02 14:52:00.734601] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.818 [2024-11-02 14:52:00.734613] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.819 [2024-11-02 14:52:00.734643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.819 qpair failed and we were unable to recover it. 00:36:08.819 [2024-11-02 14:52:00.744485] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.819 [2024-11-02 14:52:00.744608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.819 [2024-11-02 14:52:00.744634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.819 [2024-11-02 14:52:00.744648] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.819 [2024-11-02 14:52:00.744661] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.819 [2024-11-02 14:52:00.744690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.819 qpair failed and we were unable to recover it. 
00:36:08.819 [2024-11-02 14:52:00.754520] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.819 [2024-11-02 14:52:00.754685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.819 [2024-11-02 14:52:00.754711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.819 [2024-11-02 14:52:00.754725] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.819 [2024-11-02 14:52:00.754737] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.819 [2024-11-02 14:52:00.754767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.819 qpair failed and we were unable to recover it. 00:36:08.819 [2024-11-02 14:52:00.764498] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.819 [2024-11-02 14:52:00.764618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.819 [2024-11-02 14:52:00.764643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.819 [2024-11-02 14:52:00.764656] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.819 [2024-11-02 14:52:00.764669] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.819 [2024-11-02 14:52:00.764712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.819 qpair failed and we were unable to recover it. 00:36:08.819 [2024-11-02 14:52:00.774652] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.819 [2024-11-02 14:52:00.774789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.819 [2024-11-02 14:52:00.774815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.819 [2024-11-02 14:52:00.774834] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.819 [2024-11-02 14:52:00.774848] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.819 [2024-11-02 14:52:00.774878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.819 qpair failed and we were unable to recover it. 
00:36:08.819 [2024-11-02 14:52:00.784653] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.819 [2024-11-02 14:52:00.784802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.819 [2024-11-02 14:52:00.784828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.819 [2024-11-02 14:52:00.784842] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.819 [2024-11-02 14:52:00.784855] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.819 [2024-11-02 14:52:00.784885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.819 qpair failed and we were unable to recover it. 00:36:08.819 [2024-11-02 14:52:00.794593] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.819 [2024-11-02 14:52:00.794712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.819 [2024-11-02 14:52:00.794738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.819 [2024-11-02 14:52:00.794752] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.819 [2024-11-02 14:52:00.794764] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.819 [2024-11-02 14:52:00.794794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.819 qpair failed and we were unable to recover it. 00:36:08.819 [2024-11-02 14:52:00.804632] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.819 [2024-11-02 14:52:00.804754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.819 [2024-11-02 14:52:00.804780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.819 [2024-11-02 14:52:00.804793] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.819 [2024-11-02 14:52:00.804806] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.819 [2024-11-02 14:52:00.804835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.819 qpair failed and we were unable to recover it. 
00:36:08.819 [2024-11-02 14:52:00.814688] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.819 [2024-11-02 14:52:00.814810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.819 [2024-11-02 14:52:00.814836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.819 [2024-11-02 14:52:00.814849] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.819 [2024-11-02 14:52:00.814861] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.819 [2024-11-02 14:52:00.814890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.819 qpair failed and we were unable to recover it. 00:36:08.819 [2024-11-02 14:52:00.824675] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.819 [2024-11-02 14:52:00.824810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.819 [2024-11-02 14:52:00.824836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.819 [2024-11-02 14:52:00.824849] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.819 [2024-11-02 14:52:00.824862] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.819 [2024-11-02 14:52:00.824891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.819 qpair failed and we were unable to recover it. 00:36:08.819 [2024-11-02 14:52:00.834752] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.819 [2024-11-02 14:52:00.834910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.819 [2024-11-02 14:52:00.834936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.819 [2024-11-02 14:52:00.834950] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.819 [2024-11-02 14:52:00.834962] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.819 [2024-11-02 14:52:00.834991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.819 qpair failed and we were unable to recover it. 
00:36:08.819 [2024-11-02 14:52:00.844760] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.819 [2024-11-02 14:52:00.844900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.819 [2024-11-02 14:52:00.844926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.819 [2024-11-02 14:52:00.844939] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.819 [2024-11-02 14:52:00.844953] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.819 [2024-11-02 14:52:00.844982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.819 qpair failed and we were unable to recover it. 00:36:08.819 [2024-11-02 14:52:00.854840] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.819 [2024-11-02 14:52:00.854963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.819 [2024-11-02 14:52:00.854988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.819 [2024-11-02 14:52:00.855002] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.819 [2024-11-02 14:52:00.855014] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.819 [2024-11-02 14:52:00.855045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.819 qpair failed and we were unable to recover it. 00:36:08.819 [2024-11-02 14:52:00.864810] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.819 [2024-11-02 14:52:00.864935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.819 [2024-11-02 14:52:00.864961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.820 [2024-11-02 14:52:00.864981] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.820 [2024-11-02 14:52:00.864995] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:08.820 [2024-11-02 14:52:00.865037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.820 qpair failed and we were unable to recover it. 
00:36:09.078 [2024-11-02 14:52:00.874804] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.078 [2024-11-02 14:52:00.874957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.078 [2024-11-02 14:52:00.874983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.078 [2024-11-02 14:52:00.874997] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.078 [2024-11-02 14:52:00.875010] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.078 [2024-11-02 14:52:00.875040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.078 qpair failed and we were unable to recover it. 00:36:09.078 [2024-11-02 14:52:00.884844] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.078 [2024-11-02 14:52:00.884965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.078 [2024-11-02 14:52:00.884991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.078 [2024-11-02 14:52:00.885005] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.078 [2024-11-02 14:52:00.885018] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.078 [2024-11-02 14:52:00.885049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.078 qpair failed and we were unable to recover it. 00:36:09.078 [2024-11-02 14:52:00.894873] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.078 [2024-11-02 14:52:00.895010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.078 [2024-11-02 14:52:00.895036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.078 [2024-11-02 14:52:00.895049] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.078 [2024-11-02 14:52:00.895062] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.078 [2024-11-02 14:52:00.895092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.078 qpair failed and we were unable to recover it. 
00:36:09.078 [2024-11-02 14:52:00.904900] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.078 [2024-11-02 14:52:00.905017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.078 [2024-11-02 14:52:00.905043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.078 [2024-11-02 14:52:00.905057] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.078 [2024-11-02 14:52:00.905069] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.078 [2024-11-02 14:52:00.905110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.078 qpair failed and we were unable to recover it. 00:36:09.079 [2024-11-02 14:52:00.914931] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.079 [2024-11-02 14:52:00.915054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.079 [2024-11-02 14:52:00.915081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.079 [2024-11-02 14:52:00.915094] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.079 [2024-11-02 14:52:00.915107] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.079 [2024-11-02 14:52:00.915135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.079 qpair failed and we were unable to recover it. 00:36:09.079 [2024-11-02 14:52:00.924928] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.079 [2024-11-02 14:52:00.925055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.079 [2024-11-02 14:52:00.925080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.079 [2024-11-02 14:52:00.925094] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.079 [2024-11-02 14:52:00.925106] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.079 [2024-11-02 14:52:00.925137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.079 qpair failed and we were unable to recover it. 
00:36:09.079 [2024-11-02 14:52:00.934995] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.079 [2024-11-02 14:52:00.935174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.079 [2024-11-02 14:52:00.935200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.079 [2024-11-02 14:52:00.935213] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.079 [2024-11-02 14:52:00.935225] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.079 [2024-11-02 14:52:00.935262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.079 qpair failed and we were unable to recover it. 00:36:09.079 [2024-11-02 14:52:00.945008] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.079 [2024-11-02 14:52:00.945127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.079 [2024-11-02 14:52:00.945153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.079 [2024-11-02 14:52:00.945166] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.079 [2024-11-02 14:52:00.945178] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.079 [2024-11-02 14:52:00.945208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.079 qpair failed and we were unable to recover it. 00:36:09.079 [2024-11-02 14:52:00.955017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.079 [2024-11-02 14:52:00.955143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.079 [2024-11-02 14:52:00.955174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.079 [2024-11-02 14:52:00.955189] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.079 [2024-11-02 14:52:00.955202] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.079 [2024-11-02 14:52:00.955230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.079 qpair failed and we were unable to recover it. 
00:36:09.079 [2024-11-02 14:52:00.965043] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.079 [2024-11-02 14:52:00.965158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.079 [2024-11-02 14:52:00.965184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.079 [2024-11-02 14:52:00.965197] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.079 [2024-11-02 14:52:00.965209] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.079 [2024-11-02 14:52:00.965239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.079 qpair failed and we were unable to recover it. 00:36:09.079 [2024-11-02 14:52:00.975089] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.079 [2024-11-02 14:52:00.975221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.079 [2024-11-02 14:52:00.975247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.079 [2024-11-02 14:52:00.975269] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.079 [2024-11-02 14:52:00.975284] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.079 [2024-11-02 14:52:00.975325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.079 qpair failed and we were unable to recover it. 00:36:09.079 [2024-11-02 14:52:00.985243] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.079 [2024-11-02 14:52:00.985433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.079 [2024-11-02 14:52:00.985458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.079 [2024-11-02 14:52:00.985473] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.079 [2024-11-02 14:52:00.985486] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.079 [2024-11-02 14:52:00.985515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.079 qpair failed and we were unable to recover it. 
00:36:09.079 [2024-11-02 14:52:00.995131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.079 [2024-11-02 14:52:00.995250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.079 [2024-11-02 14:52:00.995285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.079 [2024-11-02 14:52:00.995299] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.079 [2024-11-02 14:52:00.995311] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.079 [2024-11-02 14:52:00.995355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.079 qpair failed and we were unable to recover it. 00:36:09.079 [2024-11-02 14:52:01.005194] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.079 [2024-11-02 14:52:01.005314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.079 [2024-11-02 14:52:01.005340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.079 [2024-11-02 14:52:01.005353] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.079 [2024-11-02 14:52:01.005366] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.079 [2024-11-02 14:52:01.005395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.079 qpair failed and we were unable to recover it. 00:36:09.079 [2024-11-02 14:52:01.015276] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.079 [2024-11-02 14:52:01.015444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.079 [2024-11-02 14:52:01.015470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.079 [2024-11-02 14:52:01.015483] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.079 [2024-11-02 14:52:01.015496] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.079 [2024-11-02 14:52:01.015526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.079 qpair failed and we were unable to recover it. 
00:36:09.079 [2024-11-02 14:52:01.025272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.079 [2024-11-02 14:52:01.025427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.079 [2024-11-02 14:52:01.025455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.079 [2024-11-02 14:52:01.025469] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.079 [2024-11-02 14:52:01.025486] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.079 [2024-11-02 14:52:01.025517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.079 qpair failed and we were unable to recover it. 00:36:09.079 [2024-11-02 14:52:01.035241] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.079 [2024-11-02 14:52:01.035374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.079 [2024-11-02 14:52:01.035400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.079 [2024-11-02 14:52:01.035413] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.079 [2024-11-02 14:52:01.035426] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.079 [2024-11-02 14:52:01.035457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.079 qpair failed and we were unable to recover it. 00:36:09.079 [2024-11-02 14:52:01.045319] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.079 [2024-11-02 14:52:01.045442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.080 [2024-11-02 14:52:01.045474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.080 [2024-11-02 14:52:01.045489] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.080 [2024-11-02 14:52:01.045501] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.080 [2024-11-02 14:52:01.045531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.080 qpair failed and we were unable to recover it. 
00:36:09.080 [2024-11-02 14:52:01.055359] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.080 [2024-11-02 14:52:01.055499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.080 [2024-11-02 14:52:01.055525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.080 [2024-11-02 14:52:01.055539] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.080 [2024-11-02 14:52:01.055551] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.080 [2024-11-02 14:52:01.055594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.080 qpair failed and we were unable to recover it. 00:36:09.080 [2024-11-02 14:52:01.065380] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.080 [2024-11-02 14:52:01.065534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.080 [2024-11-02 14:52:01.065560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.080 [2024-11-02 14:52:01.065573] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.080 [2024-11-02 14:52:01.065586] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.080 [2024-11-02 14:52:01.065627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.080 qpair failed and we were unable to recover it. 00:36:09.080 [2024-11-02 14:52:01.075444] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.080 [2024-11-02 14:52:01.075568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.080 [2024-11-02 14:52:01.075594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.080 [2024-11-02 14:52:01.075608] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.080 [2024-11-02 14:52:01.075620] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.080 [2024-11-02 14:52:01.075651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.080 qpair failed and we were unable to recover it. 
00:36:09.080 [2024-11-02 14:52:01.085397] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.080 [2024-11-02 14:52:01.085528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.080 [2024-11-02 14:52:01.085555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.080 [2024-11-02 14:52:01.085569] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.080 [2024-11-02 14:52:01.085587] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.080 [2024-11-02 14:52:01.085617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.080 qpair failed and we were unable to recover it. 00:36:09.080 [2024-11-02 14:52:01.095433] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.080 [2024-11-02 14:52:01.095561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.080 [2024-11-02 14:52:01.095587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.080 [2024-11-02 14:52:01.095601] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.080 [2024-11-02 14:52:01.095613] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.080 [2024-11-02 14:52:01.095642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.080 qpair failed and we were unable to recover it. 00:36:09.080 [2024-11-02 14:52:01.105454] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.080 [2024-11-02 14:52:01.105576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.080 [2024-11-02 14:52:01.105602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.080 [2024-11-02 14:52:01.105615] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.080 [2024-11-02 14:52:01.105628] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.080 [2024-11-02 14:52:01.105658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.080 qpair failed and we were unable to recover it. 
00:36:09.080 [2024-11-02 14:52:01.115464] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.080 [2024-11-02 14:52:01.115582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.080 [2024-11-02 14:52:01.115608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.080 [2024-11-02 14:52:01.115622] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.080 [2024-11-02 14:52:01.115634] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.080 [2024-11-02 14:52:01.115665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.080 qpair failed and we were unable to recover it. 00:36:09.080 [2024-11-02 14:52:01.125559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.080 [2024-11-02 14:52:01.125732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.080 [2024-11-02 14:52:01.125758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.080 [2024-11-02 14:52:01.125772] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.080 [2024-11-02 14:52:01.125784] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.080 [2024-11-02 14:52:01.125814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.080 qpair failed and we were unable to recover it. 00:36:09.339 [2024-11-02 14:52:01.135666] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.339 [2024-11-02 14:52:01.135801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.339 [2024-11-02 14:52:01.135827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.339 [2024-11-02 14:52:01.135841] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.339 [2024-11-02 14:52:01.135853] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.339 [2024-11-02 14:52:01.135883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.339 qpair failed and we were unable to recover it. 
00:36:09.339 [2024-11-02 14:52:01.145618] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.339 [2024-11-02 14:52:01.145742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.339 [2024-11-02 14:52:01.145768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.339 [2024-11-02 14:52:01.145781] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.339 [2024-11-02 14:52:01.145793] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.339 [2024-11-02 14:52:01.145823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.339 qpair failed and we were unable to recover it. 00:36:09.339 [2024-11-02 14:52:01.155611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.339 [2024-11-02 14:52:01.155731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.339 [2024-11-02 14:52:01.155756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.339 [2024-11-02 14:52:01.155770] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.339 [2024-11-02 14:52:01.155783] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.339 [2024-11-02 14:52:01.155813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.339 qpair failed and we were unable to recover it. 00:36:09.339 [2024-11-02 14:52:01.165675] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.339 [2024-11-02 14:52:01.165842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.339 [2024-11-02 14:52:01.165867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.339 [2024-11-02 14:52:01.165881] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.339 [2024-11-02 14:52:01.165894] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.339 [2024-11-02 14:52:01.165924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.339 qpair failed and we were unable to recover it. 
00:36:09.339 [2024-11-02 14:52:01.175665] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.339 [2024-11-02 14:52:01.175797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.339 [2024-11-02 14:52:01.175823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.339 [2024-11-02 14:52:01.175836] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.339 [2024-11-02 14:52:01.175855] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.339 [2024-11-02 14:52:01.175887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.339 qpair failed and we were unable to recover it. 00:36:09.339 [2024-11-02 14:52:01.185736] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.339 [2024-11-02 14:52:01.185888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.339 [2024-11-02 14:52:01.185915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.339 [2024-11-02 14:52:01.185929] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.339 [2024-11-02 14:52:01.185942] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.339 [2024-11-02 14:52:01.185971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.339 qpair failed and we were unable to recover it. 00:36:09.339 [2024-11-02 14:52:01.195759] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.339 [2024-11-02 14:52:01.195883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.339 [2024-11-02 14:52:01.195913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.339 [2024-11-02 14:52:01.195927] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.339 [2024-11-02 14:52:01.195940] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.339 [2024-11-02 14:52:01.195969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.339 qpair failed and we were unable to recover it. 
00:36:09.339 [2024-11-02 14:52:01.205727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.339 [2024-11-02 14:52:01.205851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.339 [2024-11-02 14:52:01.205877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.339 [2024-11-02 14:52:01.205890] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.339 [2024-11-02 14:52:01.205903] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.339 [2024-11-02 14:52:01.205948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.339 qpair failed and we were unable to recover it. 00:36:09.339 [2024-11-02 14:52:01.215783] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.339 [2024-11-02 14:52:01.215911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.339 [2024-11-02 14:52:01.215936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.339 [2024-11-02 14:52:01.215950] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.339 [2024-11-02 14:52:01.215962] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.339 [2024-11-02 14:52:01.215992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.339 qpair failed and we were unable to recover it. 00:36:09.339 [2024-11-02 14:52:01.225836] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.339 [2024-11-02 14:52:01.225990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.339 [2024-11-02 14:52:01.226016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.339 [2024-11-02 14:52:01.226031] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.339 [2024-11-02 14:52:01.226044] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.339 [2024-11-02 14:52:01.226072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.339 qpair failed and we were unable to recover it. 
00:36:09.339 [2024-11-02 14:52:01.235857] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.339 [2024-11-02 14:52:01.236032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.339 [2024-11-02 14:52:01.236064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.339 [2024-11-02 14:52:01.236083] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.339 [2024-11-02 14:52:01.236097] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.339 [2024-11-02 14:52:01.236126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.339 qpair failed and we were unable to recover it. 00:36:09.339 [2024-11-02 14:52:01.245884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.339 [2024-11-02 14:52:01.246018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.340 [2024-11-02 14:52:01.246046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.340 [2024-11-02 14:52:01.246060] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.340 [2024-11-02 14:52:01.246073] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.340 [2024-11-02 14:52:01.246103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.340 qpair failed and we were unable to recover it. 00:36:09.340 [2024-11-02 14:52:01.255890] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.340 [2024-11-02 14:52:01.256018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.340 [2024-11-02 14:52:01.256044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.340 [2024-11-02 14:52:01.256057] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.340 [2024-11-02 14:52:01.256070] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.340 [2024-11-02 14:52:01.256100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.340 qpair failed and we were unable to recover it. 
00:36:09.340 [2024-11-02 14:52:01.265940] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.340 [2024-11-02 14:52:01.266087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.340 [2024-11-02 14:52:01.266113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.340 [2024-11-02 14:52:01.266133] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.340 [2024-11-02 14:52:01.266146] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.340 [2024-11-02 14:52:01.266176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.340 qpair failed and we were unable to recover it. 00:36:09.340 [2024-11-02 14:52:01.275942] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.340 [2024-11-02 14:52:01.276068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.340 [2024-11-02 14:52:01.276094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.340 [2024-11-02 14:52:01.276108] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.340 [2024-11-02 14:52:01.276121] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.340 [2024-11-02 14:52:01.276151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.340 qpair failed and we were unable to recover it. 00:36:09.340 [2024-11-02 14:52:01.285961] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.340 [2024-11-02 14:52:01.286097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.340 [2024-11-02 14:52:01.286124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.340 [2024-11-02 14:52:01.286138] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.340 [2024-11-02 14:52:01.286150] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.340 [2024-11-02 14:52:01.286192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.340 qpair failed and we were unable to recover it. 
00:36:09.340 [2024-11-02 14:52:01.295994] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.340 [2024-11-02 14:52:01.296126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.340 [2024-11-02 14:52:01.296154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.340 [2024-11-02 14:52:01.296172] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.340 [2024-11-02 14:52:01.296187] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.340 [2024-11-02 14:52:01.296216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.340 qpair failed and we were unable to recover it. 00:36:09.340 [2024-11-02 14:52:01.306029] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.340 [2024-11-02 14:52:01.306195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.340 [2024-11-02 14:52:01.306221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.340 [2024-11-02 14:52:01.306235] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.340 [2024-11-02 14:52:01.306248] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.340 [2024-11-02 14:52:01.306287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.340 qpair failed and we were unable to recover it. 00:36:09.340 [2024-11-02 14:52:01.316057] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.340 [2024-11-02 14:52:01.316179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.340 [2024-11-02 14:52:01.316206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.340 [2024-11-02 14:52:01.316219] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.340 [2024-11-02 14:52:01.316232] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.340 [2024-11-02 14:52:01.316274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.340 qpair failed and we were unable to recover it. 
00:36:09.340 [2024-11-02 14:52:01.326063] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.340 [2024-11-02 14:52:01.326209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.340 [2024-11-02 14:52:01.326235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.340 [2024-11-02 14:52:01.326249] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.340 [2024-11-02 14:52:01.326272] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.340 [2024-11-02 14:52:01.326303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.340 qpair failed and we were unable to recover it. 00:36:09.340 [2024-11-02 14:52:01.336104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.340 [2024-11-02 14:52:01.336239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.340 [2024-11-02 14:52:01.336280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.340 [2024-11-02 14:52:01.336296] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.340 [2024-11-02 14:52:01.336308] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.340 [2024-11-02 14:52:01.336338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.340 qpair failed and we were unable to recover it. 00:36:09.340 [2024-11-02 14:52:01.346214] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.340 [2024-11-02 14:52:01.346354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.340 [2024-11-02 14:52:01.346381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.340 [2024-11-02 14:52:01.346395] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.340 [2024-11-02 14:52:01.346407] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.340 [2024-11-02 14:52:01.346437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.340 qpair failed and we were unable to recover it. 
00:36:09.340 [2024-11-02 14:52:01.356201] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.340 [2024-11-02 14:52:01.356327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.340 [2024-11-02 14:52:01.356353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.340 [2024-11-02 14:52:01.356373] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.340 [2024-11-02 14:52:01.356387] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.340 [2024-11-02 14:52:01.356417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.340 qpair failed and we were unable to recover it. 00:36:09.340 [2024-11-02 14:52:01.366174] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.340 [2024-11-02 14:52:01.366310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.340 [2024-11-02 14:52:01.366337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.340 [2024-11-02 14:52:01.366351] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.340 [2024-11-02 14:52:01.366363] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.340 [2024-11-02 14:52:01.366394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.340 qpair failed and we were unable to recover it. 00:36:09.340 [2024-11-02 14:52:01.376204] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.340 [2024-11-02 14:52:01.376356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.340 [2024-11-02 14:52:01.376382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.341 [2024-11-02 14:52:01.376396] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.341 [2024-11-02 14:52:01.376409] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.341 [2024-11-02 14:52:01.376438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.341 qpair failed and we were unable to recover it. 
00:36:09.341 [2024-11-02 14:52:01.386265] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.341 [2024-11-02 14:52:01.386391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.341 [2024-11-02 14:52:01.386417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.341 [2024-11-02 14:52:01.386430] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.341 [2024-11-02 14:52:01.386443] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.341 [2024-11-02 14:52:01.386473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.341 qpair failed and we were unable to recover it. 00:36:09.599 [2024-11-02 14:52:01.396384] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.599 [2024-11-02 14:52:01.396517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.599 [2024-11-02 14:52:01.396543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.599 [2024-11-02 14:52:01.396556] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.599 [2024-11-02 14:52:01.396569] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.599 [2024-11-02 14:52:01.396600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.599 qpair failed and we were unable to recover it. 00:36:09.599 [2024-11-02 14:52:01.406298] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.599 [2024-11-02 14:52:01.406412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.599 [2024-11-02 14:52:01.406438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.599 [2024-11-02 14:52:01.406452] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.599 [2024-11-02 14:52:01.406464] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.599 [2024-11-02 14:52:01.406495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.599 qpair failed and we were unable to recover it. 
00:36:09.599 [2024-11-02 14:52:01.416321] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.599 [2024-11-02 14:52:01.416448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.599 [2024-11-02 14:52:01.416474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.599 [2024-11-02 14:52:01.416488] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.599 [2024-11-02 14:52:01.416501] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.600 [2024-11-02 14:52:01.416532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.600 qpair failed and we were unable to recover it. 00:36:09.600 [2024-11-02 14:52:01.426397] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.600 [2024-11-02 14:52:01.426517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.600 [2024-11-02 14:52:01.426543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.600 [2024-11-02 14:52:01.426557] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.600 [2024-11-02 14:52:01.426570] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.600 [2024-11-02 14:52:01.426600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.600 qpair failed and we were unable to recover it. 00:36:09.600 [2024-11-02 14:52:01.436384] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.600 [2024-11-02 14:52:01.436511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.600 [2024-11-02 14:52:01.436537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.600 [2024-11-02 14:52:01.436551] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.600 [2024-11-02 14:52:01.436563] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.600 [2024-11-02 14:52:01.436604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.600 qpair failed and we were unable to recover it. 
00:36:09.600 [2024-11-02 14:52:01.446398] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.600 [2024-11-02 14:52:01.446516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.600 [2024-11-02 14:52:01.446548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.600 [2024-11-02 14:52:01.446564] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.600 [2024-11-02 14:52:01.446577] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.600 [2024-11-02 14:52:01.446628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.600 qpair failed and we were unable to recover it. 00:36:09.600 [2024-11-02 14:52:01.456478] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.600 [2024-11-02 14:52:01.456607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.600 [2024-11-02 14:52:01.456640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.600 [2024-11-02 14:52:01.456654] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.600 [2024-11-02 14:52:01.456667] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.600 [2024-11-02 14:52:01.456696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.600 qpair failed and we were unable to recover it. 00:36:09.600 [2024-11-02 14:52:01.466491] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.600 [2024-11-02 14:52:01.466627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.600 [2024-11-02 14:52:01.466654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.600 [2024-11-02 14:52:01.466668] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.600 [2024-11-02 14:52:01.466681] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.600 [2024-11-02 14:52:01.466710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.600 qpair failed and we were unable to recover it. 
00:36:09.600 [2024-11-02 14:52:01.476501] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.600 [2024-11-02 14:52:01.476624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.600 [2024-11-02 14:52:01.476650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.600 [2024-11-02 14:52:01.476664] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.600 [2024-11-02 14:52:01.476677] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.600 [2024-11-02 14:52:01.476706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.600 qpair failed and we were unable to recover it. 00:36:09.600 [2024-11-02 14:52:01.486520] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.600 [2024-11-02 14:52:01.486635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.600 [2024-11-02 14:52:01.486661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.600 [2024-11-02 14:52:01.486675] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.600 [2024-11-02 14:52:01.486688] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.600 [2024-11-02 14:52:01.486723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.600 qpair failed and we were unable to recover it. 00:36:09.600 [2024-11-02 14:52:01.496577] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.600 [2024-11-02 14:52:01.496709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.600 [2024-11-02 14:52:01.496734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.600 [2024-11-02 14:52:01.496747] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.600 [2024-11-02 14:52:01.496760] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.600 [2024-11-02 14:52:01.496791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.600 qpair failed and we were unable to recover it. 
00:36:09.600 [2024-11-02 14:52:01.506582] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.600 [2024-11-02 14:52:01.506705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.600 [2024-11-02 14:52:01.506731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.600 [2024-11-02 14:52:01.506744] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.600 [2024-11-02 14:52:01.506756] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.600 [2024-11-02 14:52:01.506786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.600 qpair failed and we were unable to recover it. 00:36:09.600 [2024-11-02 14:52:01.516658] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.600 [2024-11-02 14:52:01.516782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.600 [2024-11-02 14:52:01.516808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.600 [2024-11-02 14:52:01.516822] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.600 [2024-11-02 14:52:01.516835] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.600 [2024-11-02 14:52:01.516866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.600 qpair failed and we were unable to recover it. 00:36:09.600 [2024-11-02 14:52:01.526631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.600 [2024-11-02 14:52:01.526755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.600 [2024-11-02 14:52:01.526780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.600 [2024-11-02 14:52:01.526794] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.600 [2024-11-02 14:52:01.526807] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.600 [2024-11-02 14:52:01.526837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.600 qpair failed and we were unable to recover it. 
00:36:09.600 [2024-11-02 14:52:01.536754] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.600 [2024-11-02 14:52:01.536888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.600 [2024-11-02 14:52:01.536919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.600 [2024-11-02 14:52:01.536934] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.600 [2024-11-02 14:52:01.536947] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.600 [2024-11-02 14:52:01.536977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.600 qpair failed and we were unable to recover it. 00:36:09.600 [2024-11-02 14:52:01.546794] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.600 [2024-11-02 14:52:01.546915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.600 [2024-11-02 14:52:01.546940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.600 [2024-11-02 14:52:01.546953] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.600 [2024-11-02 14:52:01.546966] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.600 [2024-11-02 14:52:01.546995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.600 qpair failed and we were unable to recover it. 00:36:09.601 [2024-11-02 14:52:01.556759] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.601 [2024-11-02 14:52:01.556882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.601 [2024-11-02 14:52:01.556908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.601 [2024-11-02 14:52:01.556922] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.601 [2024-11-02 14:52:01.556933] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.601 [2024-11-02 14:52:01.556962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.601 qpair failed and we were unable to recover it. 
00:36:09.601 [2024-11-02 14:52:01.566801] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.601 [2024-11-02 14:52:01.566925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.601 [2024-11-02 14:52:01.566951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.601 [2024-11-02 14:52:01.566965] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.601 [2024-11-02 14:52:01.566977] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.601 [2024-11-02 14:52:01.567008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.601 qpair failed and we were unable to recover it. 00:36:09.601 [2024-11-02 14:52:01.576818] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.601 [2024-11-02 14:52:01.576944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.601 [2024-11-02 14:52:01.576970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.601 [2024-11-02 14:52:01.576984] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.601 [2024-11-02 14:52:01.576996] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.601 [2024-11-02 14:52:01.577032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.601 qpair failed and we were unable to recover it. 00:36:09.601 [2024-11-02 14:52:01.586844] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.601 [2024-11-02 14:52:01.586982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.601 [2024-11-02 14:52:01.587008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.601 [2024-11-02 14:52:01.587021] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.601 [2024-11-02 14:52:01.587034] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.601 [2024-11-02 14:52:01.587063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.601 qpair failed and we were unable to recover it. 
00:36:09.601 [2024-11-02 14:52:01.596858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.601 [2024-11-02 14:52:01.596979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.601 [2024-11-02 14:52:01.597004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.601 [2024-11-02 14:52:01.597018] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.601 [2024-11-02 14:52:01.597029] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.601 [2024-11-02 14:52:01.597058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.601 qpair failed and we were unable to recover it. 00:36:09.601 [2024-11-02 14:52:01.606890] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.601 [2024-11-02 14:52:01.607009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.601 [2024-11-02 14:52:01.607035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.601 [2024-11-02 14:52:01.607049] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.601 [2024-11-02 14:52:01.607062] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.601 [2024-11-02 14:52:01.607091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.601 qpair failed and we were unable to recover it. 00:36:09.601 [2024-11-02 14:52:01.616931] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.601 [2024-11-02 14:52:01.617089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.601 [2024-11-02 14:52:01.617114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.601 [2024-11-02 14:52:01.617128] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.601 [2024-11-02 14:52:01.617141] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.601 [2024-11-02 14:52:01.617171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.601 qpair failed and we were unable to recover it. 
00:36:09.601 [2024-11-02 14:52:01.626962] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.601 [2024-11-02 14:52:01.627102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.601 [2024-11-02 14:52:01.627137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.601 [2024-11-02 14:52:01.627156] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.601 [2024-11-02 14:52:01.627169] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.601 [2024-11-02 14:52:01.627199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.601 qpair failed and we were unable to recover it. 00:36:09.601 [2024-11-02 14:52:01.636974] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.601 [2024-11-02 14:52:01.637104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.601 [2024-11-02 14:52:01.637130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.601 [2024-11-02 14:52:01.637144] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.601 [2024-11-02 14:52:01.637157] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.601 [2024-11-02 14:52:01.637186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.601 qpair failed and we were unable to recover it. 00:36:09.601 [2024-11-02 14:52:01.646979] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.601 [2024-11-02 14:52:01.647142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.601 [2024-11-02 14:52:01.647168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.601 [2024-11-02 14:52:01.647182] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.601 [2024-11-02 14:52:01.647195] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.601 [2024-11-02 14:52:01.647224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.601 qpair failed and we were unable to recover it. 
00:36:09.860 [2024-11-02 14:52:01.657015] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.860 [2024-11-02 14:52:01.657142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.860 [2024-11-02 14:52:01.657167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.860 [2024-11-02 14:52:01.657181] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.860 [2024-11-02 14:52:01.657193] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.860 [2024-11-02 14:52:01.657223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.860 qpair failed and we were unable to recover it. 00:36:09.860 [2024-11-02 14:52:01.667078] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.860 [2024-11-02 14:52:01.667216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.860 [2024-11-02 14:52:01.667243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.860 [2024-11-02 14:52:01.667274] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.860 [2024-11-02 14:52:01.667295] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.860 [2024-11-02 14:52:01.667326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.860 qpair failed and we were unable to recover it. 00:36:09.860 [2024-11-02 14:52:01.677083] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.860 [2024-11-02 14:52:01.677215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.860 [2024-11-02 14:52:01.677241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.860 [2024-11-02 14:52:01.677262] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.860 [2024-11-02 14:52:01.677278] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.860 [2024-11-02 14:52:01.677308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.860 qpair failed and we were unable to recover it. 
00:36:09.860 [2024-11-02 14:52:01.687085] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.860 [2024-11-02 14:52:01.687203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.860 [2024-11-02 14:52:01.687230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.860 [2024-11-02 14:52:01.687243] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.860 [2024-11-02 14:52:01.687263] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.861 [2024-11-02 14:52:01.687296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.861 qpair failed and we were unable to recover it. 00:36:09.861 [2024-11-02 14:52:01.697137] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.861 [2024-11-02 14:52:01.697275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.861 [2024-11-02 14:52:01.697301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.861 [2024-11-02 14:52:01.697314] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.861 [2024-11-02 14:52:01.697327] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.861 [2024-11-02 14:52:01.697370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.861 qpair failed and we were unable to recover it. 00:36:09.861 [2024-11-02 14:52:01.707246] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.861 [2024-11-02 14:52:01.707372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.861 [2024-11-02 14:52:01.707398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.861 [2024-11-02 14:52:01.707412] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.861 [2024-11-02 14:52:01.707424] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.861 [2024-11-02 14:52:01.707454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.861 qpair failed and we were unable to recover it. 
00:36:09.861 [2024-11-02 14:52:01.717180] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.861 [2024-11-02 14:52:01.717305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.861 [2024-11-02 14:52:01.717331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.861 [2024-11-02 14:52:01.717344] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.861 [2024-11-02 14:52:01.717357] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.861 [2024-11-02 14:52:01.717388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.861 qpair failed and we were unable to recover it. 00:36:09.861 [2024-11-02 14:52:01.727291] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.861 [2024-11-02 14:52:01.727406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.861 [2024-11-02 14:52:01.727432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.861 [2024-11-02 14:52:01.727446] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.861 [2024-11-02 14:52:01.727459] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.861 [2024-11-02 14:52:01.727489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.861 qpair failed and we were unable to recover it. 00:36:09.861 [2024-11-02 14:52:01.737242] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.861 [2024-11-02 14:52:01.737380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.861 [2024-11-02 14:52:01.737406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.861 [2024-11-02 14:52:01.737420] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.861 [2024-11-02 14:52:01.737432] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.861 [2024-11-02 14:52:01.737463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.861 qpair failed and we were unable to recover it. 
00:36:09.861 [2024-11-02 14:52:01.747326] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.861 [2024-11-02 14:52:01.747455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.861 [2024-11-02 14:52:01.747481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.861 [2024-11-02 14:52:01.747495] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.861 [2024-11-02 14:52:01.747508] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.861 [2024-11-02 14:52:01.747537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.861 qpair failed and we were unable to recover it. 00:36:09.861 [2024-11-02 14:52:01.757312] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.861 [2024-11-02 14:52:01.757434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.861 [2024-11-02 14:52:01.757460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.861 [2024-11-02 14:52:01.757480] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.861 [2024-11-02 14:52:01.757494] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.861 [2024-11-02 14:52:01.757523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.861 qpair failed and we were unable to recover it. 00:36:09.861 [2024-11-02 14:52:01.767446] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.861 [2024-11-02 14:52:01.767592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.861 [2024-11-02 14:52:01.767618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.861 [2024-11-02 14:52:01.767631] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.861 [2024-11-02 14:52:01.767643] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.861 [2024-11-02 14:52:01.767674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.861 qpair failed and we were unable to recover it. 
00:36:09.861 [2024-11-02 14:52:01.777378] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.861 [2024-11-02 14:52:01.777508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.861 [2024-11-02 14:52:01.777535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.861 [2024-11-02 14:52:01.777548] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.861 [2024-11-02 14:52:01.777561] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.861 [2024-11-02 14:52:01.777591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.861 qpair failed and we were unable to recover it. 00:36:09.861 [2024-11-02 14:52:01.787409] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.861 [2024-11-02 14:52:01.787548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.861 [2024-11-02 14:52:01.787574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.861 [2024-11-02 14:52:01.787588] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.861 [2024-11-02 14:52:01.787601] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.861 [2024-11-02 14:52:01.787631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.861 qpair failed and we were unable to recover it. 00:36:09.861 [2024-11-02 14:52:01.797413] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.861 [2024-11-02 14:52:01.797529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.861 [2024-11-02 14:52:01.797554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.861 [2024-11-02 14:52:01.797568] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.861 [2024-11-02 14:52:01.797581] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.861 [2024-11-02 14:52:01.797610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.861 qpair failed and we were unable to recover it. 
00:36:09.861 [2024-11-02 14:52:01.807558] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.861 [2024-11-02 14:52:01.807678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.861 [2024-11-02 14:52:01.807703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.861 [2024-11-02 14:52:01.807717] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.861 [2024-11-02 14:52:01.807730] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.861 [2024-11-02 14:52:01.807759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.861 qpair failed and we were unable to recover it. 00:36:09.861 [2024-11-02 14:52:01.817480] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.861 [2024-11-02 14:52:01.817607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.861 [2024-11-02 14:52:01.817633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.861 [2024-11-02 14:52:01.817647] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.861 [2024-11-02 14:52:01.817660] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.861 [2024-11-02 14:52:01.817689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.862 qpair failed and we were unable to recover it. 00:36:09.862 [2024-11-02 14:52:01.827613] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.862 [2024-11-02 14:52:01.827732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.862 [2024-11-02 14:52:01.827758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.862 [2024-11-02 14:52:01.827771] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.862 [2024-11-02 14:52:01.827784] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.862 [2024-11-02 14:52:01.827813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.862 qpair failed and we were unable to recover it. 
00:36:09.862 [2024-11-02 14:52:01.837586] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.862 [2024-11-02 14:52:01.837712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.862 [2024-11-02 14:52:01.837739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.862 [2024-11-02 14:52:01.837757] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.862 [2024-11-02 14:52:01.837771] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.862 [2024-11-02 14:52:01.837801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.862 qpair failed and we were unable to recover it. 00:36:09.862 [2024-11-02 14:52:01.847578] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.862 [2024-11-02 14:52:01.847701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.862 [2024-11-02 14:52:01.847728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.862 [2024-11-02 14:52:01.847748] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.862 [2024-11-02 14:52:01.847761] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.862 [2024-11-02 14:52:01.847790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.862 qpair failed and we were unable to recover it. 00:36:09.862 [2024-11-02 14:52:01.857653] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.862 [2024-11-02 14:52:01.857784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.862 [2024-11-02 14:52:01.857810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.862 [2024-11-02 14:52:01.857824] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.862 [2024-11-02 14:52:01.857836] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.862 [2024-11-02 14:52:01.857866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.862 qpair failed and we were unable to recover it. 
00:36:09.862 [2024-11-02 14:52:01.867639] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.862 [2024-11-02 14:52:01.867764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.862 [2024-11-02 14:52:01.867791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.862 [2024-11-02 14:52:01.867805] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.862 [2024-11-02 14:52:01.867817] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.862 [2024-11-02 14:52:01.867858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.862 qpair failed and we were unable to recover it. 00:36:09.862 [2024-11-02 14:52:01.877660] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.862 [2024-11-02 14:52:01.877826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.862 [2024-11-02 14:52:01.877852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.862 [2024-11-02 14:52:01.877866] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.862 [2024-11-02 14:52:01.877878] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54bc000b90 00:36:09.862 [2024-11-02 14:52:01.877921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.862 qpair failed and we were unable to recover it. 00:36:09.862 [2024-11-02 14:52:01.887715] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.862 [2024-11-02 14:52:01.887836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.862 [2024-11-02 14:52:01.887870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.862 [2024-11-02 14:52:01.887886] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.862 [2024-11-02 14:52:01.887900] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54c0000b90 00:36:09.862 [2024-11-02 14:52:01.887930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.862 qpair failed and we were unable to recover it. 
00:36:09.862 [2024-11-02 14:52:01.897694] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.862 [2024-11-02 14:52:01.897863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.862 [2024-11-02 14:52:01.897891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.862 [2024-11-02 14:52:01.897905] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.862 [2024-11-02 14:52:01.897919] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54c0000b90 00:36:09.862 [2024-11-02 14:52:01.897949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.862 qpair failed and we were unable to recover it. 00:36:09.862 [2024-11-02 14:52:01.907805] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.862 [2024-11-02 14:52:01.907928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.862 [2024-11-02 14:52:01.907960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.862 [2024-11-02 14:52:01.907976] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.862 [2024-11-02 14:52:01.907989] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54c8000b90 00:36:09.862 [2024-11-02 14:52:01.908019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.862 qpair failed and we were unable to recover it. 00:36:10.121 [2024-11-02 14:52:01.917771] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.121 [2024-11-02 14:52:01.917901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.121 [2024-11-02 14:52:01.917929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.121 [2024-11-02 14:52:01.917943] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.121 [2024-11-02 14:52:01.917956] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54c8000b90 00:36:10.121 [2024-11-02 14:52:01.917992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.121 qpair failed and we were unable to recover it. 
00:36:10.121 [2024-11-02 14:52:01.927824] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.121 [2024-11-02 14:52:01.927952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.121 [2024-11-02 14:52:01.927991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.121 [2024-11-02 14:52:01.928011] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.121 [2024-11-02 14:52:01.928025] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x648340 00:36:10.121 [2024-11-02 14:52:01.928055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:10.121 qpair failed and we were unable to recover it. 00:36:10.121 [2024-11-02 14:52:01.937840] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.121 [2024-11-02 14:52:01.937977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.121 [2024-11-02 14:52:01.938010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.121 [2024-11-02 14:52:01.938026] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.121 [2024-11-02 14:52:01.938039] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x648340 00:36:10.121 [2024-11-02 14:52:01.938068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:10.121 qpair failed and we were unable to recover it. 00:36:10.121 [2024-11-02 14:52:01.938180] nvme_ctrlr.c:4505:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:36:10.121 A controller has encountered a failure and is being reset. 00:36:10.121 [2024-11-02 14:52:01.938242] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656260 (9): Bad file descriptor 00:36:10.121 Controller properly reset. 00:36:10.121 Initializing NVMe Controllers 00:36:10.121 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:10.121 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:10.121 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:36:10.121 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:36:10.121 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:36:10.121 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:36:10.121 Initialization complete. Launching workers. 
00:36:10.121 Starting thread on core 1 00:36:10.121 Starting thread on core 2 00:36:10.121 Starting thread on core 3 00:36:10.121 Starting thread on core 0 00:36:10.121 14:52:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:36:10.121 00:36:10.121 real 0m10.878s 00:36:10.121 user 0m18.147s 00:36:10.121 sys 0m5.314s 00:36:10.121 14:52:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:10.121 14:52:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:10.121 ************************************ 00:36:10.121 END TEST nvmf_target_disconnect_tc2 00:36:10.121 ************************************ 00:36:10.121 14:52:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:36:10.121 14:52:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:36:10.121 14:52:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:36:10.121 14:52:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@512 -- # nvmfcleanup 00:36:10.121 14:52:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:36:10.121 14:52:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:10.121 14:52:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:36:10.121 14:52:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:10.121 14:52:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:10.121 rmmod nvme_tcp 00:36:10.121 rmmod nvme_fabrics 00:36:10.121 rmmod nvme_keyring 00:36:10.121 14:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:10.122 14:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:36:10.122 14:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:36:10.122 14:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@513 -- # '[' -n 1536319 ']' 00:36:10.122 14:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@514 -- # killprocess 1536319 00:36:10.122 14:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 1536319 ']' 00:36:10.122 14:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 1536319 00:36:10.122 14:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:36:10.122 14:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:10.122 14:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1536319 00:36:10.122 14:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:36:10.122 14:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:36:10.122 14:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1536319' 00:36:10.122 killing process with pid 1536319 00:36:10.122 14:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@969 -- # kill 1536319 00:36:10.122 14:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 1536319 00:36:10.381 14:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:36:10.381 14:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:36:10.381 14:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:36:10.381 14:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:36:10.381 14:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@787 -- # iptables-save 00:36:10.381 14:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:36:10.381 14:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@787 -- # iptables-restore 00:36:10.381 14:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:10.381 14:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:10.381 14:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:10.381 14:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:10.381 14:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:12.917 14:52:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:12.918 00:36:12.918 real 0m15.763s 00:36:12.918 user 0m44.917s 00:36:12.918 sys 0m7.325s 00:36:12.918 14:52:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:12.918 14:52:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:12.918 ************************************ 00:36:12.918 END TEST nvmf_target_disconnect 00:36:12.918 ************************************ 00:36:12.918 14:52:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:36:12.918 00:36:12.918 real 6m43.771s 00:36:12.918 user 17m4.962s 00:36:12.918 sys 1m29.538s 00:36:12.918 14:52:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:12.918 14:52:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.918 ************************************ 00:36:12.918 END TEST nvmf_host 00:36:12.918 ************************************ 00:36:12.918 14:52:04 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:36:12.918 14:52:04 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:36:12.918 14:52:04 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:36:12.918 14:52:04 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:36:12.918 14:52:04 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:12.918 14:52:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:12.918 ************************************ 00:36:12.918 START TEST nvmf_target_core_interrupt_mode 00:36:12.918 ************************************ 00:36:12.918 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:36:12.918 * Looking for test storage... 00:36:12.918 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:36:12.918 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:36:12.918 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # lcov --version 00:36:12.918 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:36:12.918 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:36:12.918 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:12.918 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:12.918 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:12.918 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:36:12.918 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:36:12.918 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:36:12.918 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:36:12.918 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:36:12.918 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:36:12.918 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:36:12.918 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:12.918 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:36:12.918 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:36:12.918 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:12.918 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:12.918 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:36:12.918 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:36:12.918 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:12.918 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:36:12.918 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:36:12.918 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:36:12.918 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:36:12.918 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:12.918 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:36:12.918 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:36:12.918 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:12.918 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:12.918 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:36:12.918 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:12.918 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:36:12.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:12.918 --rc genhtml_branch_coverage=1 00:36:12.918 --rc genhtml_function_coverage=1 00:36:12.918 --rc genhtml_legend=1 00:36:12.918 --rc geninfo_all_blocks=1 00:36:12.918 --rc geninfo_unexecuted_blocks=1 00:36:12.918 00:36:12.918 ' 00:36:12.918 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:36:12.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:12.918 --rc genhtml_branch_coverage=1 00:36:12.918 --rc genhtml_function_coverage=1 00:36:12.918 --rc genhtml_legend=1 00:36:12.918 --rc geninfo_all_blocks=1 00:36:12.918 --rc geninfo_unexecuted_blocks=1 00:36:12.918 00:36:12.918 ' 00:36:12.918 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:36:12.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:12.918 --rc genhtml_branch_coverage=1 00:36:12.918 --rc genhtml_function_coverage=1 00:36:12.918 --rc genhtml_legend=1 00:36:12.918 --rc geninfo_all_blocks=1 00:36:12.918 --rc geninfo_unexecuted_blocks=1 00:36:12.918 00:36:12.918 ' 00:36:12.918 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:36:12.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:12.918 --rc genhtml_branch_coverage=1 00:36:12.918 --rc genhtml_function_coverage=1 00:36:12.918 --rc genhtml_legend=1 00:36:12.918 --rc geninfo_all_blocks=1 00:36:12.918 --rc geninfo_unexecuted_blocks=1 00:36:12.918 00:36:12.918 ' 00:36:12.918 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:36:12.918 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:36:12.918 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:12.918 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:36:12.918 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:12.918 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:12.918 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:12.918 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:12.918 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:12.918 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:12.918 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:12.918 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:12.918 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:12.918 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:12.918 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:12.918 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:12.918 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:12.918 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:12.918 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:12.918 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:12.918 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:12.918 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:36:12.918 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:12.918 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:12.918 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:12.918 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:12.919 ************************************ 00:36:12.919 START TEST nvmf_abort 00:36:12.919 ************************************ 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:36:12.919 * Looking for test storage... 00:36:12.919 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # lcov --version 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:36:12.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:12.919 --rc genhtml_branch_coverage=1 00:36:12.919 --rc genhtml_function_coverage=1 00:36:12.919 --rc genhtml_legend=1 00:36:12.919 --rc geninfo_all_blocks=1 00:36:12.919 --rc geninfo_unexecuted_blocks=1 00:36:12.919 00:36:12.919 ' 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:36:12.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:12.919 --rc genhtml_branch_coverage=1 00:36:12.919 --rc genhtml_function_coverage=1 00:36:12.919 --rc genhtml_legend=1 00:36:12.919 --rc geninfo_all_blocks=1 00:36:12.919 --rc geninfo_unexecuted_blocks=1 00:36:12.919 00:36:12.919 ' 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:36:12.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:12.919 --rc genhtml_branch_coverage=1 00:36:12.919 --rc genhtml_function_coverage=1 00:36:12.919 --rc genhtml_legend=1 00:36:12.919 --rc geninfo_all_blocks=1 00:36:12.919 --rc geninfo_unexecuted_blocks=1 00:36:12.919 00:36:12.919 ' 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:36:12.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:12.919 --rc genhtml_branch_coverage=1 00:36:12.919 --rc genhtml_function_coverage=1 00:36:12.919 --rc genhtml_legend=1 00:36:12.919 --rc geninfo_all_blocks=1 00:36:12.919 --rc geninfo_unexecuted_blocks=1 00:36:12.919 00:36:12.919 ' 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:12.919 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:36:12.920 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:12.920 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:12.920 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:12.920 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:12.920 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:12.920 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:12.920 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:36:12.920 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:12.920 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:36:12.920 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:12.920 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:12.920 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:12.920 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:12.920 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:12.920 14:52:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:12.920 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:12.920 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:12.920 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:12.920 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:12.920 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:12.920 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:36:12.920 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:36:12.920 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:36:12.920 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:12.920 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@472 -- # prepare_net_devs 00:36:12.920 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@434 -- # local -g is_hw=no 00:36:12.920 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@436 -- # remove_spdk_ns 00:36:12.920 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:12.920 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:12.920 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:12.920 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:36:12.920 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:36:12.920 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:36:12.920 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:14.823 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:14.823 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:36:14.823 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:14.823 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:14.823 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:14.823 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:14.823 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:14.823 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:36:14.823 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:14.823 14:52:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:36:14.823 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:36:14.823 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:36:14.823 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:36:14.823 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:36:14.823 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:36:14.823 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:14.823 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:14.823 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:14.823 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:14.823 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:14.823 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:14.823 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:14.823 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:14.823 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:14.823 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:14.823 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:14.823 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:36:14.823 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:36:14.823 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:36:14.823 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:36:14.823 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:36:14.823 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:36:14.823 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:36:14.823 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:14.823 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:14.823 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:36:14.823 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:36:14.823 14:52:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:14.823 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:14.823 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:36:14.823 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:36:14.823 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:14.823 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:14.823 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:36:14.823 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:36:14.823 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:14.823 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:14.823 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:36:14.823 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:36:14.823 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:36:14.824 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:36:14.824 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:36:14.824 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:14.824 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:36:14.824 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:14.824 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ up == up ]] 00:36:14.824 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:36:14.824 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:14.824 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:14.824 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:14.824 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:36:14.824 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:36:14.824 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:14.824 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:36:14.824 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:14.824 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ up == 
up ]] 00:36:14.824 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:36:14.824 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:14.824 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:14.824 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:14.824 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:36:14.824 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:36:14.824 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # is_hw=yes 00:36:14.824 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:36:14.824 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:36:14.824 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:36:14.824 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:14.824 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:14.824 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:14.824 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:14.824 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:14.824 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:14.824 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:14.824 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:14.824 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:14.824 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:14.824 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:14.824 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:14.824 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:14.824 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:14.824 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:14.824 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:14.824 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:14.824 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:14.824 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:14.824 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:14.824 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:14.824 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:14.824 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:14.824 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:14.824 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:36:14.824 00:36:14.824 --- 10.0.0.2 ping statistics --- 00:36:14.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:14.824 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:36:14.824 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:15.083 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:15.083 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:36:15.083 00:36:15.083 --- 10.0.0.1 ping statistics --- 00:36:15.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:15.083 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:36:15.083 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:15.083 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # return 0 00:36:15.083 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:36:15.083 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:15.083 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:36:15.083 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:36:15.083 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:15.083 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:36:15.083 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:36:15.083 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:36:15.083 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:36:15.083 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:15.083 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:15.083 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@505 -- # nvmfpid=1539121 00:36:15.083 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:36:15.083 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@506 -- # waitforlisten 1539121 00:36:15.083 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 1539121 ']' 00:36:15.083 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:15.083 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:15.083 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:15.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:15.083 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:15.083 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:15.083 [2024-11-02 14:52:06.954713] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:15.083 [2024-11-02 14:52:06.955772] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:36:15.083 [2024-11-02 14:52:06.955843] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:15.083 [2024-11-02 14:52:07.029795] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:15.083 [2024-11-02 14:52:07.123557] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:15.083 [2024-11-02 14:52:07.123635] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:15.083 [2024-11-02 14:52:07.123652] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:15.083 [2024-11-02 14:52:07.123683] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:15.083 [2024-11-02 14:52:07.123696] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:15.083 [2024-11-02 14:52:07.123785] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:36:15.083 [2024-11-02 14:52:07.123903] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:36:15.083 [2024-11-02 14:52:07.123906] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:36:15.342 [2024-11-02 14:52:07.224725] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:15.342 [2024-11-02 14:52:07.224903] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:15.342 [2024-11-02 14:52:07.224919] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:15.342 [2024-11-02 14:52:07.225209] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
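The trace above covers the whole rig for this test: nvmf_tcp_init moves the first e810 port (cvl_0_0) into a fresh network namespace with 10.0.0.2/24 while its peer (cvl_0_1) keeps 10.0.0.1/24 in the default namespace, an iptables rule opens TCP port 4420, and nvmfappstart then launches the SPDK target inside that namespace in interrupt mode and waits for its RPC socket. A condensed sketch of the same steps, assuming the interface names, addresses, and build path from this run and the default /var/tmp/spdk.sock RPC socket; the polling loop only approximates what the harness's waitforlisten helper does:

NS=cvl_0_0_ns_spdk
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Target-side port goes into its own namespace; the initiator port stays in the default one.
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# Admit NVMe/TCP traffic on 4420, tagged with SPDK_NVMF so teardown can strip it again.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# Launch nvmf_tgt in the namespace: shm id 0 (-i), all tracepoint groups (-e 0xFFFF),
# interrupt mode, reactors on cores 1-3 (-m 0xE).
ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
nvmfpid=$!
# Poll the RPC socket until the app answers.
until "$SPDK/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done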
00:36:15.342 14:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:15.342 14:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:36:15.342 14:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:36:15.342 14:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:15.342 14:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:15.342 14:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:15.342 14:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:36:15.342 14:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:15.342 14:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:15.342 [2024-11-02 14:52:07.280580] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:15.342 14:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:15.342 14:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:36:15.342 14:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:15.342 14:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:15.342 Malloc0 00:36:15.342 14:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:15.342 14:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:36:15.342 14:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:15.342 14:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:15.342 Delay0 00:36:15.342 14:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:15.342 14:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:36:15.342 14:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:15.342 14:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:15.342 14:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:15.342 14:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:36:15.342 14:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:15.342 14:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:15.342 14:52:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:15.342 14:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:15.342 14:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:15.342 14:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:15.342 [2024-11-02 14:52:07.344826] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:15.342 14:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:15.342 14:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:15.342 14:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:15.342 14:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:15.342 14:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:15.342 14:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:36:15.600 [2024-11-02 14:52:07.446913] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:36:17.499 Initializing NVMe Controllers 00:36:17.499 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:36:17.499 controller IO queue size 128 less than required 00:36:17.499 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:36:17.499 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:36:17.499 Initialization complete. Launching workers. 
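Collected from the rpc_cmd calls above, the target-side configuration for nvmf_abort reduces to six RPCs followed by the example abort tool on the initiator side. Continuing the sketch started earlier (same $SPDK path, default RPC socket); the comment on the delay bdev states the test's apparent intent rather than anything the log asserts:

RPC="$SPDK/scripts/rpc.py"

$RPC nvmf_create_transport -t tcp -o -u 8192 -a 256     # TCP transport with the options this run used
$RPC bdev_malloc_create 64 4096 -b Malloc0              # 64 MB RAM-backed bdev, 4096-byte blocks
# Delay bdev layered on Malloc0: the added latency keeps I/O in flight long
# enough for the abort requests below to find something to cancel.
$RPC bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# Initiator side, default namespace: short (-t 1) abort workload at queue depth 128 on core 0.
"$SPDK/build/examples/abort" -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128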
00:36:17.499 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 29264 00:36:17.499 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29321, failed to submit 66 00:36:17.499 success 29264, unsuccessful 57, failed 0 00:36:17.499 14:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:17.499 14:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.499 14:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:17.499 14:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.499 14:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:36:17.499 14:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:36:17.499 14:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # nvmfcleanup 00:36:17.499 14:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:36:17.499 14:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:17.499 14:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:36:17.499 14:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:17.499 14:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:17.499 rmmod nvme_tcp 00:36:17.499 rmmod nvme_fabrics 00:36:17.499 rmmod nvme_keyring 00:36:17.499 14:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:17.499 14:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:36:17.499 14:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:36:17.499 14:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@513 -- # '[' -n 1539121 ']' 00:36:17.499 14:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@514 -- # killprocess 1539121 00:36:17.499 14:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 1539121 ']' 00:36:17.499 14:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 1539121 00:36:17.499 14:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:36:17.499 14:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:17.499 14:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1539121 00:36:17.769 14:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:17.769 14:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:17.769 14:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1539121' 00:36:17.769 killing process with pid 1539121 
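Reading the abort counters above: the 123 completed plus 29,264 failed I/Os give 29,387 commands issued in total, which matches the 29,321 aborts submitted plus the 66 the tool could not submit, and the 29,264 successful aborts account exactly for the 29,264 failed I/Os (57 aborts were submitted but did not cancel anything, and 0 abort commands themselves failed). The trace around this point is the nvmftestfini teardown: delete the subsystem, unload the initiator-side modules, kill the target, and undo the iptables and namespace plumbing. A rough sketch of the equivalent manual steps, continuing the earlier sketch and assuming the harness helpers killprocess and _remove_spdk_ns amount to a plain kill and a namespace delete here:

$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
sync
modprobe -v -r nvme-tcp        # also drags out nvme_fabrics/nvme_keyring, as the rmmod lines show
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"
# Strip only the iptables rules tagged SPDK_NVMF, keep everything else.
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip netns delete "$NS"          # assumption: what _remove_spdk_ns boils down to in this run
ip -4 addr flush cvl_0_1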
00:36:17.769 14:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@969 -- # kill 1539121 00:36:17.769 14:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@974 -- # wait 1539121 00:36:17.769 14:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:36:17.769 14:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:36:17.769 14:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:36:17.769 14:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:36:17.769 14:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@787 -- # iptables-save 00:36:17.769 14:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:36:17.769 14:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@787 -- # iptables-restore 00:36:18.039 14:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:18.039 14:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:18.039 14:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:18.039 14:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:18.039 14:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:19.941 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:19.941 00:36:19.941 real 0m7.153s 00:36:19.941 user 0m9.015s 00:36:19.941 sys 0m2.861s 00:36:19.941 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:19.941 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:19.941 ************************************ 00:36:19.941 END TEST nvmf_abort 00:36:19.941 ************************************ 00:36:19.941 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:36:19.941 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:36:19.941 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:19.941 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:19.941 ************************************ 00:36:19.941 START TEST nvmf_ns_hotplug_stress 00:36:19.941 ************************************ 00:36:19.941 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:36:19.941 * Looking for test storage... 
00:36:19.941 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:19.941 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:36:19.941 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:36:19.941 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:36:20.200 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:36:20.200 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:20.200 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:20.200 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:20.200 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:36:20.200 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:36:20.200 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:36:20.200 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:36:20.200 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:36:20.200 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:36:20.200 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:36:20.200 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:20.200 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:36:20.200 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:36:20.200 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:20.201 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:20.201 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:36:20.201 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:36:20.201 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:20.201 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:36:20.201 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:36:20.201 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:36:20.201 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:36:20.201 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:20.201 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:36:20.201 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:36:20.201 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:20.201 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:20.201 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:36:20.201 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:20.201 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:36:20.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:20.201 --rc genhtml_branch_coverage=1 00:36:20.201 --rc genhtml_function_coverage=1 00:36:20.201 --rc genhtml_legend=1 00:36:20.201 --rc geninfo_all_blocks=1 00:36:20.201 --rc geninfo_unexecuted_blocks=1 00:36:20.201 00:36:20.201 ' 00:36:20.201 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:36:20.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:20.201 --rc genhtml_branch_coverage=1 00:36:20.201 --rc genhtml_function_coverage=1 00:36:20.201 --rc genhtml_legend=1 00:36:20.201 --rc geninfo_all_blocks=1 00:36:20.201 --rc geninfo_unexecuted_blocks=1 00:36:20.201 00:36:20.201 ' 00:36:20.201 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:36:20.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:20.201 --rc genhtml_branch_coverage=1 00:36:20.201 --rc genhtml_function_coverage=1 00:36:20.201 --rc genhtml_legend=1 00:36:20.201 --rc geninfo_all_blocks=1 00:36:20.201 --rc geninfo_unexecuted_blocks=1 00:36:20.201 00:36:20.201 ' 00:36:20.201 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:36:20.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:20.201 --rc genhtml_branch_coverage=1 00:36:20.201 --rc genhtml_function_coverage=1 
00:36:20.201 --rc genhtml_legend=1 00:36:20.201 --rc geninfo_all_blocks=1 00:36:20.201 --rc geninfo_unexecuted_blocks=1 00:36:20.201 00:36:20.201 ' 00:36:20.201 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:20.201 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:36:20.201 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:20.201 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:20.201 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:20.201 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:20.201 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:20.201 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:20.201 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:20.201 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:20.201 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:20.201 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:20.201 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:20.201 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:20.201 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:20.201 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:20.201 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:20.201 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:20.201 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:20.201 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:36:20.201 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:20.201 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:20.201 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:36:20.201 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:20.201 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:20.201 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:20.201 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:36:20.201 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:20.201 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:36:20.201 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:20.201 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:20.201 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:20.201 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:20.201 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:20.201 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:20.201 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:20.201 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:20.201 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:20.201 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:20.201 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:20.201 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:36:20.201 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:36:20.201 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:20.201 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # prepare_net_devs 00:36:20.202 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@434 -- # local -g is_hw=no 00:36:20.202 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # remove_spdk_ns 00:36:20.202 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:20.202 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:20.202 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:20.202 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:36:20.202 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:36:20.202 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:36:20.202 14:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:22.106 14:52:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:36:22.106 14:52:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:22.106 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:22.106 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:36:22.106 14:52:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:22.106 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:22.106 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # is_hw=yes 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:22.106 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:22.107 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:22.107 14:52:14 
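Just before this point the trace enumerates the supported NIC PCI IDs (Intel E810/X722 and several Mellanox devices) and then resolves each matching PCI function to its kernel network interface by listing /sys/bus/pci/devices/<bdf>/net/, which is how the two E810 ports turn into cvl_0_0 and cvl_0_1. A small sketch of that sysfs lookup; the helper name net_ifaces_for_pci_id is illustrative, and only the 0x8086:0x159b pair is taken from the trace.

  # Print the netdev name(s) behind every PCI function matching a vendor:device pair.
  net_ifaces_for_pci_id() {
      local vendor=$1 device=$2 dev
      for dev in /sys/bus/pci/devices/*; do
          [[ $(< "$dev/vendor") == "$vendor" ]] || continue
          [[ $(< "$dev/device") == "$device" ]] || continue
          ls "$dev/net" 2>/dev/null    # only network functions expose a net/ subdirectory
      done
  }

  net_ifaces_for_pci_id 0x8086 0x159b    # prints cvl_0_0 and cvl_0_1 on this test bed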
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:22.107 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:22.107 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:22.107 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:22.107 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:22.107 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:22.107 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:22.107 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:22.373 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:22.373 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:22.373 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:22.373 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:22.373 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:22.373 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:22.373 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:22.374 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:22.374 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:22.374 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.286 ms 00:36:22.374 00:36:22.374 --- 10.0.0.2 ping statistics --- 00:36:22.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:22.374 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:36:22.374 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:22.374 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:22.374 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:36:22.374 00:36:22.374 --- 10.0.0.1 ping statistics --- 00:36:22.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:22.374 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:36:22.374 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:22.374 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # return 0 00:36:22.374 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:36:22.374 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:22.374 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:36:22.374 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:36:22.374 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:22.374 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:36:22.374 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:36:22.374 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:36:22.374 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:36:22.374 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:22.374 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:22.374 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # nvmfpid=1541460 00:36:22.374 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:36:22.374 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # waitforlisten 1541460 00:36:22.374 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 1541460 ']' 00:36:22.374 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:22.374 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:22.374 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:22.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:36:22.374 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:22.374 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:22.374 [2024-11-02 14:52:14.321511] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:22.374 [2024-11-02 14:52:14.322535] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:36:22.374 [2024-11-02 14:52:14.322604] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:22.374 [2024-11-02 14:52:14.392426] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:22.684 [2024-11-02 14:52:14.487163] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:22.684 [2024-11-02 14:52:14.487223] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:22.684 [2024-11-02 14:52:14.487247] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:22.684 [2024-11-02 14:52:14.487269] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:22.684 [2024-11-02 14:52:14.487282] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:22.684 [2024-11-02 14:52:14.487354] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:36:22.684 [2024-11-02 14:52:14.487417] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:36:22.684 [2024-11-02 14:52:14.487419] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:36:22.684 [2024-11-02 14:52:14.595716] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:22.684 [2024-11-02 14:52:14.595896] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:22.684 [2024-11-02 14:52:14.595898] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:22.684 [2024-11-02 14:52:14.596200] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
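The nvmf_tcp_init sequence above builds a self-contained point-to-point topology out of the two E810 ports: cvl_0_0 is moved into a fresh network namespace and becomes the target side at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, an iptables rule opens TCP port 4420 on the initiator interface, both directions are ping-tested, and nvmf_tgt is then launched inside the namespace in interrupt mode. Condensed from the commands captured in the trace (full paths, the preliminary address flushes, the iptables comment tag, and the harness's backgrounding of the app are omitted):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                   # root namespace -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # namespace -> initiator
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE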
00:36:22.684 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:22.684 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:36:22.684 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:36:22.684 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:22.684 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:22.684 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:22.684 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:36:22.684 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:36:22.942 [2024-11-02 14:52:14.904145] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:22.942 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:36:23.200 14:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:23.458 [2024-11-02 14:52:15.460570] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:23.458 14:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:23.717 14:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:36:23.974 Malloc0 00:36:23.975 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:36:24.540 Delay0 00:36:24.540 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:24.540 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:36:24.798 NULL1 00:36:24.798 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
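With the target app listening on /var/tmp/spdk.sock, the trace above provisions it over JSON-RPC: a TCP transport, one subsystem that accepts any host and allows up to 10 namespaces, data and discovery listeners on 10.0.0.2:4420, and three bdevs — a RAM-backed Malloc0, a Delay0 wrapper that injects roughly one second of latency around it (the four 1000000 values are microseconds), and a 1000 MiB NULL1 — with Delay0 and NULL1 attached as namespaces. The same sequence, stripped to the bare calls (rpc.py stands for the full scripts/rpc.py path used in the log):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  rpc.py bdev_malloc_create 32 512 -b Malloc0                      # 32 MiB RAM-backed bdev, 512 B blocks
  rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc.py bdev_null_create NULL1 1000 512                           # 1000 MiB null bdev, 512 B blocks
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # becomes NSID 1
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1    # becomes NSID 2

The rest of the trace below is the hot-plug stress itself: spdk_nvme_perf reads from both namespaces while the script repeatedly removes and re-adds NSID 1 and grows NULL1, which is why the perf job keeps logging suppressed "Read completed with error" messages.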
00:36:25.363 14:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1541758 00:36:25.363 14:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1541758 00:36:25.363 14:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:25.363 14:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:36:26.296 Read completed with error (sct=0, sc=11) 00:36:26.296 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:26.296 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:26.296 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:26.296 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:26.553 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:26.553 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:26.553 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:26.553 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:36:26.553 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:36:26.809 true 00:36:26.809 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1541758 00:36:26.809 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:27.742 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:28.000 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:36:28.000 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:36:28.257 true 00:36:28.257 14:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1541758 00:36:28.257 14:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:28.515 14:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:28.772 14:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:36:28.772 14:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:36:29.030 true 00:36:29.030 14:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1541758 00:36:29.030 14:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:29.287 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:29.545 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:36:29.545 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:36:29.802 true 00:36:29.802 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1541758 00:36:29.803 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:30.735 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:30.735 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:30.735 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:30.993 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:36:30.993 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:36:31.250 true 00:36:31.250 14:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1541758 00:36:31.250 14:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:31.508 14:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:31.765 14:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:36:31.765 14:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:36:32.023 true 00:36:32.023 14:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1541758 00:36:32.023 14:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:32.955 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:32.955 14:52:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:33.212 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:36:33.212 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:36:33.469 true 00:36:33.469 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1541758 00:36:33.469 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:33.727 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:33.985 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:36:33.985 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:36:34.242 true 00:36:34.242 14:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1541758 00:36:34.242 14:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:35.176 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:35.176 14:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:35.176 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:35.176 14:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:36:35.176 14:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:36:35.433 true 00:36:35.691 14:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 
1541758 00:36:35.691 14:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:35.948 14:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:36.206 14:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:36:36.206 14:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:36:36.463 true 00:36:36.463 14:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1541758 00:36:36.463 14:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:36.721 14:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:36.978 14:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:36:36.978 14:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:36:37.236 true 00:36:37.236 14:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1541758 00:36:37.236 14:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:38.606 14:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:38.606 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:38.606 14:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:36:38.606 14:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:36:38.864 true 00:36:38.864 14:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1541758 00:36:38.864 14:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:39.122 14:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:39.379 14:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:36:39.379 14:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:36:39.637 true 00:36:39.637 14:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1541758 00:36:39.637 14:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:39.894 14:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:40.152 14:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:36:40.152 14:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:36:40.409 true 00:36:40.409 14:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1541758 00:36:40.409 14:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:41.342 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:41.342 14:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:41.600 14:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:36:41.600 14:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:36:41.857 true 00:36:41.858 14:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1541758 00:36:41.858 14:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:42.423 14:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:42.423 14:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:36:42.423 14:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:36:42.680 true 00:36:42.680 
14:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1541758 00:36:42.680 14:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:42.938 14:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:43.195 14:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:36:43.195 14:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:36:43.760 true 00:36:43.760 14:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1541758 00:36:43.760 14:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:44.692 14:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:44.692 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:44.950 14:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:36:44.950 14:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:36:45.207 true 00:36:45.207 14:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1541758 00:36:45.207 14:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:45.464 14:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:45.722 14:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:36:45.722 14:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:36:45.979 true 00:36:45.979 14:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1541758 00:36:45.979 14:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:46.245 14:52:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:46.503 14:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:36:46.503 14:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:36:46.760 true 00:36:46.760 14:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1541758 00:36:46.760 14:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:47.691 14:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:47.691 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:47.948 14:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:36:47.948 14:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:36:48.205 true 00:36:48.205 14:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1541758 00:36:48.205 14:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:48.462 14:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:48.719 14:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:36:48.719 14:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:36:48.976 true 00:36:48.976 14:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1541758 00:36:48.976 14:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:49.234 14:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:49.500 14:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:36:49.500 14:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:36:49.806 true 00:36:49.806 14:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1541758 00:36:49.806 14:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:50.738 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:50.738 14:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:50.996 14:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:36:50.996 14:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:36:51.560 true 00:36:51.560 14:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1541758 00:36:51.560 14:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:51.560 14:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:51.818 14:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:36:51.818 14:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:36:52.075 true 00:36:52.333 14:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1541758 00:36:52.333 14:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:52.591 14:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:52.848 14:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:36:52.848 14:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:36:53.105 true 00:36:53.106 14:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1541758 00:36:53.106 14:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:54.038 14:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:54.296 14:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:36:54.296 14:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:36:54.553 true 00:36:54.553 14:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1541758 00:36:54.553 14:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:54.811 14:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:55.068 14:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:36:55.068 14:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:36:55.326 true 00:36:55.326 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1541758 00:36:55.326 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:55.326 Initializing NVMe Controllers 00:36:55.326 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:55.326 Controller IO queue size 128, less than required. 00:36:55.326 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:36:55.326 Controller IO queue size 128, less than required. 00:36:55.326 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:36:55.326 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:36:55.326 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:36:55.326 Initialization complete. Launching workers. 
00:36:55.326 ========================================================
00:36:55.326 Latency(us)
00:36:55.326 Device Information : IOPS MiB/s Average min max
00:36:55.326 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 593.74 0.29 89313.89 3616.44 1016650.43
00:36:55.326 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 8671.92 4.23 14759.96 2263.98 447974.26
00:36:55.326 ========================================================
00:36:55.326 Total : 9265.66 4.52 19537.33 2263.98 1016650.43
00:36:55.326
00:36:55.584 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:36:55.841 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:36:55.841 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:36:56.099 true
00:36:56.099 14:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1541758
00:36:56.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1541758) - No such process
00:36:56.099 14:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1541758
00:36:56.099 14:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:36:56.357 14:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:36:56.615 14:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:36:56.615 14:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:36:56.615 14:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:36:56.615 14:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:36:56.615 14:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:36:56.873 null0
00:36:56.873 14:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:36:56.873 14:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:36:56.873 14:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:36:57.131 null1
00:36:57.131 14:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:36:57.131
14:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:57.131 14:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:36:57.389 null2 00:36:57.389 14:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:57.389 14:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:57.389 14:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:36:57.647 null3 00:36:57.647 14:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:57.647 14:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:57.647 14:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:36:57.911 null4 00:36:57.911 14:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:57.911 14:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:57.911 14:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:36:58.169 null5 00:36:58.169 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:58.169 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:58.169 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:36:58.426 null6 00:36:58.684 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:58.684 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:58.684 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:36:58.943 null7 00:36:58.943 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:58.943 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:58.943 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:36:58.943 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:58.943 14:52:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:36:58.943 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:36:58.943 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:58.943 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:36:58.943 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:58.943 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:58.943 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:58.943 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:58.943 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:36:58.943 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:36:58.943 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:58.943 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:36:58.943 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:58.943 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:58.943 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:58.943 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:58.943 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:36:58.943 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:36:58.943 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:58.943 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:36:58.943 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:58.943 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:58.943 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:58.943 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:58.943 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:36:58.943 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:36:58.943 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:58.943 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:36:58.943 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:58.943 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:58.943 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:58.943 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:58.943 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:36:58.944 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:36:58.944 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:58.944 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:36:58.944 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:58.944 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:58.944 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:58.944 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:58.944 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:36:58.944 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:36:58.944 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:58.944 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:36:58.944 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:58.944 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:58.944 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:58.944 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:58.944 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:36:58.944 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:36:58.944 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:58.944 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:36:58.944 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:58.944 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:58.944 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:58.944 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:58.944 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:36:58.944 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:36:58.944 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:58.944 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:36:58.944 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:58.944 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:58.944 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1545763 1545764 1545766 1545768 1545770 1545772 1545774 1545776 00:36:58.944 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:58.944 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:59.202 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:59.202 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:59.202 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:59.202 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:59.202 14:52:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:59.202 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:59.202 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:59.202 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:59.461 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:59.461 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:59.461 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:59.461 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:59.461 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:59.461 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:59.461 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:59.461 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:59.461 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:59.461 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:59.461 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:59.461 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:59.461 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:59.461 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:59.461 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 
nqn.2016-06.io.spdk:cnode1 null4 00:36:59.461 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:59.461 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:59.461 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:59.461 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:59.461 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:59.461 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:59.461 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:59.461 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:59.461 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:59.719 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:59.719 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:59.719 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:59.719 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:59.719 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:59.719 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:59.719 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:59.719 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 
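The eight parallel workers traced above all execute the same add_remove helper, whose shape can be read directly off the ns_hotplug_stress.sh@14-@18 xtrace records (local nsid/bdev assignment, a 10-iteration loop, one nvmf_subsystem_add_ns followed by one nvmf_subsystem_remove_ns per pass). The sketch below is reconstructed from those records rather than copied from the script itself, and $rpc_py is shorthand for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path shown in the log:

    # Reconstructed from the sh@14-@18 trace records above; an approximation, not the literal script.
    add_remove() {
        local nsid=$1 bdev=$2                 # sh@14, e.g. nsid=1 bdev=null0
        for ((i = 0; i < 10; i++)); do        # sh@16
            # hot-add the null bdev as namespace $nsid, then remove it again
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # sh@17
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # sh@18
        done
    }

Because eight of these loops run against the same subsystem at once, the add_ns and remove_ns records in the trace interleave in no particular order.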
00:36:59.978 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:59.978 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:59.978 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:59.978 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:59.978 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:59.978 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:59.978 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:59.978 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:59.978 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:59.978 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:59.978 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:59.978 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:59.978 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:59.978 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:59.978 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:59.978 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:59.978 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:59.978 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:59.978 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:59.978 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:59.978 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:59.978 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:59.978 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:59.978 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:00.236 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:00.237 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:00.237 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:00.237 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:00.237 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:00.237 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:00.237 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:00.237 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:00.495 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:00.495 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:00.495 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:00.495 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:00.495 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:00.495 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:00.495 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:00.495 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:00.495 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:00.495 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:00.495 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:00.495 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:00.495 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:00.495 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:00.495 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:00.753 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:00.753 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:00.753 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:00.753 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:00.753 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:00.753 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:00.753 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:00.753 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:00.754 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:01.011 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:01.011 14:52:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:01.011 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:01.011 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:01.011 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:01.011 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:01.012 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:01.012 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:01.270 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:01.270 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:01.270 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:01.270 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:01.270 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:01.270 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:01.270 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:01.270 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:01.270 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:01.270 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:01.270 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:01.270 14:52:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:01.270 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:01.270 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:01.270 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:01.270 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:01.270 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:01.270 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:01.270 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:01.270 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:01.270 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:01.270 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:01.270 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:01.270 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:01.529 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:01.529 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:01.529 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:01.529 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:01.529 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:01.529 
14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:01.529 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:01.529 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:01.787 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:01.787 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:01.787 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:01.787 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:01.787 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:01.787 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:01.787 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:01.787 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:01.787 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:01.787 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:01.787 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:01.787 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:01.787 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:01.787 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:01.787 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:01.787 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:01.787 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:01.787 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:01.787 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:01.787 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:01.787 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:01.787 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:01.787 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:01.787 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:02.046 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:02.046 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:02.046 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:02.046 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:02.046 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:02.046 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:02.046 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:02.046 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:02.304 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:02.304 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:37:02.304 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:02.304 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:02.304 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:02.304 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:02.304 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:02.304 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:02.304 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:02.304 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:02.304 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:02.304 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:02.304 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:02.304 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:02.304 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:02.304 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:02.304 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:02.304 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:02.304 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:02.304 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:02.305 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:02.305 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:02.305 
14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:02.305 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:02.871 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:02.871 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:02.871 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:02.871 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:02.871 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:02.871 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:02.871 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:02.871 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:03.129 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:03.129 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:03.129 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:03.129 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:03.129 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:03.129 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:03.129 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:03.129 14:52:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:03.129 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:03.129 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:03.129 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:03.129 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:03.129 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:03.129 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:03.129 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:03.129 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:03.129 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:03.129 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:03.129 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:03.129 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:03.129 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:03.129 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:03.129 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:03.129 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:03.388 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:03.388 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:03.388 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:03.388 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:03.388 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:03.388 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:03.388 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:03.388 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:03.646 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:03.646 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:03.646 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:03.646 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:03.646 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:03.646 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:03.646 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:03.646 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:03.646 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:03.646 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:03.646 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:03.646 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:03.646 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:03.646 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:03.646 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:03.646 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:03.646 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:03.646 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:03.646 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:03.646 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:03.646 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:03.646 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:03.646 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:03.646 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:03.904 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:03.904 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:03.905 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:03.905 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:03.905 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:03.905 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:03.905 14:52:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:03.905 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:04.163 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:04.163 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:04.163 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:04.163 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:04.163 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:04.163 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:04.163 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:04.163 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:04.163 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:04.163 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:04.163 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:04.163 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:04.163 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:04.163 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:04.163 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:04.163 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:04.163 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:04.163 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:04.163 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:04.163 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:04.163 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:04.163 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:04.163 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:04.163 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:04.421 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:04.421 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:04.421 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:04.421 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:04.421 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:04.421 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:04.421 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:04.421 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:04.679 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:04.680 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:04.680 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:04.680 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:04.680 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:04.680 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:04.680 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:04.680 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:04.680 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:04.680 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:04.680 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:04.680 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:04.680 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:04.680 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:04.680 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:04.680 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:04.938 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:37:04.938 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:37:04.938 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # nvmfcleanup 00:37:04.938 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:37:04.938 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:04.938 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:37:04.938 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:04.938 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:04.938 rmmod nvme_tcp 00:37:04.938 rmmod nvme_fabrics 00:37:04.938 rmmod nvme_keyring 00:37:04.938 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:04.938 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:37:04.938 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:37:04.938 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@513 -- # '[' -n 1541460 ']' 00:37:04.938 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # killprocess 1541460 00:37:04.938 14:52:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 1541460 ']' 00:37:04.938 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 1541460 00:37:04.938 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:37:04.938 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:04.938 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1541460 00:37:04.938 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:04.938 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:04.938 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1541460' 00:37:04.938 killing process with pid 1541460 00:37:04.938 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 1541460 00:37:04.938 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 1541460 00:37:05.197 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:37:05.197 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:37:05.197 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:37:05.197 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:37:05.197 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # iptables-save 00:37:05.197 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:37:05.197 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # iptables-restore 00:37:05.197 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:05.197 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:05.197 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:05.197 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:05.197 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:07.099 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:07.099 00:37:07.099 real 0m47.227s 00:37:07.099 user 3m12.236s 00:37:07.099 sys 0m24.289s 00:37:07.099 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:07.099 14:52:59 
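nvmftestfini then tears the target environment down: the host-side NVMe/TCP modules are unloaded (the rmmod lines above show nvme_tcp, nvme_fabrics and nvme_keyring going away), the nvmf_tgt process (pid 1541460, reactor_1) is killed, the SPDK-tagged iptables rules are dropped, the SPDK network namespace is removed and the initiator-side address is flushed. A hedged sketch of the same cleanup, with the helper steps expanded into plain commands:

    # Sketch of the nvmftestfini sequence above; interface and namespace
    # names are the ones used in this particular run.
    modprobe -v -r nvme-tcp          # pulls out nvme_tcp, nvme_fabrics, nvme_keyring
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"                  # $nvmfpid: the nvmf_tgt process id
    while kill -0 "$nvmfpid" 2>/dev/null; do sleep 0.1; done
    # Restore iptables minus the SPDK_NVMF-tagged ACCEPT rules.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk  # stand-in for remove_spdk_ns (assumed)
    ip -4 addr flush cvl_0_1         # release the initiator-side address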
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:37:07.099 ************************************ 00:37:07.099 END TEST nvmf_ns_hotplug_stress 00:37:07.099 ************************************ 00:37:07.358 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:37:07.358 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:37:07.358 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:07.358 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:07.358 ************************************ 00:37:07.358 START TEST nvmf_delete_subsystem 00:37:07.358 ************************************ 00:37:07.358 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:37:07.358 * Looking for test storage... 00:37:07.358 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:07.358 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:37:07.358 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lcov --version 00:37:07.358 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:37:07.358 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:37:07.358 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:07.358 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:07.358 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:07.358 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:37:07.358 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:37:07.358 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:37:07.358 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:37:07.358 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:37:07.358 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:37:07.358 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:37:07.358 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:07.358 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:37:07.358 14:52:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:37:07.358 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:07.358 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:07.358 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:37:07.358 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:37:07.358 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:07.358 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:37:07.358 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:37:07.358 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:37:07.358 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:37:07.358 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:07.358 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:37:07.358 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:37:07.358 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:07.358 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:07.358 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:37:07.358 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:07.358 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:37:07.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:07.358 --rc genhtml_branch_coverage=1 00:37:07.358 --rc genhtml_function_coverage=1 00:37:07.358 --rc genhtml_legend=1 00:37:07.358 --rc geninfo_all_blocks=1 00:37:07.358 --rc geninfo_unexecuted_blocks=1 00:37:07.358 00:37:07.358 ' 00:37:07.358 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:37:07.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:07.358 --rc genhtml_branch_coverage=1 00:37:07.358 --rc genhtml_function_coverage=1 00:37:07.358 --rc genhtml_legend=1 00:37:07.358 --rc geninfo_all_blocks=1 00:37:07.358 --rc geninfo_unexecuted_blocks=1 00:37:07.358 00:37:07.358 ' 00:37:07.358 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:37:07.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:07.358 --rc genhtml_branch_coverage=1 00:37:07.358 --rc genhtml_function_coverage=1 00:37:07.358 --rc genhtml_legend=1 00:37:07.358 --rc geninfo_all_blocks=1 00:37:07.358 --rc 
geninfo_unexecuted_blocks=1 00:37:07.358 00:37:07.358 ' 00:37:07.358 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:37:07.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:07.358 --rc genhtml_branch_coverage=1 00:37:07.358 --rc genhtml_function_coverage=1 00:37:07.358 --rc genhtml_legend=1 00:37:07.358 --rc geninfo_all_blocks=1 00:37:07.358 --rc geninfo_unexecuted_blocks=1 00:37:07.358 00:37:07.358 ' 00:37:07.358 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:07.358 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:37:07.358 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:07.358 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:07.358 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:07.358 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:07.358 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:07.358 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:07.358 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:07.358 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:07.358 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:07.358 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:07.358 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:07.358 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:07.358 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:07.358 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:07.358 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:07.359 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:07.359 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:07.359 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:37:07.359 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:37:07.359 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:07.359 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:07.359 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:07.359 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:07.359 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:07.359 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:37:07.359 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:07.359 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:37:07.359 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:07.359 14:52:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:07.359 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:07.359 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:07.359 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:07.359 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:07.359 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:07.359 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:07.359 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:07.359 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:07.359 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:37:07.359 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:37:07.359 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:07.359 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # prepare_net_devs 00:37:07.359 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@434 -- # local -g is_hw=no 00:37:07.359 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # remove_spdk_ns 00:37:07.359 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:07.359 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:07.359 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:07.359 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:37:07.359 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:37:07.359 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:37:07.359 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:09.259 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:09.259 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:37:09.259 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:09.259 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:09.259 14:53:01 
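Because the suite runs with --interrupt-mode, build_nvmf_app_args (traced above) appends that flag to the nvmf_tgt command line next to the shared-memory id and the 0xFFFF tracepoint mask. In spirit, and with the guard variable name assumed (the trace only shows the literal test '[' 1 -eq 1 ']'):

    # Rough shape of the argument assembly from nvmf/common.sh.
    NVMF_APP=(/path/to/spdk/build/bin/nvmf_tgt)      # placeholder path
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)      # shm id + tracepoint group mask
    if ((interrupt_mode)); then                      # variable name assumed
        NVMF_APP+=(--interrupt-mode)
    fi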
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:09.259 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:09.259 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:09.259 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:37:09.259 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:09.259 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:37:09.259 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:37:09.259 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:37:09.259 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:37:09.259 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:37:09.259 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:37:09.259 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:09.259 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:09.259 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:09.259 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:09.259 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:09.259 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:09.259 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:09.259 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:09.259 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:09.259 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:09.259 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:09.259 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:37:09.259 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:37:09.259 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:37:09.259 14:53:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:37:09.259 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:37:09.259 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:37:09.259 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:37:09.259 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:09.259 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:09.259 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:37:09.259 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:37:09.259 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:09.259 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:09.259 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:37:09.259 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:37:09.259 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:09.259 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:09.259 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:37:09.259 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:37:09.517 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:09.517 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:09.517 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:37:09.517 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:37:09.517 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:37:09.517 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:37:09.517 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:37:09.517 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:09.517 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:37:09.517 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:09.517 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:37:09.517 14:53:01 
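The device probe above matches Intel E810 ports by PCI id (0x8086:0x159b), then resolves each PCI address to its kernel net device through sysfs and checks that the link is up, which is how cvl_0_0 and cvl_0_1 are found. A small sketch of that resolution step (reading operstate is an assumption; the helper may check link state differently):

    # Map an NVMe-oF-capable NIC's PCI address to its net device(s).
    pci=0000:0a:00.0                          # first E810 port in this run
    for netdev in /sys/bus/pci/devices/$pci/net/*; do
        [[ -e $netdev ]] || continue
        name=${netdev##*/}                    # e.g. cvl_0_0
        state=$(cat "/sys/class/net/$name/operstate")
        echo "Found net device under $pci: $name ($state)"
    done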
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:37:09.517 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:09.517 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:09.517 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:09.517 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:37:09.517 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:37:09.517 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:09.517 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:37:09.517 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:09.517 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:37:09.517 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:37:09.518 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:09.518 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:09.518 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:09.518 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:37:09.518 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:37:09.518 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # is_hw=yes 00:37:09.518 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:37:09.518 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:37:09.518 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:37:09.518 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:09.518 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:09.518 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:09.518 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:09.518 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:09.518 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:09.518 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:09.518 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:09.518 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:09.518 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:09.518 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:09.518 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:09.518 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:09.518 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:09.518 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:09.518 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:09.518 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:09.518 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:09.518 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:09.518 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:09.518 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:09.518 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:09.518 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:09.518 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:09.518 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms 00:37:09.518 00:37:09.518 --- 10.0.0.2 ping statistics --- 00:37:09.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:09.518 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:37:09.518 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:09.518 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:09.518 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:37:09.518 00:37:09.518 --- 10.0.0.1 ping statistics --- 00:37:09.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:09.518 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:37:09.518 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:09.518 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # return 0 00:37:09.518 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:37:09.518 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:09.518 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:37:09.518 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:37:09.518 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:09.518 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:37:09.518 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:37:09.518 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:37:09.518 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:37:09.518 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:09.518 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:09.518 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # nvmfpid=1548637 00:37:09.518 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:37:09.518 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # waitforlisten 1548637 00:37:09.518 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 1548637 ']' 00:37:09.518 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:09.518 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:09.518 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:09.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
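nvmf_tcp_init, traced above, splits the two E810 ports across a network namespace so target and initiator talk over a real link: cvl_0_0 moves into cvl_0_0_ns_spdk with 10.0.0.2/24, cvl_0_1 stays in the root namespace with 10.0.0.1/24, TCP/4420 is opened in iptables, and both directions are ping-checked. Condensed:

    # Condensed from the nvmf_tcp_init trace (names as used in this run).
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF: allow NVMe/TCP'   # comment text abbreviated here
    ping -c 1 10.0.0.2                                     # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target ns -> root ns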
00:37:09.518 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:09.518 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:09.518 [2024-11-02 14:53:01.507774] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:09.518 [2024-11-02 14:53:01.508840] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:37:09.518 [2024-11-02 14:53:01.508894] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:09.776 [2024-11-02 14:53:01.573796] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:09.776 [2024-11-02 14:53:01.658842] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:09.776 [2024-11-02 14:53:01.658897] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:09.776 [2024-11-02 14:53:01.658921] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:09.776 [2024-11-02 14:53:01.658931] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:09.776 [2024-11-02 14:53:01.658941] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:09.776 [2024-11-02 14:53:01.659033] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:37:09.776 [2024-11-02 14:53:01.659037] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:37:09.776 [2024-11-02 14:53:01.741946] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:09.776 [2024-11-02 14:53:01.742008] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:09.776 [2024-11-02 14:53:01.742267] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
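[Editor's note] With the data path verified, nvmfappstart launches the SPDK target inside that namespace in interrupt mode on two cores (the reactor and thread notices above confirm both reactors and all spdk_threads come up in intr mode) and then blocks until the RPC socket answers. A rough equivalent of that launch and wait follows; the polling loop is an approximation of what waitforlisten does, not the helper itself:

  # Start the target in the namespace: app instance 0, tracepoint mask 0xFFFF,
  # interrupt mode, core mask 0x3 (cores 0 and 1).
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
  nvmfpid=$!

  # Approximation of waitforlisten: poll until the default RPC socket responds.
  for _ in $(seq 1 100); do
      if [ -S /var/tmp/spdk.sock ] && ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
          break
      fi
      sleep 0.1
  done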
00:37:09.776 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:09.776 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:37:09.776 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:37:09.776 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:09.776 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:09.776 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:09.776 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:09.776 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:09.776 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:09.776 [2024-11-02 14:53:01.791674] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:09.776 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:09.776 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:37:09.776 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:09.776 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:09.776 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:09.776 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:09.776 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:09.776 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:09.776 [2024-11-02 14:53:01.823894] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:09.777 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:09.777 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:37:09.777 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:09.777 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:10.034 NULL1 00:37:10.034 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:10.034 14:53:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:37:10.034 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:10.034 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:10.034 Delay0 00:37:10.034 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:10.034 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:10.034 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:10.034 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:10.034 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:10.034 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1548662 00:37:10.034 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:37:10.034 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:37:10.034 [2024-11-02 14:53:01.891682] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
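[Editor's note] delete_subsystem.sh then provisions the target over RPC and puts it under load: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 capped at 10 namespaces, a listener on 10.0.0.2:4420, a 1000 MiB null bdev wrapped in a delay bdev (Delay0, roughly one second of added latency per I/O), and that bdev attached as a namespace; spdk_nvme_perf is then started against it for five seconds at queue depth 128. The test drives these steps through its rpc_cmd wrapper; the sketch below assumes plain rpc.py against the same default socket is equivalent:

  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_null_create NULL1 1000 512                 # 1000 MiB backing bdev, 512 B blocks
  $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

  # Load generator: 5 s, QD 128, 70/30 random read/write, 512 B I/O, cores 2-3.
  ./build/bin/spdk_nvme_perf -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!

Because Delay0 keeps every I/O in flight for about a second, the queue is still full when nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 is issued two seconds in, which is what produces the long run of "completed with error (sct=0, sc=8)" completions and the perf report ending in "errors occurred" below.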
00:37:11.931 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:11.931 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:11.931 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:12.189 Read completed with error (sct=0, sc=8) 00:37:12.189 Read completed with error (sct=0, sc=8) 00:37:12.189 Read completed with error (sct=0, sc=8) 00:37:12.189 Write completed with error (sct=0, sc=8) 00:37:12.189 starting I/O failed: -6 00:37:12.190 Write completed with error (sct=0, sc=8) 00:37:12.190 Write completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 starting I/O failed: -6 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 starting I/O failed: -6 00:37:12.190 Write completed with error (sct=0, sc=8) 00:37:12.190 Write completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Write completed with error (sct=0, sc=8) 00:37:12.190 starting I/O failed: -6 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Write completed with error (sct=0, sc=8) 00:37:12.190 starting I/O failed: -6 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Write completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Write completed with error (sct=0, sc=8) 00:37:12.190 starting I/O failed: -6 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Write completed with error (sct=0, sc=8) 00:37:12.190 Write completed with error (sct=0, sc=8) 00:37:12.190 Write completed with error (sct=0, sc=8) 00:37:12.190 starting I/O failed: -6 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Write completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 starting I/O failed: -6 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 starting I/O failed: -6 00:37:12.190 Write completed with error (sct=0, sc=8) 00:37:12.190 Write completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Write completed with error (sct=0, sc=8) 00:37:12.190 starting I/O failed: -6 00:37:12.190 Write completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 [2024-11-02 14:53:04.017169] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218bed0 is same with the state(6) to be set 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 starting I/O failed: -6 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Write completed with error (sct=0, sc=8) 00:37:12.190 Write completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 starting 
I/O failed: -6 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 starting I/O failed: -6 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 starting I/O failed: -6 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 starting I/O failed: -6 00:37:12.190 Write completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 starting I/O failed: -6 00:37:12.190 Write completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Write completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 starting I/O failed: -6 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Write completed with error (sct=0, sc=8) 00:37:12.190 Write completed with error (sct=0, sc=8) 00:37:12.190 Write completed with error (sct=0, sc=8) 00:37:12.190 starting I/O failed: -6 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Write completed with error (sct=0, sc=8) 00:37:12.190 starting I/O failed: -6 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 starting I/O failed: -6 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Write completed with error (sct=0, sc=8) 00:37:12.190 [2024-11-02 14:53:04.018041] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa6d8000c00 is same with the state(6) to be set 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Write completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Write completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Write completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Write completed with error (sct=0, sc=8) 00:37:12.190 Write completed with error (sct=0, sc=8) 00:37:12.190 Write completed with error (sct=0, sc=8) 00:37:12.190 Write completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Write completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Write completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Write completed with error (sct=0, sc=8) 00:37:12.190 Read completed 
with error (sct=0, sc=8) 00:37:12.190 Write completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Write completed with error (sct=0, sc=8) 00:37:12.190 Write completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Write completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Write completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Write completed with error (sct=0, sc=8) 00:37:12.190 Write completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Write completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Write completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Write completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Write completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Write completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Write completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Write completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Write completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Write completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 
Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Write completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Read completed with error (sct=0, sc=8) 00:37:12.190 Write completed with error (sct=0, sc=8) 00:37:12.190 [2024-11-02 14:53:04.018530] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c290 is same with the state(6) to be set 00:37:13.124 [2024-11-02 14:53:04.993338] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2189d00 is same with the state(6) to be set 00:37:13.124 Read completed with error (sct=0, sc=8) 00:37:13.124 Read completed with error (sct=0, sc=8) 00:37:13.124 Write completed with error (sct=0, sc=8) 00:37:13.124 Read completed with error (sct=0, sc=8) 00:37:13.124 Write completed with error (sct=0, sc=8) 00:37:13.124 Write completed with error (sct=0, sc=8) 00:37:13.124 Read completed with error (sct=0, sc=8) 00:37:13.124 Write completed with error (sct=0, sc=8) 00:37:13.124 Read completed with error (sct=0, sc=8) 00:37:13.124 Read completed with error (sct=0, sc=8) 00:37:13.124 Read completed with error (sct=0, sc=8) 00:37:13.124 Read completed with error (sct=0, sc=8) 00:37:13.124 Write completed with error (sct=0, sc=8) 00:37:13.124 Write completed with error (sct=0, sc=8) 00:37:13.124 Read completed with error (sct=0, sc=8) 00:37:13.124 Read completed with error (sct=0, sc=8) 00:37:13.124 Read completed with error (sct=0, sc=8) 00:37:13.124 [2024-11-02 14:53:05.015952] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa6d800cfe0 is same with the state(6) to be set 00:37:13.124 Read completed with error (sct=0, sc=8) 00:37:13.124 Write completed with error (sct=0, sc=8) 00:37:13.124 Write completed with error (sct=0, sc=8) 00:37:13.124 Read completed with error (sct=0, sc=8) 00:37:13.124 Write completed with error (sct=0, sc=8) 00:37:13.124 Read completed with error (sct=0, sc=8) 00:37:13.124 Read completed with error (sct=0, sc=8) 00:37:13.124 Read completed with error (sct=0, sc=8) 00:37:13.124 Write completed with error (sct=0, sc=8) 00:37:13.124 Write completed with error (sct=0, sc=8) 00:37:13.124 Read completed with error (sct=0, sc=8) 00:37:13.124 Write completed with error (sct=0, sc=8) 00:37:13.124 Read completed with error (sct=0, sc=8) 00:37:13.124 Read completed with error (sct=0, sc=8) 00:37:13.124 Read completed with error (sct=0, sc=8) 00:37:13.124 Read completed with error (sct=0, sc=8) 00:37:13.124 Read completed with error (sct=0, sc=8) 00:37:13.124 Read completed with error (sct=0, sc=8) 00:37:13.124 Read completed with error (sct=0, sc=8) 00:37:13.124 [2024-11-02 14:53:05.021947] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c5c0 is same with the state(6) to be set 00:37:13.124 Read completed with error (sct=0, sc=8) 00:37:13.124 Read completed with error (sct=0, sc=8) 00:37:13.124 Read completed with error (sct=0, sc=8) 00:37:13.124 Read completed with error (sct=0, sc=8) 00:37:13.124 Write completed with error (sct=0, sc=8) 00:37:13.124 Read completed with error (sct=0, sc=8) 00:37:13.125 Write completed with error (sct=0, sc=8) 
00:37:13.125 Read completed with error (sct=0, sc=8) 00:37:13.125 Read completed with error (sct=0, sc=8) 00:37:13.125 Write completed with error (sct=0, sc=8) 00:37:13.125 Read completed with error (sct=0, sc=8) 00:37:13.125 Write completed with error (sct=0, sc=8) 00:37:13.125 Read completed with error (sct=0, sc=8) 00:37:13.125 Write completed with error (sct=0, sc=8) 00:37:13.125 Read completed with error (sct=0, sc=8) 00:37:13.125 Read completed with error (sct=0, sc=8) 00:37:13.125 Read completed with error (sct=0, sc=8) 00:37:13.125 Read completed with error (sct=0, sc=8) 00:37:13.125 Read completed with error (sct=0, sc=8) 00:37:13.125 Write completed with error (sct=0, sc=8) 00:37:13.125 [2024-11-02 14:53:05.022104] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c0b0 is same with the state(6) to be set 00:37:13.125 Read completed with error (sct=0, sc=8) 00:37:13.125 Read completed with error (sct=0, sc=8) 00:37:13.125 Read completed with error (sct=0, sc=8) 00:37:13.125 Write completed with error (sct=0, sc=8) 00:37:13.125 Read completed with error (sct=0, sc=8) 00:37:13.125 Read completed with error (sct=0, sc=8) 00:37:13.125 Read completed with error (sct=0, sc=8) 00:37:13.125 Read completed with error (sct=0, sc=8) 00:37:13.125 Read completed with error (sct=0, sc=8) 00:37:13.125 Read completed with error (sct=0, sc=8) 00:37:13.125 Read completed with error (sct=0, sc=8) 00:37:13.125 Read completed with error (sct=0, sc=8) 00:37:13.125 Write completed with error (sct=0, sc=8) 00:37:13.125 Read completed with error (sct=0, sc=8) 00:37:13.125 Write completed with error (sct=0, sc=8) 00:37:13.125 Write completed with error (sct=0, sc=8) 00:37:13.125 [2024-11-02 14:53:05.022949] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa6d800d640 is same with the state(6) to be set 00:37:13.125 Initializing NVMe Controllers 00:37:13.125 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:13.125 Controller IO queue size 128, less than required. 00:37:13.125 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:13.125 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:37:13.125 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:37:13.125 Initialization complete. Launching workers. 
00:37:13.125 ======================================================== 00:37:13.125 Latency(us) 00:37:13.125 Device Information : IOPS MiB/s Average min max 00:37:13.125 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 161.26 0.08 915916.98 1363.09 1047137.44 00:37:13.125 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 155.31 0.08 929878.27 489.76 1012980.58 00:37:13.125 ======================================================== 00:37:13.125 Total : 316.57 0.15 922766.33 489.76 1047137.44 00:37:13.125 00:37:13.125 [2024-11-02 14:53:05.023467] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2189d00 (9): Bad file descriptor 00:37:13.125 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:37:13.125 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:13.125 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:37:13.125 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1548662 00:37:13.125 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:37:13.746 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:37:13.746 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1548662 00:37:13.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1548662) - No such process 00:37:13.746 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1548662 00:37:13.746 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:37:13.746 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1548662 00:37:13.746 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:37:13.746 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:13.747 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:37:13.747 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:13.747 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 1548662 00:37:13.747 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:37:13.747 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:13.747 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:13.747 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:13.747 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:37:13.747 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:13.747 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:13.747 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:13.747 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:13.747 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:13.747 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:13.747 [2024-11-02 14:53:05.543852] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:13.747 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:13.747 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:13.747 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:13.747 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:13.747 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:13.747 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1549068 00:37:13.747 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:37:13.747 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:37:13.747 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1549068 00:37:13.747 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:13.747 [2024-11-02 14:53:05.594924] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
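[Editor's note] The second pass repeats the exercise with a shorter run: the subsystem, listener and Delay0 namespace are re-created, spdk_nvme_perf is launched again with -t 3, and the script then polls the perf process with kill -0 and half-second sleeps, as seen in the loop traced below. A hedged reconstruction of that wait loop follows; the script's exact timeout handling around lines 56-60 is not visible in this excerpt, so the break-on-timeout branch is an assumption:

  delay=0
  while kill -0 "$perf_pid" 2> /dev/null; do
      sleep 0.5
      # Assumed timeout handling; the trace only shows the (( delay++ > 20 )) check itself.
      if (( delay++ > 20 )); then
          echo "spdk_nvme_perf still running after ~10 s" >&2
          break
      fi
  done
  wait "$perf_pid"    # line 67 of delete_subsystem.sh reaps the pid once kill -0 stops succeeding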
00:37:14.005 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:14.005 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1549068 00:37:14.005 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:14.570 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:14.570 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1549068 00:37:14.570 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:15.136 14:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:15.136 14:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1549068 00:37:15.136 14:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:15.701 14:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:15.701 14:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1549068 00:37:15.701 14:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:16.267 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:16.267 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1549068 00:37:16.267 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:16.524 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:16.524 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1549068 00:37:16.524 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:16.782 Initializing NVMe Controllers 00:37:16.782 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:16.782 Controller IO queue size 128, less than required. 00:37:16.782 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:16.782 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:37:16.782 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:37:16.782 Initialization complete. Launching workers. 
00:37:16.782 ======================================================== 00:37:16.782 Latency(us) 00:37:16.782 Device Information : IOPS MiB/s Average min max 00:37:16.782 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004525.24 1000250.72 1011495.66 00:37:16.782 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004104.72 1000237.09 1041069.35 00:37:16.782 ======================================================== 00:37:16.782 Total : 256.00 0.12 1004314.98 1000237.09 1041069.35 00:37:16.782 00:37:17.040 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:17.040 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1549068 00:37:17.040 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1549068) - No such process 00:37:17.040 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1549068 00:37:17.040 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:37:17.040 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:37:17.040 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # nvmfcleanup 00:37:17.040 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:37:17.040 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:17.040 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:37:17.040 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:17.040 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:17.040 rmmod nvme_tcp 00:37:17.298 rmmod nvme_fabrics 00:37:17.298 rmmod nvme_keyring 00:37:17.298 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:17.299 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:37:17.299 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:37:17.299 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@513 -- # '[' -n 1548637 ']' 00:37:17.299 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # killprocess 1548637 00:37:17.299 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 1548637 ']' 00:37:17.299 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 1548637 00:37:17.299 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:37:17.299 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:17.299 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1548637 00:37:17.299 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:17.299 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:17.299 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1548637' 00:37:17.299 killing process with pid 1548637 00:37:17.299 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 1548637 00:37:17.299 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 1548637 00:37:17.557 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:37:17.558 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:37:17.558 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:37:17.558 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:37:17.558 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # iptables-save 00:37:17.558 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:37:17.558 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # iptables-restore 00:37:17.558 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:17.558 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:17.558 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:17.558 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:17.558 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:19.458 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:19.458 00:37:19.458 real 0m12.298s 00:37:19.458 user 0m24.497s 00:37:19.459 sys 0m3.785s 00:37:19.459 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:19.459 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:19.459 ************************************ 00:37:19.459 END TEST nvmf_delete_subsystem 00:37:19.459 ************************************ 00:37:19.459 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:37:19.459 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:37:19.459 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:37:19.459 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:19.718 ************************************ 00:37:19.718 START TEST nvmf_host_management 00:37:19.718 ************************************ 00:37:19.718 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:37:19.718 * Looking for test storage... 00:37:19.718 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:19.718 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:37:19.718 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:37:19.718 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:37:19.718 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:37:19.718 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:19.718 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:19.718 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:19.718 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:37:19.718 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:37:19.718 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:37:19.718 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:37:19.718 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:37:19.718 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:37:19.718 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:37:19.718 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:19.718 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:37:19.718 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:37:19.718 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:19.718 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:19.718 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:37:19.718 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:37:19.718 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:19.718 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:37:19.718 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:37:19.718 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:37:19.718 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:37:19.718 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:19.718 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:37:19.718 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:37:19.718 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:19.718 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:19.718 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:37:19.718 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:19.718 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:37:19.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:19.718 --rc genhtml_branch_coverage=1 00:37:19.718 --rc genhtml_function_coverage=1 00:37:19.718 --rc genhtml_legend=1 00:37:19.718 --rc geninfo_all_blocks=1 00:37:19.718 --rc geninfo_unexecuted_blocks=1 00:37:19.718 00:37:19.718 ' 00:37:19.718 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:37:19.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:19.718 --rc genhtml_branch_coverage=1 00:37:19.718 --rc genhtml_function_coverage=1 00:37:19.718 --rc genhtml_legend=1 00:37:19.718 --rc geninfo_all_blocks=1 00:37:19.718 --rc geninfo_unexecuted_blocks=1 00:37:19.718 00:37:19.718 ' 00:37:19.718 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:37:19.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:19.718 --rc genhtml_branch_coverage=1 00:37:19.718 --rc genhtml_function_coverage=1 00:37:19.718 --rc genhtml_legend=1 00:37:19.718 --rc geninfo_all_blocks=1 00:37:19.718 --rc geninfo_unexecuted_blocks=1 00:37:19.718 00:37:19.718 ' 00:37:19.718 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:37:19.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:19.718 --rc genhtml_branch_coverage=1 00:37:19.718 --rc genhtml_function_coverage=1 00:37:19.718 --rc genhtml_legend=1 
00:37:19.718 --rc geninfo_all_blocks=1 00:37:19.718 --rc geninfo_unexecuted_blocks=1 00:37:19.718 00:37:19.718 ' 00:37:19.718 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:19.718 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:37:19.718 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:19.718 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:19.718 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:19.718 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:19.718 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:19.718 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:19.718 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:19.718 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:19.718 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:19.718 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:19.718 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:19.718 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:19.718 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:19.718 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:19.718 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:19.718 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:19.718 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:19.718 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:37:19.718 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:19.718 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:19.718 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:19.719 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:19.719 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:19.719 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:19.719 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:37:19.719 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:19.719 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:37:19.719 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:19.719 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:19.719 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:19.719 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:19.719 14:53:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:19.719 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:19.719 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:19.719 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:19.719 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:19.719 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:19.719 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:19.719 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:19.719 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:37:19.719 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:37:19.719 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:19.719 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@472 -- # prepare_net_devs 00:37:19.719 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@434 -- # local -g is_hw=no 00:37:19.719 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@436 -- # remove_spdk_ns 00:37:19.719 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:19.719 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:19.719 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:19.719 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:37:19.719 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:37:19.719 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:37:19.719 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:22.251 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:22.251 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:37:22.251 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:22.251 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:22.251 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:22.251 14:53:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:22.251 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:22.251 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:37:22.251 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:22.251 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:37:22.251 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:37:22.251 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:37:22.251 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:37:22.251 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:37:22.251 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@354 -- 
# pci_devs=("${e810[@]}") 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:22.252 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:22.252 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ up == up ]] 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:22.252 
14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:22.252 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ up == up ]] 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:22.252 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # is_hw=yes 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:22.252 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:22.252 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:37:22.252 00:37:22.252 --- 10.0.0.2 ping statistics --- 00:37:22.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:22.252 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:22.252 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:22.252 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:37:22.252 00:37:22.252 --- 10.0.0.1 ping statistics --- 00:37:22.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:22.252 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # return 0 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:37:22.252 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:22.253 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:37:22.253 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:37:22.253 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:22.253 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:37:22.253 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:37:22.253 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:37:22.253 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:37:22.253 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:37:22.253 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:37:22.253 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:22.253 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:22.253 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@505 -- # nvmfpid=1551517 00:37:22.253 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:37:22.253 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@506 -- # waitforlisten 1551517 00:37:22.253 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1551517 ']' 00:37:22.253 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:22.253 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:22.253 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:37:22.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:22.253 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:22.253 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:22.253 [2024-11-02 14:53:14.008122] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:22.253 [2024-11-02 14:53:14.009250] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:37:22.253 [2024-11-02 14:53:14.009317] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:22.253 [2024-11-02 14:53:14.075229] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:22.253 [2024-11-02 14:53:14.166152] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:22.253 [2024-11-02 14:53:14.166208] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:22.253 [2024-11-02 14:53:14.166222] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:22.253 [2024-11-02 14:53:14.166233] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:22.253 [2024-11-02 14:53:14.166247] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:22.253 [2024-11-02 14:53:14.166356] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:37:22.253 [2024-11-02 14:53:14.166442] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:37:22.253 [2024-11-02 14:53:14.166419] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:37:22.253 [2024-11-02 14:53:14.166445] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:37:22.253 [2024-11-02 14:53:14.266739] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:22.253 [2024-11-02 14:53:14.266917] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:22.253 [2024-11-02 14:53:14.267233] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:37:22.253 [2024-11-02 14:53:14.267888] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:22.253 [2024-11-02 14:53:14.268149] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
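For readability, the nvmftestinit trace above condenses to the following sequence. This is a sketch assembled only from commands already logged in this run (interface names, addresses, namespace name and nvmf_tgt flags are taken verbatim from the trace; the iptables comment tag and helper wrappers are omitted), intended as a summary rather than a replacement for the test helpers:

  # Target port cvl_0_0 (0000:0a:00.0) is isolated in its own network namespace;
  # the initiator keeps cvl_0_1 (0000:0a:00.1) in the root namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # allow NVMe/TCP in
  ping -c 1 10.0.0.2                                                   # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target ns -> root ns
  # host_management then starts the target inside the namespace, in interrupt
  # mode, on cores 1-4 (-m 0x1E), which is what produces the reactor and
  # spdk_thread interrupt-mode notices above:
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --interrupt-mode -m 0x1E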
00:37:22.253 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:22.253 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:37:22.253 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:37:22.253 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:22.253 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:22.512 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:22.512 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:22.512 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.512 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:22.512 [2024-11-02 14:53:14.319195] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:22.512 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.512 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:37:22.512 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:22.512 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:22.512 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:37:22.512 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:37:22.512 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:37:22.512 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.512 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:22.512 Malloc0 00:37:22.512 [2024-11-02 14:53:14.383338] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:22.512 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.512 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:37:22.512 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:22.512 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:22.512 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1551567 00:37:22.512 14:53:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1551567 /var/tmp/bdevperf.sock 00:37:22.512 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1551567 ']' 00:37:22.512 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:22.512 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:37:22.512 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:37:22.512 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:22.512 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:37:22.512 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:22.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:22.512 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:37:22.512 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:22.512 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:37:22.512 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:22.512 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:37:22.512 { 00:37:22.512 "params": { 00:37:22.512 "name": "Nvme$subsystem", 00:37:22.512 "trtype": "$TEST_TRANSPORT", 00:37:22.512 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:22.512 "adrfam": "ipv4", 00:37:22.512 "trsvcid": "$NVMF_PORT", 00:37:22.512 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:22.512 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:22.512 "hdgst": ${hdgst:-false}, 00:37:22.512 "ddgst": ${ddgst:-false} 00:37:22.512 }, 00:37:22.512 "method": "bdev_nvme_attach_controller" 00:37:22.512 } 00:37:22.512 EOF 00:37:22.512 )") 00:37:22.512 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:37:22.512 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 
00:37:22.512 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:37:22.512 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:37:22.512 "params": { 00:37:22.512 "name": "Nvme0", 00:37:22.512 "trtype": "tcp", 00:37:22.512 "traddr": "10.0.0.2", 00:37:22.512 "adrfam": "ipv4", 00:37:22.512 "trsvcid": "4420", 00:37:22.512 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:22.512 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:22.512 "hdgst": false, 00:37:22.512 "ddgst": false 00:37:22.512 }, 00:37:22.512 "method": "bdev_nvme_attach_controller" 00:37:22.512 }' 00:37:22.512 [2024-11-02 14:53:14.460053] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:37:22.512 [2024-11-02 14:53:14.460141] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1551567 ] 00:37:22.512 [2024-11-02 14:53:14.523338] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:22.771 [2024-11-02 14:53:14.611360] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:37:23.029 Running I/O for 10 seconds... 00:37:23.029 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:23.029 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:37:23.029 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:37:23.029 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:23.029 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:23.029 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:23.029 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:23.029 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:37:23.029 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:37:23.029 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:37:23.029 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:37:23.029 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:37:23.029 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:37:23.029 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:37:23.029 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:37:23.029 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:37:23.029 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:23.029 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:23.029 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:23.029 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=64 00:37:23.029 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 64 -ge 100 ']' 00:37:23.029 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:37:23.289 14:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:37:23.289 14:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:37:23.289 14:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:37:23.290 14:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:37:23.290 14:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:23.290 14:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:23.290 14:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:23.290 14:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:37:23.290 14:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:37:23.290 14:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:37:23.290 14:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:37:23.290 14:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:37:23.290 14:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:37:23.290 14:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:23.290 14:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:23.290 [2024-11-02 14:53:15.239144] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec51b0 is same with the state(6) to be set 00:37:23.290 [2024-11-02 14:53:15.239199] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec51b0 is same with the state(6) to be set 00:37:23.290 [2024-11-02 14:53:15.239220] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec51b0 is same with the state(6) to be set 00:37:23.290 [2024-11-02 14:53:15.239234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec51b0 is same with the state(6) to be set 00:37:23.290 [2024-11-02 14:53:15.239247] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec51b0 is same with the state(6) to be set 00:37:23.290 [2024-11-02 14:53:15.239267] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec51b0 is same with the state(6) to be set 00:37:23.290 [2024-11-02 14:53:15.239290] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec51b0 is same with the state(6) to be set 00:37:23.290 [2024-11-02 14:53:15.239303] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec51b0 is same with the state(6) to be set 00:37:23.290 [2024-11-02 14:53:15.239316] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec51b0 is same with the state(6) to be set 00:37:23.290 [2024-11-02 14:53:15.239329] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec51b0 is same with the state(6) to be set 00:37:23.290 [2024-11-02 14:53:15.239341] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec51b0 is same with the state(6) to be set 00:37:23.290 [2024-11-02 14:53:15.239353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec51b0 is same with the state(6) to be set 00:37:23.290 [2024-11-02 14:53:15.239365] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec51b0 is same with the state(6) to be set 00:37:23.290 [2024-11-02 14:53:15.239378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec51b0 is same with the state(6) to be set 00:37:23.290 [2024-11-02 14:53:15.239390] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec51b0 is same with the state(6) to be set 00:37:23.290 [2024-11-02 14:53:15.239402] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec51b0 is same with the state(6) to be set 00:37:23.290 [2024-11-02 14:53:15.239415] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec51b0 is same with the state(6) to be set 00:37:23.290 [2024-11-02 14:53:15.239427] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec51b0 is same with the state(6) to be set 00:37:23.290 [2024-11-02 14:53:15.239439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec51b0 is same with the state(6) to be set 00:37:23.290 [2024-11-02 14:53:15.239452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec51b0 is same with the state(6) to be set 00:37:23.290 [2024-11-02 14:53:15.239464] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec51b0 is same with the state(6) to be set 00:37:23.290 [2024-11-02 14:53:15.239477] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec51b0 is same with the state(6) to be set 00:37:23.290 [2024-11-02 14:53:15.239500] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec51b0 is same with the state(6) to be set 00:37:23.290 [2024-11-02 14:53:15.239514] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec51b0 is same with the state(6) to be set 
00:37:23.290 [2024-11-02 14:53:15.239543] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec51b0 is same with the state(6) to be set 00:37:23.290 [2024-11-02 14:53:15.239555] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec51b0 is same with the state(6) to be set 00:37:23.290 [2024-11-02 14:53:15.239567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec51b0 is same with the state(6) to be set 00:37:23.290 [2024-11-02 14:53:15.239590] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec51b0 is same with the state(6) to be set 00:37:23.290 [2024-11-02 14:53:15.239602] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec51b0 is same with the state(6) to be set 00:37:23.290 [2024-11-02 14:53:15.239613] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec51b0 is same with the state(6) to be set 00:37:23.290 [2024-11-02 14:53:15.239625] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec51b0 is same with the state(6) to be set 00:37:23.290 [2024-11-02 14:53:15.239647] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec51b0 is same with the state(6) to be set 00:37:23.290 [2024-11-02 14:53:15.239658] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec51b0 is same with the state(6) to be set 00:37:23.290 [2024-11-02 14:53:15.239670] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec51b0 is same with the state(6) to be set 00:37:23.290 [2024-11-02 14:53:15.239683] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec51b0 is same with the state(6) to be set 00:37:23.290 [2024-11-02 14:53:15.239695] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec51b0 is same with the state(6) to be set 00:37:23.290 [2024-11-02 14:53:15.239707] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec51b0 is same with the state(6) to be set 00:37:23.290 [2024-11-02 14:53:15.239719] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec51b0 is same with the state(6) to be set 00:37:23.290 [2024-11-02 14:53:15.239730] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec51b0 is same with the state(6) to be set 00:37:23.290 [2024-11-02 14:53:15.239743] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec51b0 is same with the state(6) to be set 00:37:23.290 [2024-11-02 14:53:15.239755] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec51b0 is same with the state(6) to be set 00:37:23.290 [2024-11-02 14:53:15.239767] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec51b0 is same with the state(6) to be set 00:37:23.290 [2024-11-02 14:53:15.239779] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec51b0 is same with the state(6) to be set 00:37:23.290 [2024-11-02 14:53:15.239790] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec51b0 is same with the state(6) to be set 00:37:23.290 [2024-11-02 14:53:15.239802] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec51b0 is same with the state(6) to be set 00:37:23.290 [2024-11-02 14:53:15.239814] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec51b0 is 
same with the state(6) to be set 00:37:23.290 [2024-11-02 14:53:15.239826] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec51b0 is same with the state(6) to be set 00:37:23.290 [2024-11-02 14:53:15.242283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:67584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.290 [2024-11-02 14:53:15.242326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.290 [2024-11-02 14:53:15.242367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:67712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.290 [2024-11-02 14:53:15.242384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.290 [2024-11-02 14:53:15.242401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:67840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.290 [2024-11-02 14:53:15.242416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.290 [2024-11-02 14:53:15.242432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:67968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.290 [2024-11-02 14:53:15.242447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.290 [2024-11-02 14:53:15.242463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:68096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.290 [2024-11-02 14:53:15.242477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.290 [2024-11-02 14:53:15.242492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:68224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.290 [2024-11-02 14:53:15.242506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.290 [2024-11-02 14:53:15.242522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:68352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.290 [2024-11-02 14:53:15.242535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.290 [2024-11-02 14:53:15.242553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:68480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.290 [2024-11-02 14:53:15.242582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.290 [2024-11-02 14:53:15.242598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:68608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.290 [2024-11-02 14:53:15.242611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.290 [2024-11-02 14:53:15.242626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:68736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:37:23.290 [2024-11-02 14:53:15.242639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.290 [2024-11-02 14:53:15.242654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:68864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.290 [2024-11-02 14:53:15.242667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.290 [2024-11-02 14:53:15.242682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:68992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.290 [2024-11-02 14:53:15.242695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.290 [2024-11-02 14:53:15.242710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:69120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.291 [2024-11-02 14:53:15.242723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.291 [2024-11-02 14:53:15.242737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:69248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.291 [2024-11-02 14:53:15.242755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.291 [2024-11-02 14:53:15.242771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:69376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.291 [2024-11-02 14:53:15.242784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.291 [2024-11-02 14:53:15.242798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:69504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.291 [2024-11-02 14:53:15.242812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.291 [2024-11-02 14:53:15.242827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:69632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.291 [2024-11-02 14:53:15.242840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.291 [2024-11-02 14:53:15.242855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:69760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.291 [2024-11-02 14:53:15.242868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.291 [2024-11-02 14:53:15.242882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:69888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.291 [2024-11-02 14:53:15.242896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.291 [2024-11-02 14:53:15.242911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:70016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.291 [2024-11-02 
14:53:15.242924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.291 [2024-11-02 14:53:15.242939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:70144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.291 [2024-11-02 14:53:15.242953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.291 [2024-11-02 14:53:15.242985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:70272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.291 [2024-11-02 14:53:15.243000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.291 [2024-11-02 14:53:15.243015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:70400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.291 [2024-11-02 14:53:15.243030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.291 [2024-11-02 14:53:15.243046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:70528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.291 [2024-11-02 14:53:15.243060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.291 [2024-11-02 14:53:15.243075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:70656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.291 [2024-11-02 14:53:15.243089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.291 [2024-11-02 14:53:15.243104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:70784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.291 [2024-11-02 14:53:15.243118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.291 [2024-11-02 14:53:15.243137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:70912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.291 [2024-11-02 14:53:15.243151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.291 [2024-11-02 14:53:15.243167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:71040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.291 [2024-11-02 14:53:15.243181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.291 [2024-11-02 14:53:15.243196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:71168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.291 [2024-11-02 14:53:15.243210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.291 [2024-11-02 14:53:15.243226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:71296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.291 [2024-11-02 14:53:15.243239] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:37:23.291 [2024-11-02 14:53:15.243271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:71424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:23.291 [2024-11-02 14:53:15.243286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same ABORTED - SQ DELETION (00/08) completion is logged for each remaining outstanding command on qid:1: READ cid:47-63 (lba 71552-73600) and WRITE cid:0-15 (lba 73728-75648), all len:128 ...]
00:37:23.291 14:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:23.291 14:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:37:23.291 14:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:23.292 14:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:37:23.292 [2024-11-02 14:53:15.244378] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e9be10 was disconnected and freed. reset controller.
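The shell trace interleaved above shows host_management.sh authorizing the initiator's host NQN on the subsystem while the aborted I/O drains. A minimal sketch of that step run by hand, assuming the rpc.py path of this workspace and the NQNs printed in the trace; the get_qpairs call is only an optional sanity check and is not part of the test:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # allow host0 to connect to subsystem cnode0
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
  # optional: list active qpairs on the subsystem to confirm the host connected
  $RPC nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode0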
00:37:23.292 [2024-11-02 14:53:15.245544] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:37:23.292 task offset: 67584 on job bdev=Nvme0n1 fails 00:37:23.292 00:37:23.292 Latency(us) 00:37:23.292 [2024-11-02T13:53:15.347Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:23.292 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:37:23.292 Job: Nvme0n1 ended in about 0.38 seconds with error 00:37:23.292 Verification LBA range: start 0x0 length 0x400 00:37:23.292 Nvme0n1 : 0.38 1386.21 86.64 168.03 0.00 39979.43 2500.08 34369.99 00:37:23.292 [2024-11-02T13:53:15.347Z] =================================================================================================================== 00:37:23.292 [2024-11-02T13:53:15.347Z] Total : 1386.21 86.64 168.03 0.00 39979.43 2500.08 34369.99 00:37:23.292 [2024-11-02 14:53:15.247725] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:37:23.292 [2024-11-02 14:53:15.247766] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c83090 (9): Bad file descriptor 00:37:23.292 14:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:23.292 14:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:37:23.292 [2024-11-02 14:53:15.340428] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:37:24.226 14:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1551567 00:37:24.226 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1551567) - No such process 00:37:24.226 14:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:37:24.226 14:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:37:24.226 14:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:37:24.226 14:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:37:24.226 14:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:37:24.226 14:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:37:24.226 14:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:37:24.226 14:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:37:24.226 { 00:37:24.226 "params": { 00:37:24.226 "name": "Nvme$subsystem", 00:37:24.226 "trtype": "$TEST_TRANSPORT", 00:37:24.226 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:24.226 "adrfam": "ipv4", 00:37:24.226 "trsvcid": "$NVMF_PORT", 00:37:24.226 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:24.226 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:24.226 "hdgst": ${hdgst:-false}, 00:37:24.226 "ddgst": 
${ddgst:-false} 00:37:24.226 }, 00:37:24.226 "method": "bdev_nvme_attach_controller" 00:37:24.226 } 00:37:24.226 EOF 00:37:24.226 )") 00:37:24.226 14:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:37:24.226 14:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 00:37:24.226 14:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:37:24.226 14:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:37:24.226 "params": { 00:37:24.226 "name": "Nvme0", 00:37:24.226 "trtype": "tcp", 00:37:24.226 "traddr": "10.0.0.2", 00:37:24.226 "adrfam": "ipv4", 00:37:24.226 "trsvcid": "4420", 00:37:24.227 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:24.227 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:24.227 "hdgst": false, 00:37:24.227 "ddgst": false 00:37:24.227 }, 00:37:24.227 "method": "bdev_nvme_attach_controller" 00:37:24.227 }' 00:37:24.485 [2024-11-02 14:53:16.302362] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:37:24.485 [2024-11-02 14:53:16.302439] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1551835 ] 00:37:24.485 [2024-11-02 14:53:16.363261] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:24.485 [2024-11-02 14:53:16.450745] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:37:24.743 Running I/O for 1 seconds... 00:37:26.117 1483.00 IOPS, 92.69 MiB/s 00:37:26.117 Latency(us) 00:37:26.117 [2024-11-02T13:53:18.172Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:26.117 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:37:26.117 Verification LBA range: start 0x0 length 0x400 00:37:26.117 Nvme0n1 : 1.01 1536.17 96.01 0.00 0.00 40840.77 2160.26 33593.27 00:37:26.117 [2024-11-02T13:53:18.172Z] =================================================================================================================== 00:37:26.117 [2024-11-02T13:53:18.172Z] Total : 1536.17 96.01 0.00 0.00 40840.77 2160.26 33593.27 00:37:26.117 14:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:37:26.117 14:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:37:26.117 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:37:26.117 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:37:26.117 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:37:26.117 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # nvmfcleanup 00:37:26.117 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:37:26.117 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # 
'[' tcp == tcp ']' 00:37:26.117 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:37:26.117 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:26.117 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:26.117 rmmod nvme_tcp 00:37:26.117 rmmod nvme_fabrics 00:37:26.117 rmmod nvme_keyring 00:37:26.117 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:26.117 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:37:26.117 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:37:26.117 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@513 -- # '[' -n 1551517 ']' 00:37:26.117 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@514 -- # killprocess 1551517 00:37:26.118 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 1551517 ']' 00:37:26.118 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 1551517 00:37:26.118 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:37:26.118 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:26.118 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1551517 00:37:26.118 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:26.118 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:26.118 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1551517' 00:37:26.118 killing process with pid 1551517 00:37:26.118 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 1551517 00:37:26.118 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 1551517 00:37:26.376 [2024-11-02 14:53:18.334811] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:37:26.376 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:37:26.376 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:37:26.376 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:37:26.376 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:37:26.376 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-save 00:37:26.376 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:37:26.376 14:53:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-restore 00:37:26.376 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:26.377 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:26.377 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:26.377 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:26.377 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:28.909 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:28.909 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:37:28.909 00:37:28.909 real 0m8.898s 00:37:28.909 user 0m17.801s 00:37:28.909 sys 0m3.779s 00:37:28.909 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:28.909 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:28.909 ************************************ 00:37:28.909 END TEST nvmf_host_management 00:37:28.909 ************************************ 00:37:28.909 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:37:28.909 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:37:28.909 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:28.909 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:28.909 ************************************ 00:37:28.909 START TEST nvmf_lvol 00:37:28.909 ************************************ 00:37:28.909 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:37:28.909 * Looking for test storage... 
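run_test above essentially wraps the per-test script with timing and the START/END banners, so the lvol suite can also be reproduced outside the harness. A minimal sketch, assuming the workspace layout of this job and root privileges (the netns and iptables steps later in the script appear to require them):

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # same arguments the harness passes: TCP transport, target running in interrupt mode
  sudo ./test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode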
00:37:28.909 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:28.909 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:37:28.909 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:37:28.909 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:37:28.909 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:37:28.909 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:28.909 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:28.909 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:28.909 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:37:28.909 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:37:28.909 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:37:28.909 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:37:28.909 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:37:28.909 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:37:28.909 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:37:28.909 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:28.909 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:37:28.909 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:37:28.909 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:28.909 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:28.909 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:37:28.909 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:37:28.909 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:28.909 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:37:28.909 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:37:28.909 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:37:28.909 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:37:28.909 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:28.909 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:37:28.909 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:37:28.909 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:28.909 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:28.909 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:37:28.909 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:28.909 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:37:28.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:28.909 --rc genhtml_branch_coverage=1 00:37:28.909 --rc genhtml_function_coverage=1 00:37:28.909 --rc genhtml_legend=1 00:37:28.909 --rc geninfo_all_blocks=1 00:37:28.909 --rc geninfo_unexecuted_blocks=1 00:37:28.909 00:37:28.909 ' 00:37:28.909 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:37:28.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:28.909 --rc genhtml_branch_coverage=1 00:37:28.909 --rc genhtml_function_coverage=1 00:37:28.909 --rc genhtml_legend=1 00:37:28.909 --rc geninfo_all_blocks=1 00:37:28.909 --rc geninfo_unexecuted_blocks=1 00:37:28.909 00:37:28.909 ' 00:37:28.910 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:37:28.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:28.910 --rc genhtml_branch_coverage=1 00:37:28.910 --rc genhtml_function_coverage=1 00:37:28.910 --rc genhtml_legend=1 00:37:28.910 --rc geninfo_all_blocks=1 00:37:28.910 --rc geninfo_unexecuted_blocks=1 00:37:28.910 00:37:28.910 ' 00:37:28.910 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:37:28.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:28.910 --rc genhtml_branch_coverage=1 00:37:28.910 --rc genhtml_function_coverage=1 00:37:28.910 --rc genhtml_legend=1 00:37:28.910 --rc geninfo_all_blocks=1 00:37:28.910 --rc geninfo_unexecuted_blocks=1 00:37:28.910 00:37:28.910 ' 00:37:28.910 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:28.910 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:37:28.910 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:28.910 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:28.910 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:28.910 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:28.910 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:28.910 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:28.910 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:28.910 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:28.910 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:28.910 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:28.910 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:28.910 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:28.910 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:28.910 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:28.910 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:28.910 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:28.910 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:28.910 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:37:28.910 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:28.910 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:28.910 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:28.910 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:28.910 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:28.910 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:28.910 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:37:28.910 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:28.910 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:37:28.910 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:28.910 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:28.910 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:28.910 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:28.910 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:28.910 14:53:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:28.910 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:28.910 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:28.910 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:28.910 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:28.910 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:28.910 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:28.910 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:37:28.910 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:37:28.910 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:28.910 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:37:28.910 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:37:28.910 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:28.910 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@472 -- # prepare_net_devs 00:37:28.910 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@434 -- # local -g is_hw=no 00:37:28.910 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@436 -- # remove_spdk_ns 00:37:28.910 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:28.910 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:28.910 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:28.910 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:37:28.910 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:37:28.910 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:37:28.910 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:30.810 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:30.810 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:37:30.810 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:30.810 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:30.810 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:30.810 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:37:30.810 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:30.810 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:37:30.810 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:30.810 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:37:30.810 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:37:30.810 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:37:30.810 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:37:30.810 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:37:30.810 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:37:30.810 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:30.810 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:30.810 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:30.810 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:30.810 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:30.810 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:30.810 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:30.810 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:30.810 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:30.810 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:30.810 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:30.810 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:37:30.810 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:37:30.810 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:37:30.810 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:37:30.810 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:37:30.810 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:37:30.810 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:37:30.810 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol 
-- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:30.810 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:30.810 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:37:30.810 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:37:30.810 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:30.810 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:30.810 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:37:30.810 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:37:30.810 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:30.810 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:30.810 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:37:30.810 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:37:30.810 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:30.810 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:30.810 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:37:30.810 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:37:30.810 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:37:30.810 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:37:30.810 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:37:30.810 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:30.810 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:37:30.810 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:30.810 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ up == up ]] 00:37:30.810 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:37:30.810 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:30.810 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:30.810 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:30.810 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:37:30.810 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:37:30.810 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
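The loop traced above maps the supported NICs to their kernel net devices by globbing /sys/bus/pci/devices/$pci/net/, which is how the cvl_0_0/cvl_0_1 interfaces on the two E810 ports get picked up. A rough standalone equivalent, assuming pciutils is installed and using the 8086:159b device ID reported in the trace:

  # list kernel net interfaces backed by Intel E810 (8086:159b) ports
  for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
      for path in /sys/bus/pci/devices/"$pci"/net/*; do
          [ -e "$path" ] && echo "Found net device under $pci: $(basename "$path")"
      done
  done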
00:37:30.810 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:37:30.810 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:30.810 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ up == up ]] 00:37:30.810 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:37:30.810 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:30.810 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:30.810 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:30.810 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:37:30.810 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:37:30.810 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # is_hw=yes 00:37:30.810 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:37:30.810 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:37:30.810 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:37:30.811 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:30.811 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:30.811 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:30.811 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:30.811 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:30.811 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:30.811 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:30.811 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:30.811 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:30.811 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:30.811 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:30.811 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:30.811 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:30.811 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:30.811 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:30.811 14:53:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:30.811 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:30.811 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:30.811 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:30.811 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:30.811 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:30.811 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:30.811 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:30.811 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:30.811 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:37:30.811 00:37:30.811 --- 10.0.0.2 ping statistics --- 00:37:30.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:30.811 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:37:30.811 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:30.811 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:30.811 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:37:30.811 00:37:30.811 --- 10.0.0.1 ping statistics --- 00:37:30.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:30.811 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:37:30.811 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:30.811 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # return 0 00:37:30.811 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:37:30.811 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:30.811 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:37:30.811 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:37:30.811 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:30.811 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:37:30.811 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:37:31.070 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:37:31.070 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:37:31.070 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:31.070 14:53:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:31.070 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@505 -- # nvmfpid=1553921 00:37:31.070 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:37:31.070 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@506 -- # waitforlisten 1553921 00:37:31.070 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 1553921 ']' 00:37:31.070 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:31.070 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:31.070 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:31.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:31.070 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:31.070 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:31.070 [2024-11-02 14:53:22.925876] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:31.070 [2024-11-02 14:53:22.927212] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:37:31.070 [2024-11-02 14:53:22.927295] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:31.070 [2024-11-02 14:53:23.009388] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:31.070 [2024-11-02 14:53:23.103002] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:31.070 [2024-11-02 14:53:23.103065] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:31.070 [2024-11-02 14:53:23.103078] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:31.070 [2024-11-02 14:53:23.103090] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:31.070 [2024-11-02 14:53:23.103100] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:31.070 [2024-11-02 14:53:23.103165] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:37:31.070 [2024-11-02 14:53:23.103240] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:37:31.070 [2024-11-02 14:53:23.103244] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:37:31.329 [2024-11-02 14:53:23.208654] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:31.329 [2024-11-02 14:53:23.208886] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
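nvmfappstart above launches nvmf_tgt inside the test namespace and waitforlisten blocks until the RPC socket answers before any configuration RPCs are issued. A simplified sketch of that pattern, with the binary path, namespace name, and core mask taken from the trace and a plain polling loop standing in for waitforlisten:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # start the target on cores 0-2 in interrupt mode, inside the target namespace (run as root)
  ip netns exec cvl_0_0_ns_spdk "$SPDK"/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --interrupt-mode -m 0x7 &
  nvmfpid=$!
  # poll until the RPC server answers on the default /var/tmp/spdk.sock
  until "$SPDK"/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
  echo "nvmf_tgt is up (pid $nvmfpid)"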
00:37:31.329 [2024-11-02 14:53:23.208904] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:31.329 [2024-11-02 14:53:23.209168] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:37:31.329 14:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:31.329 14:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:37:31.329 14:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:37:31.329 14:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:31.329 14:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:31.329 14:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:31.329 14:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:31.586 [2024-11-02 14:53:23.503961] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:31.586 14:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:31.845 14:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:37:31.845 14:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:32.103 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:37:32.103 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:37:32.361 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:37:32.928 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=039ffe1e-4638-4c38-9b1b-7245a2a90bc5 00:37:32.928 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 039ffe1e-4638-4c38-9b1b-7245a2a90bc5 lvol 20 00:37:33.186 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=45b10f6b-cecc-4f2d-8610-f2a2ff3172e2 00:37:33.186 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:33.444 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 45b10f6b-cecc-4f2d-8610-f2a2ff3172e2 00:37:33.702 14:53:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:33.960 [2024-11-02 14:53:25.784185] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:33.960 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:34.219 14:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1554346 00:37:34.219 14:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:37:34.219 14:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:37:35.154 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 45b10f6b-cecc-4f2d-8610-f2a2ff3172e2 MY_SNAPSHOT 00:37:35.413 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=01e4f22b-afd9-4339-aac9-38fb98a4c74f 00:37:35.413 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 45b10f6b-cecc-4f2d-8610-f2a2ff3172e2 30 00:37:35.671 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 01e4f22b-afd9-4339-aac9-38fb98a4c74f MY_CLONE 00:37:36.238 14:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=12212ae2-6144-4bdd-b973-43a6fd0a3273 00:37:36.238 14:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 12212ae2-6144-4bdd-b973-43a6fd0a3273 00:37:36.805 14:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1554346 00:37:44.924 Initializing NVMe Controllers 00:37:44.924 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:37:44.924 Controller IO queue size 128, less than required. 00:37:44.924 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:44.924 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:37:44.924 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:37:44.924 Initialization complete. Launching workers. 
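The RPC sequence above is the heart of the lvol test: two malloc bdevs striped into raid0 host an lvstore, a logical volume carved from it is exported through cnode0, and snapshot, resize, clone, and inflate are exercised while spdk_nvme_perf issues random writes to it. A condensed sketch of the same flow, assuming the rpc.py path of this workspace; the shell variables are placeholders for the UUIDs printed in the trace:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # backing store: two 64 MiB / 512 B-block malloc bdevs striped into raid0
  $RPC bdev_malloc_create 64 512
  $RPC bdev_malloc_create 64 512
  $RPC bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  lvs=$($RPC bdev_lvol_create_lvstore raid0 lvs)
  lvol=$($RPC bdev_lvol_create -u "$lvs" lvol 20)      # 20 MiB volume, grown to 30 below
  # export the volume over NVMe/TCP
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # snapshot the live volume, grow the original, then clone the snapshot and inflate the clone
  snap=$($RPC bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
  $RPC bdev_lvol_resize "$lvol" 30
  clone=$($RPC bdev_lvol_clone "$snap" MY_CLONE)
  $RPC bdev_lvol_inflate "$clone"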
00:37:44.924 ======================================================== 00:37:44.924 Latency(us) 00:37:44.924 Device Information : IOPS MiB/s Average min max 00:37:44.924 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10536.60 41.16 12151.36 1778.97 63102.56 00:37:44.924 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9806.30 38.31 13059.74 2417.06 55198.89 00:37:44.924 ======================================================== 00:37:44.924 Total : 20342.90 79.46 12589.25 1778.97 63102.56 00:37:44.924 00:37:44.924 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:44.924 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 45b10f6b-cecc-4f2d-8610-f2a2ff3172e2 00:37:45.182 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 039ffe1e-4638-4c38-9b1b-7245a2a90bc5 00:37:45.440 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:37:45.441 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:37:45.441 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:37:45.441 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # nvmfcleanup 00:37:45.441 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:37:45.441 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:45.441 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:37:45.441 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:45.441 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:45.441 rmmod nvme_tcp 00:37:45.441 rmmod nvme_fabrics 00:37:45.441 rmmod nvme_keyring 00:37:45.441 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:45.441 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:37:45.441 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:37:45.441 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@513 -- # '[' -n 1553921 ']' 00:37:45.441 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@514 -- # killprocess 1553921 00:37:45.441 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 1553921 ']' 00:37:45.441 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 1553921 00:37:45.441 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:37:45.441 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:45.441 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1553921 00:37:45.441 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:45.441 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:45.441 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1553921' 00:37:45.441 killing process with pid 1553921 00:37:45.441 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 1553921 00:37:45.441 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 1553921 00:37:46.015 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:37:46.015 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:37:46.015 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:37:46.015 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:37:46.015 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-save 00:37:46.015 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:37:46.015 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-restore 00:37:46.015 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:46.015 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:46.015 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:46.015 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:46.015 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:47.938 14:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:47.938 00:37:47.938 real 0m19.366s 00:37:47.938 user 0m56.327s 00:37:47.938 sys 0m8.071s 00:37:47.938 14:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:47.938 14:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:47.938 ************************************ 00:37:47.938 END TEST nvmf_lvol 00:37:47.938 ************************************ 00:37:47.938 14:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:37:47.938 14:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:37:47.938 14:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:47.938 14:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:47.938 ************************************ 00:37:47.938 START TEST nvmf_lvs_grow 00:37:47.938 
************************************ 00:37:47.938 14:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:37:47.938 * Looking for test storage... 00:37:47.938 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:47.938 14:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:37:47.938 14:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:37:47.938 14:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:37:48.197 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:37:48.197 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:48.197 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:48.197 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:48.197 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:37:48.197 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:37:48.197 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:37:48.197 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:37:48.197 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:37:48.197 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:37:48.197 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:37:48.197 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:48.197 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:37:48.197 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:37:48.197 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:48.197 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:48.197 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:37:48.197 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:37:48.197 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:48.197 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:37:48.197 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:37:48.197 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:37:48.197 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:37:48.197 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:48.197 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:37:48.197 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:37:48.197 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:48.197 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:48.197 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:37:48.197 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:48.197 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:37:48.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:48.197 --rc genhtml_branch_coverage=1 00:37:48.197 --rc genhtml_function_coverage=1 00:37:48.197 --rc genhtml_legend=1 00:37:48.197 --rc geninfo_all_blocks=1 00:37:48.197 --rc geninfo_unexecuted_blocks=1 00:37:48.197 00:37:48.197 ' 00:37:48.197 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:37:48.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:48.197 --rc genhtml_branch_coverage=1 00:37:48.197 --rc genhtml_function_coverage=1 00:37:48.197 --rc genhtml_legend=1 00:37:48.197 --rc geninfo_all_blocks=1 00:37:48.197 --rc geninfo_unexecuted_blocks=1 00:37:48.197 00:37:48.197 ' 00:37:48.197 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:37:48.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:48.197 --rc genhtml_branch_coverage=1 00:37:48.197 --rc genhtml_function_coverage=1 00:37:48.197 --rc genhtml_legend=1 00:37:48.197 --rc geninfo_all_blocks=1 00:37:48.197 --rc geninfo_unexecuted_blocks=1 00:37:48.197 00:37:48.197 ' 00:37:48.197 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:37:48.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:48.197 --rc genhtml_branch_coverage=1 00:37:48.197 --rc genhtml_function_coverage=1 00:37:48.197 --rc genhtml_legend=1 00:37:48.197 --rc geninfo_all_blocks=1 00:37:48.197 --rc geninfo_unexecuted_blocks=1 00:37:48.197 00:37:48.197 ' 00:37:48.197 14:53:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:48.197 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:37:48.197 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:48.197 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:48.197 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:48.197 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:48.197 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:48.197 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:48.198 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:48.198 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:48.198 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:48.198 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:48.198 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:48.198 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:48.198 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:48.198 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:48.198 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:48.198 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:48.198 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:48.198 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:37:48.198 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:48.198 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:48.198 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:48.198 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:48.198 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:48.198 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:48.198 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:37:48.198 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:48.198 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:37:48.198 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:48.198 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:48.198 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:48.198 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:48.198 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:37:48.198 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:48.198 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:48.198 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:48.198 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:48.198 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:48.198 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:48.198 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:37:48.198 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:37:48.198 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:37:48.198 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:48.198 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@472 -- # prepare_net_devs 00:37:48.198 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@434 -- # local -g is_hw=no 00:37:48.198 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@436 -- # remove_spdk_ns 00:37:48.198 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:48.198 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:48.198 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:48.198 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:37:48.198 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:37:48.198 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:37:48.198 14:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:50.100 14:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:50.101 14:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:37:50.101 14:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:50.101 14:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:50.101 14:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:50.101 14:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:50.101 14:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:50.101 14:53:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:37:50.101 14:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:50.101 14:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:37:50.101 14:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:37:50.101 14:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:37:50.101 14:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:37:50.101 14:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:37:50.101 14:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:37:50.101 14:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:50.101 14:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:50.101 14:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:50.101 14:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:50.101 14:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:50.101 14:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:50.101 14:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:50.101 14:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:50.101 14:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:50.101 14:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:50.101 14:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:50.101 14:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:37:50.101 14:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:37:50.101 14:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:37:50.101 14:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:37:50.101 14:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:37:50.101 14:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:37:50.101 14:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:37:50.101 14:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 
00:37:50.101 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:50.101 14:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:37:50.101 14:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:37:50.101 14:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:50.101 14:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:50.101 14:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:37:50.101 14:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:37:50.101 14:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:50.101 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:50.101 14:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:37:50.101 14:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:37:50.101 14:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:50.101 14:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:50.101 14:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:37:50.101 14:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:37:50.101 14:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:37:50.101 14:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:37:50.101 14:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:37:50.101 14:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:50.101 14:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:37:50.101 14:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:50.101 14:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ up == up ]] 00:37:50.101 14:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:37:50.101 14:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:50.101 14:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:50.101 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:50.101 14:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:37:50.101 14:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:37:50.101 14:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@407 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:50.101 14:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:37:50.101 14:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:50.101 14:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ up == up ]] 00:37:50.101 14:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:37:50.101 14:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:50.101 14:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:50.101 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:50.101 14:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:37:50.101 14:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:37:50.101 14:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # is_hw=yes 00:37:50.101 14:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:37:50.101 14:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:37:50.101 14:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:37:50.101 14:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:50.101 14:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:50.101 14:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:50.101 14:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:50.101 14:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:50.101 14:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:50.101 14:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:50.101 14:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:50.101 14:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:50.101 14:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:50.101 14:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:50.101 14:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:50.101 14:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:50.101 14:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:50.101 14:53:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:50.360 14:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:50.360 14:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:50.360 14:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:50.360 14:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:50.360 14:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:50.360 14:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:50.360 14:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:50.360 14:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:50.360 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:50.360 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:37:50.360 00:37:50.360 --- 10.0.0.2 ping statistics --- 00:37:50.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:50.360 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:37:50.361 14:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:50.361 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:50.361 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:37:50.361 00:37:50.361 --- 10.0.0.1 ping statistics --- 00:37:50.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:50.361 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:37:50.361 14:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:50.361 14:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # return 0 00:37:50.361 14:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:37:50.361 14:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:50.361 14:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:37:50.361 14:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:37:50.361 14:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:50.361 14:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:37:50.361 14:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:37:50.361 14:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:37:50.361 14:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:37:50.361 14:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:50.361 14:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:50.361 14:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@505 -- # nvmfpid=1557653 00:37:50.361 14:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:37:50.361 14:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@506 -- # waitforlisten 1557653 00:37:50.361 14:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 1557653 ']' 00:37:50.361 14:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:50.361 14:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:50.361 14:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:50.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:50.361 14:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:50.361 14:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:50.361 [2024-11-02 14:53:42.333863] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:37:50.361 [2024-11-02 14:53:42.334950] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:37:50.361 [2024-11-02 14:53:42.335020] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:50.361 [2024-11-02 14:53:42.405297] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:50.620 [2024-11-02 14:53:42.495727] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:50.620 [2024-11-02 14:53:42.495793] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:50.620 [2024-11-02 14:53:42.495820] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:50.620 [2024-11-02 14:53:42.495833] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:50.620 [2024-11-02 14:53:42.495845] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:50.620 [2024-11-02 14:53:42.495878] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:37:50.620 [2024-11-02 14:53:42.585603] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:50.620 [2024-11-02 14:53:42.585978] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:50.620 14:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:50.620 14:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:37:50.620 14:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:37:50.620 14:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:50.620 14:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:50.620 14:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:50.620 14:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:50.878 [2024-11-02 14:53:42.888496] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:50.878 14:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:37:50.878 14:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:50.878 14:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:50.878 14:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:50.878 ************************************ 00:37:50.878 START TEST lvs_grow_clean 00:37:50.878 ************************************ 00:37:50.878 14:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # 
lvs_grow 00:37:50.878 14:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:37:50.878 14:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:37:50.878 14:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:37:50.878 14:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:37:50.878 14:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:37:50.878 14:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:37:50.878 14:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:51.136 14:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:51.136 14:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:37:51.394 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:37:51.394 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:37:51.653 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=16f76c14-c7cb-4926-abe8-fa44c39df8a5 00:37:51.653 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 16f76c14-c7cb-4926-abe8-fa44c39df8a5 00:37:51.653 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:37:51.911 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:37:51.911 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:37:51.911 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 16f76c14-c7cb-4926-abe8-fa44c39df8a5 lvol 150 00:37:52.169 14:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=0e5b1bf0-bd6a-4d75-807c-6a1f11c9201b 00:37:52.169 14:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:52.169 14:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:37:52.428 [2024-11-02 14:53:44.296376] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:37:52.428 [2024-11-02 14:53:44.296480] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:37:52.428 true 00:37:52.428 14:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:37:52.428 14:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 16f76c14-c7cb-4926-abe8-fa44c39df8a5 00:37:52.686 14:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:37:52.686 14:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:52.944 14:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0e5b1bf0-bd6a-4d75-807c-6a1f11c9201b 00:37:53.203 14:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:53.461 [2024-11-02 14:53:45.396758] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:53.461 14:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:53.719 14:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1558060 00:37:53.719 14:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:37:53.719 14:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:53.719 14:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1558060 /var/tmp/bdevperf.sock 00:37:53.719 14:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 1558060 ']' 00:37:53.719 14:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:37:53.719 14:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:53.719 14:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:53.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:53.719 14:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:53.719 14:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:37:53.719 [2024-11-02 14:53:45.737876] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:37:53.720 [2024-11-02 14:53:45.737952] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1558060 ] 00:37:53.978 [2024-11-02 14:53:45.797133] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:53.978 [2024-11-02 14:53:45.883408] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:37:53.978 14:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:53.978 14:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:37:53.978 14:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:37:54.545 Nvme0n1 00:37:54.545 14:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:37:54.803 [ 00:37:54.803 { 00:37:54.803 "name": "Nvme0n1", 00:37:54.803 "aliases": [ 00:37:54.803 "0e5b1bf0-bd6a-4d75-807c-6a1f11c9201b" 00:37:54.803 ], 00:37:54.803 "product_name": "NVMe disk", 00:37:54.803 "block_size": 4096, 00:37:54.803 "num_blocks": 38912, 00:37:54.803 "uuid": "0e5b1bf0-bd6a-4d75-807c-6a1f11c9201b", 00:37:54.803 "numa_id": 0, 00:37:54.803 "assigned_rate_limits": { 00:37:54.803 "rw_ios_per_sec": 0, 00:37:54.803 "rw_mbytes_per_sec": 0, 00:37:54.803 "r_mbytes_per_sec": 0, 00:37:54.803 "w_mbytes_per_sec": 0 00:37:54.803 }, 00:37:54.803 "claimed": false, 00:37:54.803 "zoned": false, 00:37:54.803 "supported_io_types": { 00:37:54.803 "read": true, 00:37:54.803 "write": true, 00:37:54.803 "unmap": true, 00:37:54.803 "flush": true, 00:37:54.803 "reset": true, 00:37:54.803 "nvme_admin": true, 00:37:54.803 "nvme_io": true, 00:37:54.803 "nvme_io_md": false, 00:37:54.803 "write_zeroes": true, 00:37:54.803 "zcopy": false, 00:37:54.803 "get_zone_info": false, 00:37:54.803 "zone_management": false, 00:37:54.803 "zone_append": false, 00:37:54.803 "compare": true, 00:37:54.803 "compare_and_write": true, 00:37:54.803 "abort": true, 00:37:54.803 "seek_hole": false, 00:37:54.803 "seek_data": false, 00:37:54.803 "copy": true, 
00:37:54.803 "nvme_iov_md": false 00:37:54.803 }, 00:37:54.803 "memory_domains": [ 00:37:54.803 { 00:37:54.803 "dma_device_id": "system", 00:37:54.803 "dma_device_type": 1 00:37:54.803 } 00:37:54.803 ], 00:37:54.803 "driver_specific": { 00:37:54.803 "nvme": [ 00:37:54.803 { 00:37:54.803 "trid": { 00:37:54.803 "trtype": "TCP", 00:37:54.803 "adrfam": "IPv4", 00:37:54.803 "traddr": "10.0.0.2", 00:37:54.803 "trsvcid": "4420", 00:37:54.803 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:37:54.803 }, 00:37:54.803 "ctrlr_data": { 00:37:54.803 "cntlid": 1, 00:37:54.803 "vendor_id": "0x8086", 00:37:54.803 "model_number": "SPDK bdev Controller", 00:37:54.803 "serial_number": "SPDK0", 00:37:54.803 "firmware_revision": "24.09.1", 00:37:54.803 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:54.803 "oacs": { 00:37:54.803 "security": 0, 00:37:54.803 "format": 0, 00:37:54.803 "firmware": 0, 00:37:54.803 "ns_manage": 0 00:37:54.803 }, 00:37:54.803 "multi_ctrlr": true, 00:37:54.803 "ana_reporting": false 00:37:54.803 }, 00:37:54.803 "vs": { 00:37:54.803 "nvme_version": "1.3" 00:37:54.803 }, 00:37:54.803 "ns_data": { 00:37:54.803 "id": 1, 00:37:54.803 "can_share": true 00:37:54.803 } 00:37:54.803 } 00:37:54.803 ], 00:37:54.803 "mp_policy": "active_passive" 00:37:54.803 } 00:37:54.803 } 00:37:54.803 ] 00:37:54.803 14:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1558175 00:37:54.803 14:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:37:54.803 14:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:37:55.062 Running I/O for 10 seconds... 
00:37:55.997 Latency(us) 00:37:55.997 [2024-11-02T13:53:48.052Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:55.997 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:55.997 Nvme0n1 : 1.00 13696.00 53.50 0.00 0.00 0.00 0.00 0.00 00:37:55.997 [2024-11-02T13:53:48.052Z] =================================================================================================================== 00:37:55.997 [2024-11-02T13:53:48.052Z] Total : 13696.00 53.50 0.00 0.00 0.00 0.00 0.00 00:37:55.997 00:37:56.932 14:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 16f76c14-c7cb-4926-abe8-fa44c39df8a5 00:37:56.932 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:56.932 Nvme0n1 : 2.00 13855.50 54.12 0.00 0.00 0.00 0.00 0.00 00:37:56.932 [2024-11-02T13:53:48.987Z] =================================================================================================================== 00:37:56.932 [2024-11-02T13:53:48.987Z] Total : 13855.50 54.12 0.00 0.00 0.00 0.00 0.00 00:37:56.932 00:37:57.191 true 00:37:57.191 14:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 16f76c14-c7cb-4926-abe8-fa44c39df8a5 00:37:57.191 14:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:37:57.449 14:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:37:57.449 14:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:37:57.449 14:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1558175 00:37:58.016 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:58.016 Nvme0n1 : 3.00 13928.33 54.41 0.00 0.00 0.00 0.00 0.00 00:37:58.016 [2024-11-02T13:53:50.071Z] =================================================================================================================== 00:37:58.016 [2024-11-02T13:53:50.071Z] Total : 13928.33 54.41 0.00 0.00 0.00 0.00 0.00 00:37:58.016 00:37:58.951 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:58.952 Nvme0n1 : 4.00 14012.00 54.73 0.00 0.00 0.00 0.00 0.00 00:37:58.952 [2024-11-02T13:53:51.007Z] =================================================================================================================== 00:37:58.952 [2024-11-02T13:53:51.007Z] Total : 14012.00 54.73 0.00 0.00 0.00 0.00 0.00 00:37:58.952 00:37:59.887 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:59.887 Nvme0n1 : 5.00 14042.80 54.85 0.00 0.00 0.00 0.00 0.00 00:37:59.887 [2024-11-02T13:53:51.942Z] =================================================================================================================== 00:37:59.887 [2024-11-02T13:53:51.942Z] Total : 14042.80 54.85 0.00 0.00 0.00 0.00 0.00 00:37:59.887 00:38:01.263 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:01.263 Nvme0n1 : 6.00 14035.67 54.83 0.00 0.00 0.00 0.00 0.00 00:38:01.263 [2024-11-02T13:53:53.318Z] 
=================================================================================================================== 00:38:01.263 [2024-11-02T13:53:53.318Z] Total : 14035.67 54.83 0.00 0.00 0.00 0.00 0.00 00:38:01.263 00:38:02.198 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:02.198 Nvme0n1 : 7.00 14042.00 54.85 0.00 0.00 0.00 0.00 0.00 00:38:02.198 [2024-11-02T13:53:54.253Z] =================================================================================================================== 00:38:02.198 [2024-11-02T13:53:54.253Z] Total : 14042.00 54.85 0.00 0.00 0.00 0.00 0.00 00:38:02.198 00:38:03.133 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:03.133 Nvme0n1 : 8.00 14054.62 54.90 0.00 0.00 0.00 0.00 0.00 00:38:03.133 [2024-11-02T13:53:55.188Z] =================================================================================================================== 00:38:03.133 [2024-11-02T13:53:55.188Z] Total : 14054.62 54.90 0.00 0.00 0.00 0.00 0.00 00:38:03.133 00:38:04.069 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:04.069 Nvme0n1 : 9.00 14064.67 54.94 0.00 0.00 0.00 0.00 0.00 00:38:04.069 [2024-11-02T13:53:56.124Z] =================================================================================================================== 00:38:04.069 [2024-11-02T13:53:56.124Z] Total : 14064.67 54.94 0.00 0.00 0.00 0.00 0.00 00:38:04.069 00:38:05.003 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:05.003 Nvme0n1 : 10.00 14066.20 54.95 0.00 0.00 0.00 0.00 0.00 00:38:05.003 [2024-11-02T13:53:57.058Z] =================================================================================================================== 00:38:05.003 [2024-11-02T13:53:57.058Z] Total : 14066.20 54.95 0.00 0.00 0.00 0.00 0.00 00:38:05.003 00:38:05.003 00:38:05.003 Latency(us) 00:38:05.003 [2024-11-02T13:53:57.058Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:05.003 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:05.003 Nvme0n1 : 10.01 14068.89 54.96 0.00 0.00 9092.80 5412.79 19612.25 00:38:05.003 [2024-11-02T13:53:57.058Z] =================================================================================================================== 00:38:05.003 [2024-11-02T13:53:57.058Z] Total : 14068.89 54.96 0.00 0.00 9092.80 5412.79 19612.25 00:38:05.003 { 00:38:05.003 "results": [ 00:38:05.003 { 00:38:05.003 "job": "Nvme0n1", 00:38:05.003 "core_mask": "0x2", 00:38:05.003 "workload": "randwrite", 00:38:05.003 "status": "finished", 00:38:05.003 "queue_depth": 128, 00:38:05.003 "io_size": 4096, 00:38:05.003 "runtime": 10.007186, 00:38:05.003 "iops": 14068.890095577319, 00:38:05.003 "mibps": 54.9566019358489, 00:38:05.003 "io_failed": 0, 00:38:05.003 "io_timeout": 0, 00:38:05.003 "avg_latency_us": 9092.801356493648, 00:38:05.003 "min_latency_us": 5412.788148148148, 00:38:05.003 "max_latency_us": 19612.254814814816 00:38:05.003 } 00:38:05.003 ], 00:38:05.003 "core_count": 1 00:38:05.003 } 00:38:05.003 14:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1558060 00:38:05.003 14:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 1558060 ']' 00:38:05.003 14:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 1558060 
00:38:05.004 14:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:38:05.004 14:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:05.004 14:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1558060 00:38:05.004 14:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:38:05.004 14:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:38:05.004 14:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1558060' 00:38:05.004 killing process with pid 1558060 00:38:05.004 14:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 1558060 00:38:05.004 Received shutdown signal, test time was about 10.000000 seconds 00:38:05.004 00:38:05.004 Latency(us) 00:38:05.004 [2024-11-02T13:53:57.059Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:05.004 [2024-11-02T13:53:57.059Z] =================================================================================================================== 00:38:05.004 [2024-11-02T13:53:57.059Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:05.004 14:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 1558060 00:38:05.262 14:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:05.521 14:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:05.779 14:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 16f76c14-c7cb-4926-abe8-fa44c39df8a5 00:38:05.779 14:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:38:06.038 14:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:38:06.038 14:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:38:06.038 14:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:06.296 [2024-11-02 14:53:58.300450] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:38:06.296 14:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 16f76c14-c7cb-4926-abe8-fa44c39df8a5 
00:38:06.296 14:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:38:06.296 14:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 16f76c14-c7cb-4926-abe8-fa44c39df8a5 00:38:06.296 14:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:06.296 14:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:06.296 14:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:06.296 14:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:06.296 14:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:06.296 14:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:06.296 14:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:06.296 14:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:38:06.296 14:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 16f76c14-c7cb-4926-abe8-fa44c39df8a5 00:38:06.555 request: 00:38:06.555 { 00:38:06.555 "uuid": "16f76c14-c7cb-4926-abe8-fa44c39df8a5", 00:38:06.555 "method": "bdev_lvol_get_lvstores", 00:38:06.555 "req_id": 1 00:38:06.555 } 00:38:06.555 Got JSON-RPC error response 00:38:06.555 response: 00:38:06.555 { 00:38:06.555 "code": -19, 00:38:06.555 "message": "No such device" 00:38:06.555 } 00:38:06.555 14:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:38:06.555 14:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:06.555 14:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:06.555 14:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:06.555 14:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:07.122 aio_bdev 00:38:07.122 14:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
0e5b1bf0-bd6a-4d75-807c-6a1f11c9201b 00:38:07.122 14:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=0e5b1bf0-bd6a-4d75-807c-6a1f11c9201b 00:38:07.122 14:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:38:07.122 14:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:38:07.122 14:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:38:07.122 14:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:38:07.122 14:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:07.122 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0e5b1bf0-bd6a-4d75-807c-6a1f11c9201b -t 2000 00:38:07.381 [ 00:38:07.381 { 00:38:07.381 "name": "0e5b1bf0-bd6a-4d75-807c-6a1f11c9201b", 00:38:07.381 "aliases": [ 00:38:07.381 "lvs/lvol" 00:38:07.381 ], 00:38:07.381 "product_name": "Logical Volume", 00:38:07.381 "block_size": 4096, 00:38:07.381 "num_blocks": 38912, 00:38:07.381 "uuid": "0e5b1bf0-bd6a-4d75-807c-6a1f11c9201b", 00:38:07.381 "assigned_rate_limits": { 00:38:07.381 "rw_ios_per_sec": 0, 00:38:07.381 "rw_mbytes_per_sec": 0, 00:38:07.381 "r_mbytes_per_sec": 0, 00:38:07.381 "w_mbytes_per_sec": 0 00:38:07.381 }, 00:38:07.381 "claimed": false, 00:38:07.381 "zoned": false, 00:38:07.381 "supported_io_types": { 00:38:07.381 "read": true, 00:38:07.381 "write": true, 00:38:07.381 "unmap": true, 00:38:07.381 "flush": false, 00:38:07.381 "reset": true, 00:38:07.381 "nvme_admin": false, 00:38:07.381 "nvme_io": false, 00:38:07.381 "nvme_io_md": false, 00:38:07.381 "write_zeroes": true, 00:38:07.381 "zcopy": false, 00:38:07.381 "get_zone_info": false, 00:38:07.381 "zone_management": false, 00:38:07.381 "zone_append": false, 00:38:07.381 "compare": false, 00:38:07.381 "compare_and_write": false, 00:38:07.381 "abort": false, 00:38:07.381 "seek_hole": true, 00:38:07.381 "seek_data": true, 00:38:07.381 "copy": false, 00:38:07.381 "nvme_iov_md": false 00:38:07.381 }, 00:38:07.381 "driver_specific": { 00:38:07.381 "lvol": { 00:38:07.381 "lvol_store_uuid": "16f76c14-c7cb-4926-abe8-fa44c39df8a5", 00:38:07.381 "base_bdev": "aio_bdev", 00:38:07.381 "thin_provision": false, 00:38:07.381 "num_allocated_clusters": 38, 00:38:07.381 "snapshot": false, 00:38:07.381 "clone": false, 00:38:07.381 "esnap_clone": false 00:38:07.381 } 00:38:07.381 } 00:38:07.381 } 00:38:07.381 ] 00:38:07.381 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:38:07.381 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 16f76c14-c7cb-4926-abe8-fa44c39df8a5 00:38:07.381 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:38:07.948 14:53:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:38:07.948 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 16f76c14-c7cb-4926-abe8-fa44c39df8a5 00:38:07.948 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:38:07.948 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:38:07.948 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0e5b1bf0-bd6a-4d75-807c-6a1f11c9201b 00:38:08.206 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 16f76c14-c7cb-4926-abe8-fa44c39df8a5 00:38:08.774 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:09.033 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:09.033 00:38:09.033 real 0m17.933s 00:38:09.033 user 0m17.546s 00:38:09.033 sys 0m1.910s 00:38:09.033 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:09.033 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:38:09.033 ************************************ 00:38:09.033 END TEST lvs_grow_clean 00:38:09.033 ************************************ 00:38:09.033 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:38:09.033 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:38:09.033 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:09.033 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:09.033 ************************************ 00:38:09.033 START TEST lvs_grow_dirty 00:38:09.033 ************************************ 00:38:09.033 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:38:09.033 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:38:09.033 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:38:09.033 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:38:09.033 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:38:09.033 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:38:09.033 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:38:09.033 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:09.033 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:09.033 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:09.291 14:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:38:09.291 14:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:38:09.549 14:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=179d1648-0b57-4d6a-861a-3c0faea1bcf3 00:38:09.549 14:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 179d1648-0b57-4d6a-861a-3c0faea1bcf3 00:38:09.549 14:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:38:09.807 14:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:38:09.807 14:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:38:09.807 14:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 179d1648-0b57-4d6a-861a-3c0faea1bcf3 lvol 150 00:38:10.066 14:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=c87f1518-6698-457a-8541-b678b51f2b54 00:38:10.066 14:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:10.066 14:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:38:10.325 [2024-11-02 14:54:02.340387] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:38:10.325 [2024-11-02 14:54:02.340502] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:38:10.325 true 00:38:10.325 14:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 179d1648-0b57-4d6a-861a-3c0faea1bcf3 00:38:10.325 14:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:38:10.583 14:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:38:10.583 14:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:38:11.150 14:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c87f1518-6698-457a-8541-b678b51f2b54 00:38:11.150 14:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:11.408 [2024-11-02 14:54:03.424679] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:11.408 14:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:11.667 14:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1560304 00:38:11.667 14:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:38:11.667 14:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:11.667 14:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1560304 /var/tmp/bdevperf.sock 00:38:11.667 14:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1560304 ']' 00:38:11.667 14:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:11.667 14:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:11.667 14:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:11.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
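Before the dirty-grow run starts, the log above shows the full provisioning sequence: a 200 MiB aio-backed lvstore with 4 MiB clusters, a 150 MiB lvol, a truncate plus rescan of the backing file to 400 MiB, and the NVMe/TCP export that bdevperf attaches to. The following is a hedged reconstruction of that sequence in bash, taken from the rpc.py calls visible in the log; the network-namespace wrapping and the test-suite helper functions are omitted:

#!/usr/bin/env bash
# Sketch of the lvs_grow_dirty setup shown above (not the test script itself).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
AIO="$SPDK"/test/nvmf/target/aio_bdev
RPC="$SPDK"/scripts/rpc.py

rm -f "$AIO"
truncate -s 200M "$AIO"                       # 200 MiB backing file -> 49 data clusters
"$RPC" bdev_aio_create "$AIO" aio_bdev 4096
lvs=$("$RPC" bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)
lvol=$("$RPC" bdev_lvol_create -u "$lvs" lvol 150)   # size 150; reported as 38912 4K blocks (38 clusters)

truncate -s 400M "$AIO"                       # enlarge the file on disk...
"$RPC" bdev_aio_rescan aio_bdev               # ...the lvstore still reports 49 clusters
                                              # until bdev_lvol_grow_lvstore runs later

# Export the lvol over NVMe/TCP so bdevperf can attach from the host side.
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420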
00:38:11.667 14:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:11.667 14:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:11.954 [2024-11-02 14:54:03.746660] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:38:11.954 [2024-11-02 14:54:03.746738] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1560304 ] 00:38:11.954 [2024-11-02 14:54:03.805463] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:11.954 [2024-11-02 14:54:03.892991] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:38:12.255 14:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:12.255 14:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:38:12.255 14:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:38:12.513 Nvme0n1 00:38:12.513 14:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:38:12.772 [ 00:38:12.772 { 00:38:12.772 "name": "Nvme0n1", 00:38:12.772 "aliases": [ 00:38:12.772 "c87f1518-6698-457a-8541-b678b51f2b54" 00:38:12.772 ], 00:38:12.772 "product_name": "NVMe disk", 00:38:12.772 "block_size": 4096, 00:38:12.772 "num_blocks": 38912, 00:38:12.772 "uuid": "c87f1518-6698-457a-8541-b678b51f2b54", 00:38:12.772 "numa_id": 0, 00:38:12.772 "assigned_rate_limits": { 00:38:12.772 "rw_ios_per_sec": 0, 00:38:12.772 "rw_mbytes_per_sec": 0, 00:38:12.772 "r_mbytes_per_sec": 0, 00:38:12.772 "w_mbytes_per_sec": 0 00:38:12.772 }, 00:38:12.772 "claimed": false, 00:38:12.772 "zoned": false, 00:38:12.772 "supported_io_types": { 00:38:12.772 "read": true, 00:38:12.772 "write": true, 00:38:12.772 "unmap": true, 00:38:12.772 "flush": true, 00:38:12.772 "reset": true, 00:38:12.772 "nvme_admin": true, 00:38:12.772 "nvme_io": true, 00:38:12.772 "nvme_io_md": false, 00:38:12.772 "write_zeroes": true, 00:38:12.772 "zcopy": false, 00:38:12.772 "get_zone_info": false, 00:38:12.772 "zone_management": false, 00:38:12.772 "zone_append": false, 00:38:12.772 "compare": true, 00:38:12.772 "compare_and_write": true, 00:38:12.772 "abort": true, 00:38:12.772 "seek_hole": false, 00:38:12.772 "seek_data": false, 00:38:12.772 "copy": true, 00:38:12.772 "nvme_iov_md": false 00:38:12.772 }, 00:38:12.772 "memory_domains": [ 00:38:12.772 { 00:38:12.772 "dma_device_id": "system", 00:38:12.772 "dma_device_type": 1 00:38:12.772 } 00:38:12.772 ], 00:38:12.772 "driver_specific": { 00:38:12.772 "nvme": [ 00:38:12.772 { 00:38:12.772 "trid": { 00:38:12.772 "trtype": "TCP", 00:38:12.772 "adrfam": "IPv4", 00:38:12.772 "traddr": "10.0.0.2", 00:38:12.772 "trsvcid": "4420", 00:38:12.772 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:38:12.772 }, 00:38:12.772 
"ctrlr_data": { 00:38:12.772 "cntlid": 1, 00:38:12.772 "vendor_id": "0x8086", 00:38:12.772 "model_number": "SPDK bdev Controller", 00:38:12.772 "serial_number": "SPDK0", 00:38:12.772 "firmware_revision": "24.09.1", 00:38:12.772 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:12.772 "oacs": { 00:38:12.772 "security": 0, 00:38:12.772 "format": 0, 00:38:12.772 "firmware": 0, 00:38:12.772 "ns_manage": 0 00:38:12.772 }, 00:38:12.772 "multi_ctrlr": true, 00:38:12.772 "ana_reporting": false 00:38:12.772 }, 00:38:12.772 "vs": { 00:38:12.772 "nvme_version": "1.3" 00:38:12.772 }, 00:38:12.772 "ns_data": { 00:38:12.772 "id": 1, 00:38:12.772 "can_share": true 00:38:12.772 } 00:38:12.772 } 00:38:12.772 ], 00:38:12.772 "mp_policy": "active_passive" 00:38:12.772 } 00:38:12.772 } 00:38:12.772 ] 00:38:12.772 14:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1560437 00:38:12.772 14:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:12.772 14:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:38:12.772 Running I/O for 10 seconds... 00:38:13.708 Latency(us) 00:38:13.708 [2024-11-02T13:54:05.763Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:13.708 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:13.708 Nvme0n1 : 1.00 13692.00 53.48 0.00 0.00 0.00 0.00 0.00 00:38:13.708 [2024-11-02T13:54:05.763Z] =================================================================================================================== 00:38:13.708 [2024-11-02T13:54:05.763Z] Total : 13692.00 53.48 0.00 0.00 0.00 0.00 0.00 00:38:13.708 00:38:14.643 14:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 179d1648-0b57-4d6a-861a-3c0faea1bcf3 00:38:14.902 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:14.902 Nvme0n1 : 2.00 14297.00 55.85 0.00 0.00 0.00 0.00 0.00 00:38:14.902 [2024-11-02T13:54:06.957Z] =================================================================================================================== 00:38:14.902 [2024-11-02T13:54:06.957Z] Total : 14297.00 55.85 0.00 0.00 0.00 0.00 0.00 00:38:14.902 00:38:15.160 true 00:38:15.160 14:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 179d1648-0b57-4d6a-861a-3c0faea1bcf3 00:38:15.160 14:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:38:15.419 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:38:15.419 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:38:15.419 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1560437 00:38:15.986 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:38:15.986 Nvme0n1 : 3.00 14349.67 56.05 0.00 0.00 0.00 0.00 0.00 00:38:15.986 [2024-11-02T13:54:08.041Z] =================================================================================================================== 00:38:15.986 [2024-11-02T13:54:08.041Z] Total : 14349.67 56.05 0.00 0.00 0.00 0.00 0.00 00:38:15.986 00:38:16.922 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:16.922 Nvme0n1 : 4.00 14301.25 55.86 0.00 0.00 0.00 0.00 0.00 00:38:16.922 [2024-11-02T13:54:08.977Z] =================================================================================================================== 00:38:16.922 [2024-11-02T13:54:08.977Z] Total : 14301.25 55.86 0.00 0.00 0.00 0.00 0.00 00:38:16.922 00:38:17.856 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:17.856 Nvme0n1 : 5.00 14294.00 55.84 0.00 0.00 0.00 0.00 0.00 00:38:17.856 [2024-11-02T13:54:09.911Z] =================================================================================================================== 00:38:17.856 [2024-11-02T13:54:09.911Z] Total : 14294.00 55.84 0.00 0.00 0.00 0.00 0.00 00:38:17.856 00:38:18.792 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:18.792 Nvme0n1 : 6.00 14278.33 55.77 0.00 0.00 0.00 0.00 0.00 00:38:18.792 [2024-11-02T13:54:10.847Z] =================================================================================================================== 00:38:18.792 [2024-11-02T13:54:10.847Z] Total : 14278.33 55.77 0.00 0.00 0.00 0.00 0.00 00:38:18.792 00:38:19.727 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:19.727 Nvme0n1 : 7.00 14303.14 55.87 0.00 0.00 0.00 0.00 0.00 00:38:19.727 [2024-11-02T13:54:11.782Z] =================================================================================================================== 00:38:19.727 [2024-11-02T13:54:11.782Z] Total : 14303.14 55.87 0.00 0.00 0.00 0.00 0.00 00:38:19.727 00:38:21.102 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:21.102 Nvme0n1 : 8.00 14330.25 55.98 0.00 0.00 0.00 0.00 0.00 00:38:21.102 [2024-11-02T13:54:13.157Z] =================================================================================================================== 00:38:21.102 [2024-11-02T13:54:13.157Z] Total : 14330.25 55.98 0.00 0.00 0.00 0.00 0.00 00:38:21.102 00:38:22.038 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:22.038 Nvme0n1 : 9.00 14435.67 56.39 0.00 0.00 0.00 0.00 0.00 00:38:22.038 [2024-11-02T13:54:14.093Z] =================================================================================================================== 00:38:22.038 [2024-11-02T13:54:14.093Z] Total : 14435.67 56.39 0.00 0.00 0.00 0.00 0.00 00:38:22.038 00:38:22.970 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:22.970 Nvme0n1 : 10.00 14456.50 56.47 0.00 0.00 0.00 0.00 0.00 00:38:22.970 [2024-11-02T13:54:15.025Z] =================================================================================================================== 00:38:22.970 [2024-11-02T13:54:15.025Z] Total : 14456.50 56.47 0.00 0.00 0.00 0.00 0.00 00:38:22.970 00:38:22.970 00:38:22.970 Latency(us) 00:38:22.970 [2024-11-02T13:54:15.025Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:22.970 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:22.970 Nvme0n1 : 10.01 14458.49 56.48 0.00 0.00 8848.28 4708.88 20583.16 
00:38:22.970 [2024-11-02T13:54:15.025Z] =================================================================================================================== 00:38:22.970 [2024-11-02T13:54:15.025Z] Total : 14458.49 56.48 0.00 0.00 8848.28 4708.88 20583.16 00:38:22.970 { 00:38:22.970 "results": [ 00:38:22.970 { 00:38:22.970 "job": "Nvme0n1", 00:38:22.970 "core_mask": "0x2", 00:38:22.970 "workload": "randwrite", 00:38:22.970 "status": "finished", 00:38:22.970 "queue_depth": 128, 00:38:22.970 "io_size": 4096, 00:38:22.970 "runtime": 10.007474, 00:38:22.970 "iops": 14458.493721792333, 00:38:22.970 "mibps": 56.4784911007513, 00:38:22.970 "io_failed": 0, 00:38:22.970 "io_timeout": 0, 00:38:22.970 "avg_latency_us": 8848.284483269943, 00:38:22.970 "min_latency_us": 4708.882962962963, 00:38:22.970 "max_latency_us": 20583.158518518518 00:38:22.970 } 00:38:22.970 ], 00:38:22.970 "core_count": 1 00:38:22.970 } 00:38:22.970 14:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1560304 00:38:22.970 14:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 1560304 ']' 00:38:22.970 14:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 1560304 00:38:22.970 14:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:38:22.970 14:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:22.970 14:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1560304 00:38:22.970 14:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:38:22.970 14:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:38:22.970 14:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1560304' 00:38:22.970 killing process with pid 1560304 00:38:22.970 14:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 1560304 00:38:22.970 Received shutdown signal, test time was about 10.000000 seconds 00:38:22.970 00:38:22.970 Latency(us) 00:38:22.970 [2024-11-02T13:54:15.025Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:22.970 [2024-11-02T13:54:15.025Z] =================================================================================================================== 00:38:22.970 [2024-11-02T13:54:15.025Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:22.970 14:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 1560304 00:38:23.228 14:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:23.486 14:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:38:23.745 14:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 179d1648-0b57-4d6a-861a-3c0faea1bcf3 00:38:23.745 14:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:38:24.003 14:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:38:24.003 14:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:38:24.003 14:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1557653 00:38:24.003 14:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1557653 00:38:24.003 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1557653 Killed "${NVMF_APP[@]}" "$@" 00:38:24.003 14:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:38:24.003 14:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:38:24.003 14:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:38:24.003 14:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:24.003 14:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:24.003 14:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # nvmfpid=1562255 00:38:24.003 14:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:38:24.003 14:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # waitforlisten 1562255 00:38:24.003 14:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1562255 ']' 00:38:24.003 14:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:24.003 14:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:24.003 14:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:24.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
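This is what makes it the "dirty" variant: with 61 of 99 clusters still free, the first nvmf target (pid 1557653) is killed with SIGKILL so the lvstore is never cleanly unloaded, and a fresh target is started in interrupt mode for the recovery phase. A short sketch of that step under the same assumptions as the earlier sketches (the CI job runs the target inside the cvl_0_0_ns_spdk network namespace):

# Sketch of the dirty-shutdown step shown above.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
nvmfpid=1557653                      # PID of the first nvmf_tgt in this log run

kill -9 "$nvmfpid"                   # deliberately skip a clean lvstore unload
wait "$nvmfpid" 2>/dev/null || true  # reap it; the shell prints the "Killed" notice

# Restart the target (interrupt mode, single core) and wait for /var/tmp/spdk.sock.
ip netns exec cvl_0_0_ns_spdk \
  "$SPDK"/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
nvmfpid=$!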
00:38:24.003 14:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:24.003 14:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:24.003 [2024-11-02 14:54:15.990392] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:24.003 [2024-11-02 14:54:15.991429] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:38:24.003 [2024-11-02 14:54:15.991485] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:24.262 [2024-11-02 14:54:16.058963] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:24.262 [2024-11-02 14:54:16.143120] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:24.262 [2024-11-02 14:54:16.143176] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:24.262 [2024-11-02 14:54:16.143189] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:24.262 [2024-11-02 14:54:16.143200] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:24.262 [2024-11-02 14:54:16.143210] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:24.262 [2024-11-02 14:54:16.143238] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:38:24.262 [2024-11-02 14:54:16.226254] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:24.262 [2024-11-02 14:54:16.226621] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
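The blobstore recovery that follows is the point of the dirty case: re-registering the same backing file as aio_bdev makes the lvol examine path find metadata that was never cleanly unloaded and replay it (the "Performing recovery on blobstore" and "Recover: blob 0x0 / 0x1" notices just below). A minimal sketch of the trigger, again using only RPCs visible in the log:

# Sketch: recreate the aio bdev on the new target and wait for examine to
# finish; loading the lvstore off the dirty file is what runs blob recovery.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK"/scripts/rpc.py

"$RPC" bdev_aio_create "$SPDK"/test/nvmf/target/aio_bdev aio_bdev 4096
"$RPC" bdev_wait_for_examine
# The recovered lvol reappears under its old UUID:
"$RPC" bdev_get_bdevs -b c87f1518-6698-457a-8541-b678b51f2b54 -t 2000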
00:38:24.262 14:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:24.262 14:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:38:24.262 14:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:38:24.262 14:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:24.262 14:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:24.262 14:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:24.262 14:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:24.519 [2024-11-02 14:54:16.526337] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:38:24.519 [2024-11-02 14:54:16.526483] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:38:24.519 [2024-11-02 14:54:16.526544] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:38:24.519 14:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:38:24.519 14:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev c87f1518-6698-457a-8541-b678b51f2b54 00:38:24.519 14:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=c87f1518-6698-457a-8541-b678b51f2b54 00:38:24.519 14:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:38:24.519 14:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:38:24.519 14:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:38:24.519 14:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:38:24.519 14:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:24.776 14:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c87f1518-6698-457a-8541-b678b51f2b54 -t 2000 00:38:25.034 [ 00:38:25.034 { 00:38:25.034 "name": "c87f1518-6698-457a-8541-b678b51f2b54", 00:38:25.034 "aliases": [ 00:38:25.034 "lvs/lvol" 00:38:25.034 ], 00:38:25.034 "product_name": "Logical Volume", 00:38:25.034 "block_size": 4096, 00:38:25.034 "num_blocks": 38912, 00:38:25.034 "uuid": "c87f1518-6698-457a-8541-b678b51f2b54", 00:38:25.034 "assigned_rate_limits": { 00:38:25.034 "rw_ios_per_sec": 0, 00:38:25.034 "rw_mbytes_per_sec": 0, 00:38:25.034 
"r_mbytes_per_sec": 0, 00:38:25.034 "w_mbytes_per_sec": 0 00:38:25.034 }, 00:38:25.034 "claimed": false, 00:38:25.034 "zoned": false, 00:38:25.034 "supported_io_types": { 00:38:25.034 "read": true, 00:38:25.034 "write": true, 00:38:25.034 "unmap": true, 00:38:25.034 "flush": false, 00:38:25.034 "reset": true, 00:38:25.034 "nvme_admin": false, 00:38:25.035 "nvme_io": false, 00:38:25.035 "nvme_io_md": false, 00:38:25.035 "write_zeroes": true, 00:38:25.035 "zcopy": false, 00:38:25.035 "get_zone_info": false, 00:38:25.035 "zone_management": false, 00:38:25.035 "zone_append": false, 00:38:25.035 "compare": false, 00:38:25.035 "compare_and_write": false, 00:38:25.035 "abort": false, 00:38:25.035 "seek_hole": true, 00:38:25.035 "seek_data": true, 00:38:25.035 "copy": false, 00:38:25.035 "nvme_iov_md": false 00:38:25.035 }, 00:38:25.035 "driver_specific": { 00:38:25.035 "lvol": { 00:38:25.035 "lvol_store_uuid": "179d1648-0b57-4d6a-861a-3c0faea1bcf3", 00:38:25.035 "base_bdev": "aio_bdev", 00:38:25.035 "thin_provision": false, 00:38:25.035 "num_allocated_clusters": 38, 00:38:25.035 "snapshot": false, 00:38:25.035 "clone": false, 00:38:25.035 "esnap_clone": false 00:38:25.035 } 00:38:25.035 } 00:38:25.035 } 00:38:25.035 ] 00:38:25.293 14:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:38:25.293 14:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 179d1648-0b57-4d6a-861a-3c0faea1bcf3 00:38:25.293 14:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:38:25.551 14:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:38:25.551 14:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 179d1648-0b57-4d6a-861a-3c0faea1bcf3 00:38:25.551 14:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:38:25.809 14:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:38:25.809 14:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:26.067 [2024-11-02 14:54:17.903766] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:38:26.067 14:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 179d1648-0b57-4d6a-861a-3c0faea1bcf3 00:38:26.067 14:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:38:26.067 14:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 179d1648-0b57-4d6a-861a-3c0faea1bcf3 00:38:26.067 14:54:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:26.067 14:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:26.067 14:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:26.067 14:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:26.067 14:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:26.067 14:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:26.067 14:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:26.067 14:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:38:26.067 14:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 179d1648-0b57-4d6a-861a-3c0faea1bcf3 00:38:26.325 request: 00:38:26.325 { 00:38:26.325 "uuid": "179d1648-0b57-4d6a-861a-3c0faea1bcf3", 00:38:26.325 "method": "bdev_lvol_get_lvstores", 00:38:26.325 "req_id": 1 00:38:26.325 } 00:38:26.325 Got JSON-RPC error response 00:38:26.325 response: 00:38:26.325 { 00:38:26.325 "code": -19, 00:38:26.325 "message": "No such device" 00:38:26.325 } 00:38:26.325 14:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:38:26.325 14:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:26.325 14:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:26.325 14:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:26.325 14:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:26.583 aio_bdev 00:38:26.583 14:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev c87f1518-6698-457a-8541-b678b51f2b54 00:38:26.583 14:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=c87f1518-6698-457a-8541-b678b51f2b54 00:38:26.583 14:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:38:26.583 14:54:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:38:26.583 14:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:38:26.583 14:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:38:26.583 14:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:26.841 14:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c87f1518-6698-457a-8541-b678b51f2b54 -t 2000 00:38:27.099 [ 00:38:27.099 { 00:38:27.099 "name": "c87f1518-6698-457a-8541-b678b51f2b54", 00:38:27.099 "aliases": [ 00:38:27.099 "lvs/lvol" 00:38:27.099 ], 00:38:27.099 "product_name": "Logical Volume", 00:38:27.099 "block_size": 4096, 00:38:27.099 "num_blocks": 38912, 00:38:27.099 "uuid": "c87f1518-6698-457a-8541-b678b51f2b54", 00:38:27.099 "assigned_rate_limits": { 00:38:27.099 "rw_ios_per_sec": 0, 00:38:27.099 "rw_mbytes_per_sec": 0, 00:38:27.099 "r_mbytes_per_sec": 0, 00:38:27.099 "w_mbytes_per_sec": 0 00:38:27.099 }, 00:38:27.099 "claimed": false, 00:38:27.099 "zoned": false, 00:38:27.099 "supported_io_types": { 00:38:27.099 "read": true, 00:38:27.099 "write": true, 00:38:27.099 "unmap": true, 00:38:27.099 "flush": false, 00:38:27.099 "reset": true, 00:38:27.099 "nvme_admin": false, 00:38:27.099 "nvme_io": false, 00:38:27.099 "nvme_io_md": false, 00:38:27.099 "write_zeroes": true, 00:38:27.099 "zcopy": false, 00:38:27.099 "get_zone_info": false, 00:38:27.099 "zone_management": false, 00:38:27.099 "zone_append": false, 00:38:27.099 "compare": false, 00:38:27.099 "compare_and_write": false, 00:38:27.099 "abort": false, 00:38:27.099 "seek_hole": true, 00:38:27.099 "seek_data": true, 00:38:27.099 "copy": false, 00:38:27.099 "nvme_iov_md": false 00:38:27.099 }, 00:38:27.099 "driver_specific": { 00:38:27.099 "lvol": { 00:38:27.099 "lvol_store_uuid": "179d1648-0b57-4d6a-861a-3c0faea1bcf3", 00:38:27.099 "base_bdev": "aio_bdev", 00:38:27.099 "thin_provision": false, 00:38:27.099 "num_allocated_clusters": 38, 00:38:27.099 "snapshot": false, 00:38:27.099 "clone": false, 00:38:27.099 "esnap_clone": false 00:38:27.099 } 00:38:27.099 } 00:38:27.099 } 00:38:27.099 ] 00:38:27.099 14:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:38:27.099 14:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 179d1648-0b57-4d6a-861a-3c0faea1bcf3 00:38:27.099 14:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:38:27.357 14:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:38:27.358 14:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 179d1648-0b57-4d6a-861a-3c0faea1bcf3 00:38:27.358 14:54:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:38:27.616 14:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:38:27.616 14:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c87f1518-6698-457a-8541-b678b51f2b54 00:38:27.873 14:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 179d1648-0b57-4d6a-861a-3c0faea1bcf3 00:38:28.130 14:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:28.388 14:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:28.647 00:38:28.647 real 0m19.535s 00:38:28.647 user 0m36.537s 00:38:28.647 sys 0m4.726s 00:38:28.647 14:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:28.647 14:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:28.647 ************************************ 00:38:28.647 END TEST lvs_grow_dirty 00:38:28.647 ************************************ 00:38:28.647 14:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:38:28.647 14:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:38:28.647 14:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:38:28.647 14:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:38:28.647 14:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:38:28.647 14:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:38:28.647 14:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:38:28.647 14:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:38:28.647 14:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:38:28.647 nvmf_trace.0 00:38:28.647 14:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:38:28.647 14:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:38:28.647 14:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # nvmfcleanup 00:38:28.647 14:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
00:38:28.647 14:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:28.647 14:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:38:28.648 14:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:28.648 14:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:28.648 rmmod nvme_tcp 00:38:28.648 rmmod nvme_fabrics 00:38:28.648 rmmod nvme_keyring 00:38:28.648 14:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:28.648 14:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:38:28.648 14:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:38:28.648 14:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@513 -- # '[' -n 1562255 ']' 00:38:28.648 14:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@514 -- # killprocess 1562255 00:38:28.648 14:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 1562255 ']' 00:38:28.648 14:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 1562255 00:38:28.648 14:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:38:28.648 14:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:28.648 14:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1562255 00:38:28.648 14:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:28.648 14:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:28.648 14:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1562255' 00:38:28.648 killing process with pid 1562255 00:38:28.648 14:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 1562255 00:38:28.648 14:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 1562255 00:38:28.906 14:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:38:28.906 14:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:38:28.906 14:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:38:28.906 14:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:38:28.906 14:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-save 00:38:28.906 14:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:38:28.906 14:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-restore 00:38:28.906 14:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:28.906 14:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:28.906 14:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:28.906 14:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:28.906 14:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:31.439 14:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:31.439 00:38:31.439 real 0m43.009s 00:38:31.439 user 0m55.865s 00:38:31.439 sys 0m8.532s 00:38:31.439 14:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:31.439 14:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:31.439 ************************************ 00:38:31.439 END TEST nvmf_lvs_grow 00:38:31.439 ************************************ 00:38:31.439 14:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:38:31.439 14:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:38:31.439 14:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:31.439 14:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:31.439 ************************************ 00:38:31.439 START TEST nvmf_bdev_io_wait 00:38:31.439 ************************************ 00:38:31.439 14:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:38:31.439 * Looking for test storage... 
00:38:31.439 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:31.439 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:38:31.439 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:38:31.439 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:38:31.439 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:38:31.439 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:31.439 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:31.439 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:31.439 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:38:31.439 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:38:31.439 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:38:31.439 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:38:31.439 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:38:31.439 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:38:31.439 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:38:31.439 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:31.439 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:38:31.439 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:38:31.439 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:31.439 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:31.439 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:38:31.439 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:38:31.439 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:31.439 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:38:31.439 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:38:31.439 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:38:31.439 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:38:31.439 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:31.439 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:38:31.439 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:38:31.439 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:31.439 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:31.439 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:38:31.439 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:31.439 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:38:31.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:31.439 --rc genhtml_branch_coverage=1 00:38:31.439 --rc genhtml_function_coverage=1 00:38:31.439 --rc genhtml_legend=1 00:38:31.439 --rc geninfo_all_blocks=1 00:38:31.439 --rc geninfo_unexecuted_blocks=1 00:38:31.439 00:38:31.439 ' 00:38:31.439 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:38:31.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:31.439 --rc genhtml_branch_coverage=1 00:38:31.439 --rc genhtml_function_coverage=1 00:38:31.439 --rc genhtml_legend=1 00:38:31.439 --rc geninfo_all_blocks=1 00:38:31.439 --rc geninfo_unexecuted_blocks=1 00:38:31.439 00:38:31.439 ' 00:38:31.439 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:38:31.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:31.439 --rc genhtml_branch_coverage=1 00:38:31.439 --rc genhtml_function_coverage=1 00:38:31.439 --rc genhtml_legend=1 00:38:31.439 --rc geninfo_all_blocks=1 00:38:31.439 --rc geninfo_unexecuted_blocks=1 00:38:31.439 00:38:31.439 ' 00:38:31.439 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:38:31.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:31.439 --rc genhtml_branch_coverage=1 00:38:31.439 --rc genhtml_function_coverage=1 00:38:31.439 --rc genhtml_legend=1 00:38:31.439 --rc geninfo_all_blocks=1 00:38:31.439 --rc 
geninfo_unexecuted_blocks=1 00:38:31.439 00:38:31.439 ' 00:38:31.439 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:31.439 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:38:31.439 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:31.439 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:31.439 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:31.439 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:31.439 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:31.439 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:31.439 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:31.439 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:31.439 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:31.439 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:31.439 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:31.439 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:31.439 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:31.439 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:31.439 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:31.439 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:31.439 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:31.439 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:38:31.440 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:31.440 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:31.440 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:31.440 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:31.440 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:31.440 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:31.440 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:38:31.440 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:31.440 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:38:31.440 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:31.440 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:31.440 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:31.440 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:31.440 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:38:31.440 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:31.440 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:31.440 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:31.440 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:31.440 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:31.440 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:31.440 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:31.440 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:38:31.440 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:38:31.440 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:31.440 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # prepare_net_devs 00:38:31.440 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@434 -- # local -g is_hw=no 00:38:31.440 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # remove_spdk_ns 00:38:31.440 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:31.440 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:31.440 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:31.440 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:38:31.440 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:38:31.440 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:38:31.440 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:33.343 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:33.343 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:38:33.343 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:33.343 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:33.343 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:33.343 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:33.343 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:38:33.343 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:38:33.343 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:33.343 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:38:33.343 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:38:33.343 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:38:33.343 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:38:33.343 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:38:33.343 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:38:33.343 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:33.343 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:33.343 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:33.343 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:33.343 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:33.343 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:33.343 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:33.343 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:33.343 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:33.343 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:33.343 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:33.343 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:38:33.343 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:38:33.343 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:38:33.343 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:38:33.343 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:38:33.344 14:54:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:33.344 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:33.344 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ up == up ]] 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:33.344 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:38:33.344 14:54:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ up == up ]] 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:33.344 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # is_hw=yes 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:33.344 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:33.344 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:38:33.344 00:38:33.344 --- 10.0.0.2 ping statistics --- 00:38:33.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:33.344 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:33.344 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:33.344 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:38:33.344 00:38:33.344 --- 10.0.0.1 ping statistics --- 00:38:33.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:33.344 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # return 0 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # nvmfpid=1564776 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # waitforlisten 1564776 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 1564776 ']' 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:33.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:38:33.344 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:33.345 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:33.345 [2024-11-02 14:54:25.323900] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:33.345 [2024-11-02 14:54:25.325162] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:38:33.345 [2024-11-02 14:54:25.325224] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:33.603 [2024-11-02 14:54:25.397847] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:33.603 [2024-11-02 14:54:25.492132] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:33.603 [2024-11-02 14:54:25.492178] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:33.603 [2024-11-02 14:54:25.492207] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:33.603 [2024-11-02 14:54:25.492222] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:33.603 [2024-11-02 14:54:25.492234] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:33.603 [2024-11-02 14:54:25.492331] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:38:33.603 [2024-11-02 14:54:25.492356] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:38:33.603 [2024-11-02 14:54:25.492405] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:38:33.603 [2024-11-02 14:54:25.492408] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:38:33.603 [2024-11-02 14:54:25.492904] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
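The target bring-up traced above reduces to a short sequence; a minimal sketch, assuming the cvl_0_0_ns_spdk namespace and repository path shown in this log (the SPDK_ROOT variable and the polling loop are illustrative stand-ins for what nvmfappstart/waitforlisten do, not the helpers themselves):

  # Sketch only: condensed from the nvmfappstart trace above.
  SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path taken from the log

  # Launch nvmf_tgt inside the target namespace, 4 cores, interrupt mode,
  # deferring subsystem init until an explicit RPC (--wait-for-rpc).
  ip netns exec cvl_0_0_ns_spdk "$SPDK_ROOT/build/bin/nvmf_tgt" \
      -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &
  nvmfpid=$!

  # Poll the default RPC socket (/var/tmp/spdk.sock) until the app answers,
  # which is roughly what waitforlisten does in the trace.
  until "$SPDK_ROOT/scripts/rpc.py" -t 1 rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done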
00:38:33.603 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:33.603 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:38:33.603 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:38:33.603 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:33.603 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:33.603 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:33.603 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:38:33.603 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:33.603 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:33.603 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:33.603 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:38:33.603 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:33.603 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:33.603 [2024-11-02 14:54:25.634250] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:33.603 [2024-11-02 14:54:25.634504] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:33.603 [2024-11-02 14:54:25.635382] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:38:33.603 [2024-11-02 14:54:25.636170] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
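Condensed from the rpc_cmd calls traced here and in the lines that follow, the bdev_io_wait target setup amounts to the RPC sequence below (rpc.py shown without its full path; the comments are interpretive, not part of the trace):

  rpc.py bdev_set_options -p 5 -c 1        # tiny bdev_io pool/cache so I/O hits the wait path
  rpc.py framework_start_init              # finish the init deferred by --wait-for-rpc
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0          # 64 MiB malloc bdev, 512-byte blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420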
00:38:33.604 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:33.604 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:33.604 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:33.604 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:33.604 [2024-11-02 14:54:25.641153] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:33.604 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:33.604 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:33.604 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:33.604 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:33.863 Malloc0 00:38:33.863 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:33.863 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:33.863 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:33.863 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:33.863 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:33.863 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:33.863 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:33.863 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:33.863 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:33.863 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:33.863 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:33.863 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:33.863 [2024-11-02 14:54:25.705278] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:33.863 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:33.863 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1564810 00:38:33.863 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:38:33.863 14:54:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:38:33.863 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1564812 00:38:33.863 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:38:33.863 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:38:33.863 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:38:33.863 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:38:33.863 { 00:38:33.863 "params": { 00:38:33.863 "name": "Nvme$subsystem", 00:38:33.863 "trtype": "$TEST_TRANSPORT", 00:38:33.863 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:33.863 "adrfam": "ipv4", 00:38:33.863 "trsvcid": "$NVMF_PORT", 00:38:33.863 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:33.863 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:33.863 "hdgst": ${hdgst:-false}, 00:38:33.863 "ddgst": ${ddgst:-false} 00:38:33.863 }, 00:38:33.863 "method": "bdev_nvme_attach_controller" 00:38:33.863 } 00:38:33.863 EOF 00:38:33.863 )") 00:38:33.863 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:38:33.863 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:38:33.863 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1564814 00:38:33.863 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:38:33.863 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:38:33.863 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:38:33.863 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:38:33.863 { 00:38:33.863 "params": { 00:38:33.863 "name": "Nvme$subsystem", 00:38:33.863 "trtype": "$TEST_TRANSPORT", 00:38:33.863 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:33.863 "adrfam": "ipv4", 00:38:33.863 "trsvcid": "$NVMF_PORT", 00:38:33.863 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:33.863 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:33.863 "hdgst": ${hdgst:-false}, 00:38:33.863 "ddgst": ${ddgst:-false} 00:38:33.863 }, 00:38:33.863 "method": "bdev_nvme_attach_controller" 00:38:33.863 } 00:38:33.863 EOF 00:38:33.863 )") 00:38:33.863 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:38:33.863 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:38:33.863 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:38:33.863 
14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1564817 00:38:33.863 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:38:33.863 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:38:33.863 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:38:33.863 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:38:33.863 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:38:33.863 { 00:38:33.863 "params": { 00:38:33.863 "name": "Nvme$subsystem", 00:38:33.863 "trtype": "$TEST_TRANSPORT", 00:38:33.863 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:33.863 "adrfam": "ipv4", 00:38:33.863 "trsvcid": "$NVMF_PORT", 00:38:33.863 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:33.863 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:33.863 "hdgst": ${hdgst:-false}, 00:38:33.863 "ddgst": ${ddgst:-false} 00:38:33.863 }, 00:38:33.863 "method": "bdev_nvme_attach_controller" 00:38:33.863 } 00:38:33.863 EOF 00:38:33.863 )") 00:38:33.863 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:38:33.863 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:38:33.863 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:38:33.863 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:38:33.864 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:38:33.864 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:38:33.864 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:38:33.864 { 00:38:33.864 "params": { 00:38:33.864 "name": "Nvme$subsystem", 00:38:33.864 "trtype": "$TEST_TRANSPORT", 00:38:33.864 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:33.864 "adrfam": "ipv4", 00:38:33.864 "trsvcid": "$NVMF_PORT", 00:38:33.864 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:33.864 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:33.864 "hdgst": ${hdgst:-false}, 00:38:33.864 "ddgst": ${ddgst:-false} 00:38:33.864 }, 00:38:33.864 "method": "bdev_nvme_attach_controller" 00:38:33.864 } 00:38:33.864 EOF 00:38:33.864 )") 00:38:33.864 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:38:33.864 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1564810 00:38:33.864 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:38:33.864 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:38:33.864 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 
00:38:33.864 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:38:33.864 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:38:33.864 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:38:33.864 "params": { 00:38:33.864 "name": "Nvme1", 00:38:33.864 "trtype": "tcp", 00:38:33.864 "traddr": "10.0.0.2", 00:38:33.864 "adrfam": "ipv4", 00:38:33.864 "trsvcid": "4420", 00:38:33.864 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:33.864 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:33.864 "hdgst": false, 00:38:33.864 "ddgst": false 00:38:33.864 }, 00:38:33.864 "method": "bdev_nvme_attach_controller" 00:38:33.864 }' 00:38:33.864 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:38:33.864 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:38:33.864 "params": { 00:38:33.864 "name": "Nvme1", 00:38:33.864 "trtype": "tcp", 00:38:33.864 "traddr": "10.0.0.2", 00:38:33.864 "adrfam": "ipv4", 00:38:33.864 "trsvcid": "4420", 00:38:33.864 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:33.864 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:33.864 "hdgst": false, 00:38:33.864 "ddgst": false 00:38:33.864 }, 00:38:33.864 "method": "bdev_nvme_attach_controller" 00:38:33.864 }' 00:38:33.864 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:38:33.864 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:38:33.864 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:38:33.864 "params": { 00:38:33.864 "name": "Nvme1", 00:38:33.864 "trtype": "tcp", 00:38:33.864 "traddr": "10.0.0.2", 00:38:33.864 "adrfam": "ipv4", 00:38:33.864 "trsvcid": "4420", 00:38:33.864 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:33.864 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:33.864 "hdgst": false, 00:38:33.864 "ddgst": false 00:38:33.864 }, 00:38:33.864 "method": "bdev_nvme_attach_controller" 00:38:33.864 }' 00:38:33.864 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:38:33.864 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:38:33.864 "params": { 00:38:33.864 "name": "Nvme1", 00:38:33.864 "trtype": "tcp", 00:38:33.864 "traddr": "10.0.0.2", 00:38:33.864 "adrfam": "ipv4", 00:38:33.864 "trsvcid": "4420", 00:38:33.864 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:33.864 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:33.864 "hdgst": false, 00:38:33.864 "ddgst": false 00:38:33.864 }, 00:38:33.864 "method": "bdev_nvme_attach_controller" 00:38:33.864 }' 00:38:33.864 [2024-11-02 14:54:25.754629] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:38:33.864 [2024-11-02 14:54:25.754628] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:38:33.864 [2024-11-02 14:54:25.754727] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 
00:38:33.864 [2024-11-02 14:54:25.754727] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 
00:38:33.864 [2024-11-02 14:54:25.756004] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:38:33.864 [2024-11-02 14:54:25.756002] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:38:33.864 [2024-11-02 14:54:25.756096] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 
00:38:33.864 [2024-11-02 14:54:25.756096] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 
00:38:34.123 [2024-11-02 14:54:25.936664] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:34.123 [2024-11-02 14:54:26.012410] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:38:34.123 [2024-11-02 14:54:26.037105] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:34.123 [2024-11-02 14:54:26.111176] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:34.123 [2024-11-02 14:54:26.116712] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:38:34.381 [2024-11-02 14:54:26.181318] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:38:34.381 [2024-11-02 14:54:26.186582] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:34.381 [2024-11-02 14:54:26.253677] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:38:34.381 Running I/O for 1 seconds... 00:38:34.639 Running I/O for 1 seconds... 00:38:34.639 Running I/O for 1 seconds... 00:38:34.898 Running I/O for 1 seconds... 
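The four bdevperf invocations traced above run in parallel, one per workload: write on core mask 0x10, read on 0x20, flush on 0x40 and unmap on 0x80, each with its own shared-memory id (-i 1..4, which is where the spdk1..spdk4 EAL file prefixes come from) and each reading its bdev configuration from the JSON that gen_nvmf_target_json prints into /dev/fd/63 via process substitution. A minimal sketch of that pattern, assuming the bdevperf binary, the $rootdir checkout variable and the helper function shown in this trace (only the write job is spelled out; the other three differ only in -m, -i and -w):

  # one of the four concurrent jobs; gen_nvmf_target_json emits the
  # bdev_nvme_attach_controller config printed a few lines above
  "$rootdir"/build/examples/bdevperf -m 0x10 -i 1 \
      --json <(gen_nvmf_target_json) \
      -q 128 -o 4096 -w write -t 1 -s 256 &
  WRITE_PID=$!

As a quick sanity check on the result tables that follow: MiB/s is just IOPS times the 4 KiB I/O size, e.g. 8244.91 IOPS x 4096 B / 1048576 = 32.21 MiB/s for the read job.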
00:38:35.464 6901.00 IOPS, 26.96 MiB/s [2024-11-02T13:54:27.519Z] 8200.00 IOPS, 32.03 MiB/s 00:38:35.464 Latency(us) 00:38:35.464 [2024-11-02T13:54:27.519Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:35.464 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:38:35.464 Nvme1n1 : 1.02 6903.02 26.96 0.00 0.00 18421.63 2184.53 30292.20 00:38:35.464 [2024-11-02T13:54:27.519Z] =================================================================================================================== 00:38:35.464 [2024-11-02T13:54:27.519Z] Total : 6903.02 26.96 0.00 0.00 18421.63 2184.53 30292.20 00:38:35.464 00:38:35.464 Latency(us) 00:38:35.464 [2024-11-02T13:54:27.519Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:35.464 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:38:35.464 Nvme1n1 : 1.01 8244.91 32.21 0.00 0.00 15440.56 4903.06 20000.62 00:38:35.464 [2024-11-02T13:54:27.519Z] =================================================================================================================== 00:38:35.464 [2024-11-02T13:54:27.519Z] Total : 8244.91 32.21 0.00 0.00 15440.56 4903.06 20000.62 00:38:35.722 7399.00 IOPS, 28.90 MiB/s 00:38:35.722 Latency(us) 00:38:35.722 [2024-11-02T13:54:27.777Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:35.722 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:38:35.722 Nvme1n1 : 1.01 7501.76 29.30 0.00 0.00 17013.31 3713.71 40001.23 00:38:35.722 [2024-11-02T13:54:27.777Z] =================================================================================================================== 00:38:35.722 [2024-11-02T13:54:27.777Z] Total : 7501.76 29.30 0.00 0.00 17013.31 3713.71 40001.23 00:38:35.723 14:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1564812 00:38:35.723 14:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1564814 00:38:35.981 184144.00 IOPS, 719.31 MiB/s 00:38:35.981 Latency(us) 00:38:35.981 [2024-11-02T13:54:28.036Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:35.981 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:38:35.981 Nvme1n1 : 1.00 183794.70 717.95 0.00 0.00 692.69 310.99 1868.99 00:38:35.981 [2024-11-02T13:54:28.036Z] =================================================================================================================== 00:38:35.981 [2024-11-02T13:54:28.036Z] Total : 183794.70 717.95 0.00 0.00 692.69 310.99 1868.99 00:38:36.239 14:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1564817 00:38:36.239 14:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:36.239 14:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:36.239 14:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:36.239 14:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:36.239 14:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:38:36.239 14:54:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:38:36.239 14:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # nvmfcleanup 00:38:36.239 14:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:38:36.239 14:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:36.239 14:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:38:36.239 14:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:36.239 14:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:36.239 rmmod nvme_tcp 00:38:36.239 rmmod nvme_fabrics 00:38:36.239 rmmod nvme_keyring 00:38:36.239 14:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:36.239 14:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:38:36.239 14:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:38:36.239 14:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@513 -- # '[' -n 1564776 ']' 00:38:36.239 14:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # killprocess 1564776 00:38:36.239 14:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 1564776 ']' 00:38:36.239 14:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 1564776 00:38:36.239 14:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:38:36.239 14:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:36.239 14:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1564776 00:38:36.239 14:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:36.239 14:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:36.239 14:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1564776' 00:38:36.239 killing process with pid 1564776 00:38:36.240 14:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 1564776 00:38:36.240 14:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 1564776 00:38:36.499 14:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:38:36.499 14:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:38:36.499 14:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:38:36.499 14:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:38:36.499 14:54:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-save 00:38:36.499 14:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:38:36.499 14:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-restore 00:38:36.499 14:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:36.499 14:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:36.499 14:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:36.499 14:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:36.499 14:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:39.042 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:39.042 00:38:39.042 real 0m7.543s 00:38:39.042 user 0m15.793s 00:38:39.042 sys 0m4.278s 00:38:39.042 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:39.042 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:39.042 ************************************ 00:38:39.042 END TEST nvmf_bdev_io_wait 00:38:39.042 ************************************ 00:38:39.042 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:38:39.042 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:38:39.042 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:39.042 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:39.042 ************************************ 00:38:39.042 START TEST nvmf_queue_depth 00:38:39.042 ************************************ 00:38:39.043 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:38:39.043 * Looking for test storage... 
00:38:39.043 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:39.043 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:38:39.043 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:38:39.043 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:38:39.043 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:38:39.043 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:39.043 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:39.043 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:39.043 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:38:39.043 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:38:39.043 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:38:39.043 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:38:39.043 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:38:39.043 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:38:39.043 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:38:39.043 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:39.043 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:38:39.043 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:38:39.043 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:39.043 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:39.043 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:38:39.043 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:38:39.043 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:39.043 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:38:39.043 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:38:39.043 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:38:39.043 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:38:39.043 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:39.043 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:38:39.043 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:38:39.043 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:39.043 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:39.043 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:38:39.043 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:39.043 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:38:39.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:39.043 --rc genhtml_branch_coverage=1 00:38:39.043 --rc genhtml_function_coverage=1 00:38:39.043 --rc genhtml_legend=1 00:38:39.043 --rc geninfo_all_blocks=1 00:38:39.043 --rc geninfo_unexecuted_blocks=1 00:38:39.043 00:38:39.043 ' 00:38:39.043 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:38:39.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:39.043 --rc genhtml_branch_coverage=1 00:38:39.043 --rc genhtml_function_coverage=1 00:38:39.043 --rc genhtml_legend=1 00:38:39.043 --rc geninfo_all_blocks=1 00:38:39.043 --rc geninfo_unexecuted_blocks=1 00:38:39.043 00:38:39.043 ' 00:38:39.043 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:38:39.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:39.043 --rc genhtml_branch_coverage=1 00:38:39.043 --rc genhtml_function_coverage=1 00:38:39.043 --rc genhtml_legend=1 00:38:39.043 --rc geninfo_all_blocks=1 00:38:39.043 --rc geninfo_unexecuted_blocks=1 00:38:39.043 00:38:39.043 ' 00:38:39.043 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:38:39.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:39.043 --rc genhtml_branch_coverage=1 00:38:39.043 --rc genhtml_function_coverage=1 00:38:39.043 --rc genhtml_legend=1 00:38:39.043 --rc geninfo_all_blocks=1 00:38:39.043 --rc 
geninfo_unexecuted_blocks=1 00:38:39.043 00:38:39.043 ' 00:38:39.043 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:39.043 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:38:39.043 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:39.043 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:39.043 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:39.043 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:39.043 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:39.043 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:39.043 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:39.043 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:39.043 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:39.043 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:39.043 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:39.043 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:39.043 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:39.043 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:39.043 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:39.043 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:39.043 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:39.043 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:38:39.043 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:39.043 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:39.043 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:39.043 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:39.043 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:39.043 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:39.043 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:38:39.043 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:39.043 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:38:39.043 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:39.043 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:39.043 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:39.044 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:39.044 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:38:39.044 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:39.044 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:39.044 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:39.044 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:39.044 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:39.044 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:38:39.044 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:38:39.044 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:38:39.044 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:38:39.044 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:38:39.044 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:39.044 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@472 -- # prepare_net_devs 00:38:39.044 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@434 -- # local -g is_hw=no 00:38:39.044 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@436 -- # remove_spdk_ns 00:38:39.044 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:39.044 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:39.044 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:39.044 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:38:39.044 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:38:39.044 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:38:39.044 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:41.013 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:41.013 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:38:41.013 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:41.013 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:41.013 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:41.013 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:38:41.013 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:41.013 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:38:41.013 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:41.013 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:38:41.013 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:38:41.013 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:38:41.013 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:38:41.013 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:38:41.013 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:38:41.013 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:41.013 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:41.013 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:41.013 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:41.013 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:41.013 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:41.013 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:41.013 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:41.013 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:41.013 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:41.013 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:41.013 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:38:41.013 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:38:41.013 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:38:41.013 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:38:41.013 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:38:41.013 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:38:41.013 14:54:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:38:41.013 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:41.013 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:41.013 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:38:41.013 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:38:41.013 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:41.013 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:41.013 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:38:41.013 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:38:41.013 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:41.013 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:41.013 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:38:41.013 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:38:41.013 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:41.013 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:41.013 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:38:41.013 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:38:41.013 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:38:41.013 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:38:41.013 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:38:41.013 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:41.013 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:38:41.013 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:41.013 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ up == up ]] 00:38:41.013 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:38:41.013 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:41.013 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:41.013 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:41.013 14:54:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:38:41.013 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:38:41.013 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:41.014 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:38:41.014 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:41.014 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ up == up ]] 00:38:41.014 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:38:41.014 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:41.014 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:41.014 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:41.014 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:38:41.014 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:38:41.014 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # is_hw=yes 00:38:41.014 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:38:41.014 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:38:41.014 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:38:41.014 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:41.014 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:41.014 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:41.014 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:41.014 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:41.014 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:41.014 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:41.014 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:41.014 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:41.014 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:41.014 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:38:41.014 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:41.014 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:41.014 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:41.014 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:41.014 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:41.014 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:41.014 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:41.014 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:41.014 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:41.014 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:41.014 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:41.014 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:41.014 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:41.014 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.165 ms 00:38:41.014 00:38:41.014 --- 10.0.0.2 ping statistics --- 00:38:41.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:41.014 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:38:41.014 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:41.014 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:41.014 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.063 ms 00:38:41.014 00:38:41.014 --- 10.0.0.1 ping statistics --- 00:38:41.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:41.014 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:38:41.014 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:41.014 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # return 0 00:38:41.014 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:38:41.014 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:41.014 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:38:41.014 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:38:41.014 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:41.014 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:38:41.014 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:38:41.014 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:38:41.014 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:38:41.014 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:41.014 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:41.014 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@505 -- # nvmfpid=1567155 00:38:41.014 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:38:41.014 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@506 -- # waitforlisten 1567155 00:38:41.014 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1567155 ']' 00:38:41.014 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:41.014 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:41.014 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:41.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
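For orientation, the nvmf_tcp_init sequence traced just above puts the first E810 port (cvl_0_0, which becomes the target side at 10.0.0.2) into its own network namespace and leaves the second port (cvl_0_1, the initiator side at 10.0.0.1) in the host namespace, then opens TCP port 4420 and ping-checks both directions. Condensed to the bare ip/iptables commands already visible in the trace (helper wrappers, address flushes and the iptables comment omitted):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator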
00:38:41.014 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:41.014 14:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:41.014 [2024-11-02 14:54:33.003003] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:41.014 [2024-11-02 14:54:33.004100] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:38:41.014 [2024-11-02 14:54:33.004177] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:41.273 [2024-11-02 14:54:33.080350] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:41.273 [2024-11-02 14:54:33.169564] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:41.273 [2024-11-02 14:54:33.169629] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:41.273 [2024-11-02 14:54:33.169654] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:41.273 [2024-11-02 14:54:33.169668] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:41.273 [2024-11-02 14:54:33.169689] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:41.273 [2024-11-02 14:54:33.169733] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:38:41.273 [2024-11-02 14:54:33.264486] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:41.273 [2024-11-02 14:54:33.264859] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
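The target for this test is then launched inside that namespace with --interrupt-mode (this whole group runs the interrupt-mode variants), so the single reactor pinned by -m 0x2 waits on events instead of busy-polling; the thread.c notices above about threads being set to intr mode confirm the switch took effect. A minimal sketch of the launch and the wait-for-RPC step, assuming the default /var/tmp/spdk.sock socket that the waitforlisten helper in this trace polls:

  ip netns exec cvl_0_0_ns_spdk "$rootdir"/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
  nvmfpid=$!
  # returns once the target is up and answering on its RPC socket
  waitforlisten "$nvmfpid"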
00:38:41.273 14:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:41.273 14:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:38:41.273 14:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:38:41.273 14:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:41.273 14:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:41.273 14:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:41.273 14:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:41.273 14:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:41.273 14:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:41.273 [2024-11-02 14:54:33.322411] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:41.273 14:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:41.273 14:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:41.273 14:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:41.273 14:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:41.531 Malloc0 00:38:41.531 14:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:41.531 14:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:41.531 14:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:41.532 14:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:41.532 14:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:41.532 14:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:41.532 14:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:41.532 14:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:41.532 14:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:41.532 14:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:41.532 14:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 
00:38:41.532 14:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:41.532 [2024-11-02 14:54:33.386490] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:41.532 14:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:41.532 14:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1567177 00:38:41.532 14:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:38:41.532 14:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:41.532 14:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1567177 /var/tmp/bdevperf.sock 00:38:41.532 14:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1567177 ']' 00:38:41.532 14:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:41.532 14:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:41.532 14:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:41.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:38:41.532 14:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:41.532 14:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:41.532 [2024-11-02 14:54:33.436195] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
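Before the 10-second verify run starts, everything on the target side is provisioned over RPC: a TCP transport, a 64 MiB Malloc bdev with 512-byte blocks, a subsystem carrying that namespace, and a listener on 10.0.0.2:4420; bdevperf itself is started with -z so it idles on /var/tmp/bdevperf.sock until a controller is attached to it. Rewritten as plain rpc.py calls (rpc_cmd in this trace is effectively a wrapper around scripts/rpc.py; the flags below are copied from the trace):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # attach the exported namespace inside the waiting bdevperf over its own RPC socket
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

The actual I/O is then kicked off through bdevperf.py perform_tests against the same bdevperf socket, with the queue depth of 1024 coming from the -q 1024 on the bdevperf command line above.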
00:38:41.532 [2024-11-02 14:54:33.436306] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1567177 ] 00:38:41.532 [2024-11-02 14:54:33.499015] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:41.790 [2024-11-02 14:54:33.591023] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:38:41.790 14:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:41.790 14:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:38:41.790 14:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:38:41.790 14:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:41.790 14:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:42.048 NVMe0n1 00:38:42.048 14:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:42.048 14:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:42.048 Running I/O for 10 seconds... 00:38:44.355 8008.00 IOPS, 31.28 MiB/s [2024-11-02T13:54:37.344Z] 8138.50 IOPS, 31.79 MiB/s [2024-11-02T13:54:38.278Z] 8188.00 IOPS, 31.98 MiB/s [2024-11-02T13:54:39.212Z] 8192.00 IOPS, 32.00 MiB/s [2024-11-02T13:54:40.145Z] 8275.00 IOPS, 32.32 MiB/s [2024-11-02T13:54:41.079Z] 8342.67 IOPS, 32.59 MiB/s [2024-11-02T13:54:42.453Z] 8337.57 IOPS, 32.57 MiB/s [2024-11-02T13:54:43.387Z] 8321.38 IOPS, 32.51 MiB/s [2024-11-02T13:54:44.321Z] 8322.56 IOPS, 32.51 MiB/s [2024-11-02T13:54:44.321Z] 8339.20 IOPS, 32.58 MiB/s 00:38:52.266 Latency(us) 00:38:52.266 [2024-11-02T13:54:44.321Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:52.266 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:38:52.266 Verification LBA range: start 0x0 length 0x4000 00:38:52.266 NVMe0n1 : 10.08 8369.66 32.69 0.00 0.00 121699.03 21456.97 75730.49 00:38:52.266 [2024-11-02T13:54:44.321Z] =================================================================================================================== 00:38:52.266 [2024-11-02T13:54:44.321Z] Total : 8369.66 32.69 0.00 0.00 121699.03 21456.97 75730.49 00:38:52.266 { 00:38:52.266 "results": [ 00:38:52.266 { 00:38:52.266 "job": "NVMe0n1", 00:38:52.266 "core_mask": "0x1", 00:38:52.266 "workload": "verify", 00:38:52.266 "status": "finished", 00:38:52.266 "verify_range": { 00:38:52.266 "start": 0, 00:38:52.266 "length": 16384 00:38:52.266 }, 00:38:52.266 "queue_depth": 1024, 00:38:52.266 "io_size": 4096, 00:38:52.266 "runtime": 10.082487, 00:38:52.266 "iops": 8369.661175858695, 00:38:52.266 "mibps": 32.69398896819803, 00:38:52.266 "io_failed": 0, 00:38:52.266 "io_timeout": 0, 00:38:52.266 "avg_latency_us": 121699.03253534313, 00:38:52.266 "min_latency_us": 21456.971851851853, 00:38:52.266 "max_latency_us": 75730.48888888888 00:38:52.266 } 00:38:52.266 
], 00:38:52.266 "core_count": 1 00:38:52.266 } 00:38:52.266 14:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1567177 00:38:52.266 14:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1567177 ']' 00:38:52.266 14:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1567177 00:38:52.266 14:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:38:52.266 14:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:52.266 14:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1567177 00:38:52.266 14:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:52.266 14:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:52.266 14:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1567177' 00:38:52.266 killing process with pid 1567177 00:38:52.266 14:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1567177 00:38:52.266 Received shutdown signal, test time was about 10.000000 seconds 00:38:52.266 00:38:52.266 Latency(us) 00:38:52.266 [2024-11-02T13:54:44.321Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:52.266 [2024-11-02T13:54:44.321Z] =================================================================================================================== 00:38:52.266 [2024-11-02T13:54:44.321Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:52.266 14:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1567177 00:38:52.525 14:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:38:52.525 14:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:38:52.525 14:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # nvmfcleanup 00:38:52.525 14:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:38:52.525 14:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:52.525 14:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:38:52.525 14:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:52.525 14:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:52.525 rmmod nvme_tcp 00:38:52.525 rmmod nvme_fabrics 00:38:52.525 rmmod nvme_keyring 00:38:52.525 14:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:52.525 14:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:38:52.525 14:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:38:52.525 14:54:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@513 -- # '[' -n 1567155 ']' 00:38:52.525 14:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@514 -- # killprocess 1567155 00:38:52.525 14:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1567155 ']' 00:38:52.525 14:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1567155 00:38:52.525 14:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:38:52.525 14:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:52.525 14:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1567155 00:38:52.525 14:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:38:52.525 14:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:38:52.525 14:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1567155' 00:38:52.525 killing process with pid 1567155 00:38:52.525 14:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1567155 00:38:52.525 14:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1567155 00:38:52.785 14:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:38:52.785 14:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:38:52.785 14:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:38:52.785 14:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:38:52.785 14:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-save 00:38:52.785 14:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:38:52.785 14:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-restore 00:38:52.785 14:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:52.785 14:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:52.785 14:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:52.785 14:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:52.785 14:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:55.316 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:55.316 00:38:55.316 real 0m16.263s 00:38:55.316 user 0m22.402s 00:38:55.316 sys 0m3.407s 00:38:55.316 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:38:55.316 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:55.316 ************************************ 00:38:55.316 END TEST nvmf_queue_depth 00:38:55.316 ************************************ 00:38:55.316 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:38:55.316 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:38:55.316 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:55.316 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:55.316 ************************************ 00:38:55.316 START TEST nvmf_target_multipath 00:38:55.316 ************************************ 00:38:55.316 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:38:55.316 * Looking for test storage... 00:38:55.316 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:55.316 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:38:55.316 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:38:55.316 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:38:55.316 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:38:55.316 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:55.316 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:55.316 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:55.316 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:38:55.316 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:38:55.316 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:38:55.316 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:38:55.316 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:38:55.316 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:38:55.316 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:38:55.316 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:55.316 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:38:55.316 14:54:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:38:55.316 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:55.316 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:55.316 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:38:55.316 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:38:55.316 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:55.316 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:38:55.316 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:38:55.316 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:38:55.316 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:38:55.316 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:55.316 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:38:55.316 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:38:55.316 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:55.316 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:55.316 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:38:55.316 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:55.316 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:38:55.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:55.316 --rc genhtml_branch_coverage=1 00:38:55.316 --rc genhtml_function_coverage=1 00:38:55.316 --rc genhtml_legend=1 00:38:55.316 --rc geninfo_all_blocks=1 00:38:55.316 --rc geninfo_unexecuted_blocks=1 00:38:55.316 00:38:55.316 ' 00:38:55.316 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:38:55.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:55.316 --rc genhtml_branch_coverage=1 00:38:55.316 --rc genhtml_function_coverage=1 00:38:55.316 --rc genhtml_legend=1 00:38:55.316 --rc geninfo_all_blocks=1 00:38:55.316 --rc geninfo_unexecuted_blocks=1 00:38:55.316 00:38:55.316 ' 00:38:55.316 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:38:55.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:55.316 --rc genhtml_branch_coverage=1 00:38:55.316 --rc genhtml_function_coverage=1 00:38:55.316 --rc genhtml_legend=1 00:38:55.316 --rc geninfo_all_blocks=1 00:38:55.316 --rc 
geninfo_unexecuted_blocks=1 00:38:55.316 00:38:55.316 ' 00:38:55.316 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:38:55.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:55.316 --rc genhtml_branch_coverage=1 00:38:55.316 --rc genhtml_function_coverage=1 00:38:55.316 --rc genhtml_legend=1 00:38:55.317 --rc geninfo_all_blocks=1 00:38:55.317 --rc geninfo_unexecuted_blocks=1 00:38:55.317 00:38:55.317 ' 00:38:55.317 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:55.317 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:38:55.317 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:55.317 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:55.317 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:55.317 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:55.317 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:55.317 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:55.317 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:55.317 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:55.317 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:55.317 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:55.317 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:55.317 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:55.317 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:55.317 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:55.317 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:55.317 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:55.317 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:55.317 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:38:55.317 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 
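The lcov gate traced above ('lt 1.15 2' via cmp_versions) is a plain element-wise version comparison: both strings are split on '.', '-' and ':', missing fields count as 0, and the fields are compared numerically from the left. A condensed sketch of the same idea (simplified; the real scripts/common.sh helpers also normalize non-numeric fields):

lt_version() {                              # usage: lt_version 1.15 2  -> returns 0 if $1 < $2
    local IFS=.-: v
    local -a a b
    read -ra a <<< "$1"; read -ra b <<< "$2"
    for ((v = 0; v < ${#a[@]} || v < ${#b[@]}; v++)); do
        (( ${a[v]:-0} > ${b[v]:-0} )) && return 1   # this field already larger -> not less-than
        (( ${a[v]:-0} < ${b[v]:-0} )) && return 0   # this field smaller -> less-than
    done
    return 1                                # equal versions are not "less than"
}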
00:38:55.317 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:55.317 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:55.317 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:55.317 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:55.317 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:55.317 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:38:55.317 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:55.317 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:38:55.317 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:55.317 14:54:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:55.317 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:55.317 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:55.317 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:55.317 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:55.317 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:55.317 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:55.317 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:55.317 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:55.317 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:55.317 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:55.317 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:38:55.317 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:55.317 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:38:55.317 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:38:55.317 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:55.317 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@472 -- # prepare_net_devs 00:38:55.317 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@434 -- # local -g is_hw=no 00:38:55.317 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@436 -- # remove_spdk_ns 00:38:55.317 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:55.317 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:55.317 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:55.317 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:38:55.317 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:38:55.317 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:38:55.317 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 
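The device discovery that follows (gather_supported_nvmf_pci_devs) builds lists of supported NIC PCI IDs (Intel E810/X722 plus several Mellanox parts) and then reads the matching kernel interface names out of sysfs, which is how cvl_0_0 and cvl_0_1 are found below. A minimal sketch of that sysfs lookup for the E810 ID seen on this host (0x8086:0x159b); the ID list and interface names differ per machine:

# list kernel net interfaces backed by Intel E810 (0x8086:0x159b) PCI functions
for pci in /sys/bus/pci/devices/*; do
    [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
    for net in "$pci"/net/*; do
        [[ -e $net ]] && echo "Found net device under ${pci##*/}: ${net##*/}"
    done
done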
00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:57.234 14:54:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:57.234 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:57.234 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:38:57.234 14:54:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ up == up ]] 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:57.234 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ up == up ]] 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:57.234 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # is_hw=yes 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:57.234 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:57.235 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:57.235 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:57.235 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:57.235 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:57.235 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:57.235 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:38:57.235 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.352 ms 00:38:57.235 00:38:57.235 --- 10.0.0.2 ping statistics --- 00:38:57.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:57.235 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms 00:38:57.235 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:57.235 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:57.235 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:38:57.235 00:38:57.235 --- 10.0.0.1 ping statistics --- 00:38:57.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:57.235 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:38:57.235 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:57.235 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # return 0 00:38:57.235 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:38:57.235 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:57.235 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:38:57.235 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:38:57.235 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:57.235 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:38:57.235 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:38:57.235 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:38:57.235 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:38:57.235 only one NIC for nvmf test 00:38:57.235 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:38:57.235 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:38:57.235 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:38:57.235 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:57.235 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:38:57.235 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:57.235 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:57.235 rmmod nvme_tcp 00:38:57.494 rmmod nvme_fabrics 00:38:57.494 rmmod nvme_keyring 00:38:57.494 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:57.494 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:38:57.494 14:54:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:38:57.494 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:38:57.494 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:38:57.494 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:38:57.494 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:38:57.494 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:38:57.494 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-save 00:38:57.494 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:38:57.494 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:38:57.494 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:57.494 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:57.494 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:57.494 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:57.494 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:59.401 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:59.401 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:38:59.401 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:38:59.401 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:38:59.401 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:38:59.401 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:59.401 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:38:59.401 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:59.401 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:59.401 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:59.401 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:38:59.401 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:38:59.401 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:38:59.401 14:54:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:38:59.401 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:38:59.401 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:38:59.401 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:38:59.401 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-save 00:38:59.401 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:38:59.401 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:38:59.401 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:59.401 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:59.401 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:59.401 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:59.401 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:59.401 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:59.401 00:38:59.401 real 0m4.554s 00:38:59.401 user 0m0.928s 00:38:59.401 sys 0m1.557s 00:38:59.401 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:59.401 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:38:59.401 ************************************ 00:38:59.401 END TEST nvmf_target_multipath 00:38:59.401 ************************************ 00:38:59.401 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:38:59.401 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:38:59.401 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:59.401 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:59.401 ************************************ 00:38:59.401 START TEST nvmf_zcopy 00:38:59.401 ************************************ 00:38:59.660 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:38:59.660 * Looking for test storage... 
00:38:59.660 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:59.660 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:38:59.660 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:38:59.660 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:38:59.660 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:38:59.660 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:59.660 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:59.660 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:59.660 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:38:59.660 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:38:59.660 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:38:59.660 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:38:59.660 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:38:59.660 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:38:59.660 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:38:59.660 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:59.660 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:38:59.660 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:38:59.660 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:59.660 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:59.660 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:38:59.660 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:38:59.660 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:59.660 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:38:59.660 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:38:59.660 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:38:59.660 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:38:59.660 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:59.660 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:38:59.660 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:38:59.660 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:59.660 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:59.660 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:38:59.660 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:59.660 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:38:59.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:59.660 --rc genhtml_branch_coverage=1 00:38:59.660 --rc genhtml_function_coverage=1 00:38:59.660 --rc genhtml_legend=1 00:38:59.660 --rc geninfo_all_blocks=1 00:38:59.660 --rc geninfo_unexecuted_blocks=1 00:38:59.660 00:38:59.660 ' 00:38:59.660 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:38:59.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:59.660 --rc genhtml_branch_coverage=1 00:38:59.660 --rc genhtml_function_coverage=1 00:38:59.660 --rc genhtml_legend=1 00:38:59.660 --rc geninfo_all_blocks=1 00:38:59.660 --rc geninfo_unexecuted_blocks=1 00:38:59.660 00:38:59.660 ' 00:38:59.660 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:38:59.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:59.660 --rc genhtml_branch_coverage=1 00:38:59.660 --rc genhtml_function_coverage=1 00:38:59.661 --rc genhtml_legend=1 00:38:59.661 --rc geninfo_all_blocks=1 00:38:59.661 --rc geninfo_unexecuted_blocks=1 00:38:59.661 00:38:59.661 ' 00:38:59.661 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:38:59.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:59.661 --rc genhtml_branch_coverage=1 00:38:59.661 --rc genhtml_function_coverage=1 00:38:59.661 --rc genhtml_legend=1 00:38:59.661 --rc geninfo_all_blocks=1 00:38:59.661 --rc geninfo_unexecuted_blocks=1 00:38:59.661 00:38:59.661 ' 00:38:59.661 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:59.661 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:38:59.661 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:59.661 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:59.661 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:59.661 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:59.661 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:59.661 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:59.661 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:59.661 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:59.661 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:59.661 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:59.661 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:59.661 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:59.661 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:59.661 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:59.661 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:59.661 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:59.661 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:59.661 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:38:59.661 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:59.661 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:59.661 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:59.661 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:59.661 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:59.661 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:59.661 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:38:59.661 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:59.661 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:38:59.661 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:59.661 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:59.661 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:59.661 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:59.661 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:59.661 14:54:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:59.661 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:59.661 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:59.661 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:59.661 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:59.661 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:38:59.661 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:38:59.661 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:59.661 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@472 -- # prepare_net_devs 00:38:59.661 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@434 -- # local -g is_hw=no 00:38:59.661 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@436 -- # remove_spdk_ns 00:38:59.661 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:59.661 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:59.661 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:59.661 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:38:59.661 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:38:59.661 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:38:59.661 14:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:01.561 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:01.561 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:39:01.561 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:01.561 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:01.561 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:01.561 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:01.561 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:01.561 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:39:01.561 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:01.561 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:39:01.561 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:39:01.561 14:54:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:39:01.561 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:39:01.561 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:39:01.561 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:39:01.561 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:01.561 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:01.561 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:01.561 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:01.561 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:01.561 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:01.561 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:01.561 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:01.561 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:01.561 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:01.561 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:01.561 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:39:01.561 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:39:01.561 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:39:01.561 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:39:01.561 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:39:01.561 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:39:01.561 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:39:01.561 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:01.561 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:01.561 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:39:01.561 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:39:01.561 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:01.561 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@375 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:39:01.561 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:39:01.561 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:39:01.562 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:01.562 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:01.562 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:39:01.562 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:39:01.562 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:01.562 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:01.562 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:39:01.562 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:39:01.562 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:39:01.562 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:39:01.562 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:39:01.562 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:01.562 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:39:01.562 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:01.562 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ up == up ]] 00:39:01.562 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:39:01.562 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:01.562 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:01.562 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:01.562 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:39:01.562 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:39:01.562 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:01.562 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:39:01.562 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:01.562 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ up == up ]] 00:39:01.562 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:39:01.562 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@423 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:01.562 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:01.562 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:01.562 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:39:01.562 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:39:01.562 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # is_hw=yes 00:39:01.562 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:39:01.562 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:39:01.562 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:39:01.562 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:01.562 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:01.562 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:01.562 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:01.562 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:01.562 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:01.562 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:01.562 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:01.562 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:01.562 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:01.562 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:01.562 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:01.562 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:01.562 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:01.562 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:01.821 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:01.821 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:01.821 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:01.821 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:01.821 14:54:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:01.821 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:01.821 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:01.821 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:01.821 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:01.821 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:39:01.821 00:39:01.821 --- 10.0.0.2 ping statistics --- 00:39:01.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:01.821 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:39:01.821 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:01.821 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:01.821 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:39:01.821 00:39:01.821 --- 10.0.0.1 ping statistics --- 00:39:01.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:01.821 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:39:01.821 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:01.821 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # return 0 00:39:01.821 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:39:01.821 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:01.821 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:39:01.821 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:39:01.821 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:01.821 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:39:01.821 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:39:01.821 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:39:01.821 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:39:01.821 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:01.821 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:01.821 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@505 -- # nvmfpid=1572343 00:39:01.821 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:39:01.821 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@506 -- # waitforlisten 1572343 
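[editor note] The nvmf_tcp_init sequence traced above reduces to a short bring-up: the target-side port (cvl_0_0) is moved into a private network namespace so target and initiator traffic actually cross the physical link, both ends get addresses on 10.0.0.0/24, TCP port 4420 is opened in the firewall, and connectivity is verified with ping in both directions. A condensed sketch of those commands (interface names and addresses are the ones from this particular run; the real script also tags the iptables rule with an SPDK_NVMF comment):
    ip netns add cvl_0_0_ns_spdk                                        # private namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address stays in the host namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                  # host namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target namespace -> host namespace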
00:39:01.821 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 1572343 ']' 00:39:01.821 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:01.821 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:01.821 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:01.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:01.821 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:01.821 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:01.821 [2024-11-02 14:54:53.771334] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:01.821 [2024-11-02 14:54:53.772399] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:39:01.821 [2024-11-02 14:54:53.772473] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:01.821 [2024-11-02 14:54:53.839627] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:02.081 [2024-11-02 14:54:53.935549] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:02.081 [2024-11-02 14:54:53.935626] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:02.081 [2024-11-02 14:54:53.935642] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:02.081 [2024-11-02 14:54:53.935655] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:02.081 [2024-11-02 14:54:53.935666] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:02.081 [2024-11-02 14:54:53.935721] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:39:02.081 [2024-11-02 14:54:54.033675] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:02.081 [2024-11-02 14:54:54.034028] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
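[editor note] With the test network in place, nvmfappstart -m 0x2 starts the target inside that namespace in interrupt mode and blocks until its RPC socket answers before any configuration is sent. Roughly, per the trace above (paths shortened; the suite uses its own waitforlisten helper, and the rpc_get_methods poll below is only one way to approximate that wait):
    # launch nvmf_tgt in the target namespace, single core (0x2), all tracepoints, interrupt mode
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
    nvmfpid=$!
    # wait until the app is listening on /var/tmp/spdk.sock and answering RPCs
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done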
00:39:02.081 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:02.081 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:39:02.081 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:39:02.081 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:02.081 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:02.081 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:02.081 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:39:02.081 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:39:02.081 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:02.081 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:02.081 [2024-11-02 14:54:54.088449] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:02.081 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:02.081 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:39:02.081 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:02.081 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:02.081 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:02.081 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:02.081 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:02.081 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:02.081 [2024-11-02 14:54:54.104621] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:02.081 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:02.081 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:02.081 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:02.081 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:02.081 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:02.081 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:39:02.081 14:54:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:02.081 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:02.340 malloc0 00:39:02.340 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:02.340 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:39:02.340 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:02.340 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:02.340 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:02.340 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:39:02.340 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:39:02.340 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:39:02.340 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:39:02.340 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:39:02.340 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:39:02.340 { 00:39:02.340 "params": { 00:39:02.340 "name": "Nvme$subsystem", 00:39:02.340 "trtype": "$TEST_TRANSPORT", 00:39:02.340 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:02.340 "adrfam": "ipv4", 00:39:02.340 "trsvcid": "$NVMF_PORT", 00:39:02.340 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:02.340 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:02.340 "hdgst": ${hdgst:-false}, 00:39:02.340 "ddgst": ${ddgst:-false} 00:39:02.340 }, 00:39:02.340 "method": "bdev_nvme_attach_controller" 00:39:02.340 } 00:39:02.340 EOF 00:39:02.340 )") 00:39:02.340 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:39:02.340 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 00:39:02.340 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:39:02.340 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:39:02.340 "params": { 00:39:02.340 "name": "Nvme1", 00:39:02.340 "trtype": "tcp", 00:39:02.340 "traddr": "10.0.0.2", 00:39:02.340 "adrfam": "ipv4", 00:39:02.340 "trsvcid": "4420", 00:39:02.340 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:02.340 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:02.340 "hdgst": false, 00:39:02.340 "ddgst": false 00:39:02.340 }, 00:39:02.340 "method": "bdev_nvme_attach_controller" 00:39:02.340 }' 00:39:02.340 [2024-11-02 14:54:54.212067] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
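[editor note] Once the target answers RPCs, zcopy.sh configures it through rpc_cmd, the autotest wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock. The sequence scattered through the trace above, gathered in one place (condensed; the malloc bdev ends up as namespace 1 of cnode1):
    rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy                              # TCP transport with zero-copy enabled, no in-capsule data
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10   # -a: allow any host, -m 10: up to 10 namespaces
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_malloc_create 32 4096 -b malloc0                                     # 32 MiB RAM-backed bdev, 4096-byte blocks
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0                  # export malloc0 as a namespace of cnode1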
00:39:02.340 [2024-11-02 14:54:54.212153] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1572373 ] 00:39:02.340 [2024-11-02 14:54:54.274861] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:02.340 [2024-11-02 14:54:54.369416] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:39:02.907 Running I/O for 10 seconds... 00:39:04.774 5439.00 IOPS, 42.49 MiB/s [2024-11-02T13:54:57.763Z] 5437.00 IOPS, 42.48 MiB/s [2024-11-02T13:54:59.136Z] 5347.67 IOPS, 41.78 MiB/s [2024-11-02T13:55:00.070Z] 5309.25 IOPS, 41.48 MiB/s [2024-11-02T13:55:01.059Z] 5281.60 IOPS, 41.26 MiB/s [2024-11-02T13:55:01.994Z] 5270.83 IOPS, 41.18 MiB/s [2024-11-02T13:55:02.928Z] 5258.14 IOPS, 41.08 MiB/s [2024-11-02T13:55:03.862Z] 5297.62 IOPS, 41.39 MiB/s [2024-11-02T13:55:04.795Z] 5312.67 IOPS, 41.51 MiB/s [2024-11-02T13:55:04.795Z] 5319.00 IOPS, 41.55 MiB/s 00:39:12.740 Latency(us) 00:39:12.740 [2024-11-02T13:55:04.795Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:12.740 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:39:12.740 Verification LBA range: start 0x0 length 0x1000 00:39:12.740 Nvme1n1 : 10.01 5323.39 41.59 0.00 0.00 23978.98 555.24 32622.36 00:39:12.740 [2024-11-02T13:55:04.795Z] =================================================================================================================== 00:39:12.740 [2024-11-02T13:55:04.795Z] Total : 5323.39 41.59 0.00 0.00 23978.98 555.24 32622.36 00:39:12.999 14:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1573670 00:39:12.999 14:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:39:12.999 14:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:12.999 14:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:39:12.999 14:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:39:12.999 14:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:39:12.999 14:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:39:12.999 14:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:39:12.999 14:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:39:12.999 { 00:39:12.999 "params": { 00:39:12.999 "name": "Nvme$subsystem", 00:39:12.999 "trtype": "$TEST_TRANSPORT", 00:39:12.999 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:12.999 "adrfam": "ipv4", 00:39:12.999 "trsvcid": "$NVMF_PORT", 00:39:12.999 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:12.999 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:12.999 "hdgst": ${hdgst:-false}, 00:39:12.999 "ddgst": ${ddgst:-false} 00:39:12.999 }, 00:39:12.999 "method": "bdev_nvme_attach_controller" 00:39:12.999 } 00:39:12.999 EOF 00:39:12.999 )") 00:39:12.999 14:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:39:12.999 
[2024-11-02 14:55:04.980356] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.999 [2024-11-02 14:55:04.980401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.999 14:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 00:39:12.999 14:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:39:12.999 14:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:39:12.999 "params": { 00:39:12.999 "name": "Nvme1", 00:39:12.999 "trtype": "tcp", 00:39:12.999 "traddr": "10.0.0.2", 00:39:12.999 "adrfam": "ipv4", 00:39:12.999 "trsvcid": "4420", 00:39:12.999 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:12.999 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:12.999 "hdgst": false, 00:39:12.999 "ddgst": false 00:39:12.999 }, 00:39:12.999 "method": "bdev_nvme_attach_controller" 00:39:12.999 }' 00:39:12.999 [2024-11-02 14:55:04.988229] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.999 [2024-11-02 14:55:04.988283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.999 [2024-11-02 14:55:04.996230] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.999 [2024-11-02 14:55:04.996275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.999 [2024-11-02 14:55:05.004228] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.999 [2024-11-02 14:55:05.004276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.999 [2024-11-02 14:55:05.012227] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.999 [2024-11-02 14:55:05.012269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.999 [2024-11-02 14:55:05.020227] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.999 [2024-11-02 14:55:05.020269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.999 [2024-11-02 14:55:05.021639] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
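[editor note] Both bdevperf runs in this trace take their bdev configuration from a process substitution: gen_nvmf_target_json (nvmf/common.sh) emits a JSON config whose essential entry is the bdev_nvme_attach_controller call printed above (Nvme1 -> 10.0.0.2:4420, subnqn nqn.2016-06.io.spdk:cnode1), and bdevperf reads it through --json /dev/fd/62 or /dev/fd/63. The second run, condensed:
    ./build/examples/bdevperf --json <(gen_nvmf_target_json) \
        -t 5 -q 128 -w randrw -M 50 -o 8192    # 5 s, queue depth 128, 50/50 random read/write, 8 KiB I/Os
The 'Requested NSID 1 already in use' / 'Unable to add namespace' pairs interleaved with this run's output are the test repeatedly re-issuing nvmf_subsystem_add_ns for NSID 1 while the workload is in flight; the RPC is expected to fail, and the repetition appears intended to exercise the subsystem pause/resume path under zero-copy I/O rather than to signal a malfunction.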
00:39:12.999 [2024-11-02 14:55:05.021721] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1573670 ] 00:39:12.999 [2024-11-02 14:55:05.028226] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.999 [2024-11-02 14:55:05.028269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.999 [2024-11-02 14:55:05.036226] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.999 [2024-11-02 14:55:05.036270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.999 [2024-11-02 14:55:05.044228] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.999 [2024-11-02 14:55:05.044269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.999 [2024-11-02 14:55:05.052248] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.999 [2024-11-02 14:55:05.052275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.258 [2024-11-02 14:55:05.060246] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.258 [2024-11-02 14:55:05.060275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.258 [2024-11-02 14:55:05.068247] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.258 [2024-11-02 14:55:05.068284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.258 [2024-11-02 14:55:05.076230] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.258 [2024-11-02 14:55:05.076275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.258 [2024-11-02 14:55:05.084229] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.258 [2024-11-02 14:55:05.084272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.258 [2024-11-02 14:55:05.084691] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:13.258 [2024-11-02 14:55:05.092327] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.258 [2024-11-02 14:55:05.092365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.258 [2024-11-02 14:55:05.100302] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.258 [2024-11-02 14:55:05.100352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.258 [2024-11-02 14:55:05.108248] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.258 [2024-11-02 14:55:05.108277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.258 [2024-11-02 14:55:05.116245] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.258 [2024-11-02 14:55:05.116274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.258 [2024-11-02 14:55:05.124231] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.258 [2024-11-02 14:55:05.124274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:39:13.258 [2024-11-02 14:55:05.132251] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.258 [2024-11-02 14:55:05.132282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.258 [2024-11-02 14:55:05.140312] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.258 [2024-11-02 14:55:05.140353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.258 [2024-11-02 14:55:05.148267] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.258 [2024-11-02 14:55:05.148293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.258 [2024-11-02 14:55:05.156232] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.258 [2024-11-02 14:55:05.156287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.258 [2024-11-02 14:55:05.164246] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.258 [2024-11-02 14:55:05.164273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.258 [2024-11-02 14:55:05.172234] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.258 [2024-11-02 14:55:05.172277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.258 [2024-11-02 14:55:05.176982] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:39:13.258 [2024-11-02 14:55:05.180230] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.258 [2024-11-02 14:55:05.180272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.258 [2024-11-02 14:55:05.188231] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.258 [2024-11-02 14:55:05.188285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.258 [2024-11-02 14:55:05.196311] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.258 [2024-11-02 14:55:05.196350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.258 [2024-11-02 14:55:05.204304] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.258 [2024-11-02 14:55:05.204344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.258 [2024-11-02 14:55:05.212301] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.258 [2024-11-02 14:55:05.212345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.258 [2024-11-02 14:55:05.220312] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.258 [2024-11-02 14:55:05.220357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.258 [2024-11-02 14:55:05.228310] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.258 [2024-11-02 14:55:05.228354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.258 [2024-11-02 14:55:05.236304] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.258 [2024-11-02 14:55:05.236342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.258 [2024-11-02 
14:55:05.244270] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.258 [2024-11-02 14:55:05.244298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.258 [2024-11-02 14:55:05.252300] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.258 [2024-11-02 14:55:05.252340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.258 [2024-11-02 14:55:05.260309] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.258 [2024-11-02 14:55:05.260351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.258 [2024-11-02 14:55:05.268312] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.258 [2024-11-02 14:55:05.268355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.259 [2024-11-02 14:55:05.276236] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.259 [2024-11-02 14:55:05.276279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.259 [2024-11-02 14:55:05.284233] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.259 [2024-11-02 14:55:05.284276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.259 [2024-11-02 14:55:05.292265] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.259 [2024-11-02 14:55:05.292293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.259 [2024-11-02 14:55:05.300308] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.259 [2024-11-02 14:55:05.300334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.259 [2024-11-02 14:55:05.308270] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.259 [2024-11-02 14:55:05.308300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.517 [2024-11-02 14:55:05.316269] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.517 [2024-11-02 14:55:05.316295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.517 [2024-11-02 14:55:05.324354] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.517 [2024-11-02 14:55:05.324379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.517 [2024-11-02 14:55:05.332250] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.517 [2024-11-02 14:55:05.332281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.517 [2024-11-02 14:55:05.340246] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.517 [2024-11-02 14:55:05.340278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.517 [2024-11-02 14:55:05.348253] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.517 [2024-11-02 14:55:05.348288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.517 [2024-11-02 14:55:05.356247] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.517 [2024-11-02 14:55:05.356279] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.517 Running I/O for 5 seconds... 00:39:13.517 [2024-11-02 14:55:05.371186] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.517 [2024-11-02 14:55:05.371214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.517 [2024-11-02 14:55:05.381781] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.517 [2024-11-02 14:55:05.381810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.517 [2024-11-02 14:55:05.397469] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.517 [2024-11-02 14:55:05.397499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.517 [2024-11-02 14:55:05.413578] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.517 [2024-11-02 14:55:05.413607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.517 [2024-11-02 14:55:05.422761] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.517 [2024-11-02 14:55:05.422789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.517 [2024-11-02 14:55:05.437683] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.517 [2024-11-02 14:55:05.437711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.517 [2024-11-02 14:55:05.451192] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.517 [2024-11-02 14:55:05.451221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.517 [2024-11-02 14:55:05.464773] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.517 [2024-11-02 14:55:05.464801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.517 [2024-11-02 14:55:05.474650] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.517 [2024-11-02 14:55:05.474692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.517 [2024-11-02 14:55:05.489403] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.517 [2024-11-02 14:55:05.489445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.517 [2024-11-02 14:55:05.505979] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.517 [2024-11-02 14:55:05.506007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.517 [2024-11-02 14:55:05.524300] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.517 [2024-11-02 14:55:05.524338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.517 [2024-11-02 14:55:05.535324] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.517 [2024-11-02 14:55:05.535350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.517 [2024-11-02 14:55:05.549554] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.517 [2024-11-02 14:55:05.549586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.517 [2024-11-02 14:55:05.565687] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:13.517 [2024-11-02 14:55:05.565718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair (subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: "Requested NSID 1 already in use" immediately followed by nvmf_rpc.c:1517:nvmf_rpc_ns_paused: "Unable to add namespace") repeats at roughly 10-20 ms intervals from [2024-11-02 14:55:05.583089] through [2024-11-02 14:55:09.627886] (elapsed 00:39:13.776 through 00:39:17.661); only the periodic throughput samples below break the pattern ...]
00:39:14.552 10620.00 IOPS, 82.97 MiB/s [2024-11-02T13:55:06.607Z]
00:39:15.329 10565.00 IOPS, 82.54 MiB/s [2024-11-02T13:55:07.384Z]
00:39:16.364 10545.67 IOPS, 82.39 MiB/s [2024-11-02T13:55:08.419Z]
00:39:17.398 10533.00 IOPS, 82.29 MiB/s [2024-11-02T13:55:09.453Z]
00:39:17.661 [2024-11-02 14:55:09.638124]
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:17.661 [2024-11-02 14:55:09.638154] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:17.661 [2024-11-02 14:55:09.654205] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:17.662 [2024-11-02 14:55:09.654235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:17.662 [2024-11-02 14:55:09.668180] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:17.662 [2024-11-02 14:55:09.668211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:17.662 [2024-11-02 14:55:09.678635] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:17.662 [2024-11-02 14:55:09.678664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:17.662 [2024-11-02 14:55:09.694301] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:17.662 [2024-11-02 14:55:09.694328] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:17.662 [2024-11-02 14:55:09.708561] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:17.662 [2024-11-02 14:55:09.708591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:17.922 [2024-11-02 14:55:09.719492] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:17.922 [2024-11-02 14:55:09.719517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:17.922 [2024-11-02 14:55:09.732575] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:17.922 [2024-11-02 14:55:09.732617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:17.922 [2024-11-02 14:55:09.744422] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:17.922 [2024-11-02 14:55:09.744449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:17.922 [2024-11-02 14:55:09.756701] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:17.922 [2024-11-02 14:55:09.756730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:17.922 [2024-11-02 14:55:09.767606] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:17.922 [2024-11-02 14:55:09.767636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:17.922 [2024-11-02 14:55:09.781018] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:17.922 [2024-11-02 14:55:09.781047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:17.922 [2024-11-02 14:55:09.797931] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:17.922 [2024-11-02 14:55:09.797961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:17.922 [2024-11-02 14:55:09.814009] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:17.922 [2024-11-02 14:55:09.814038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:17.922 [2024-11-02 14:55:09.831318] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:17.922 [2024-11-02 14:55:09.831344] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:17.922 [2024-11-02 14:55:09.841833] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:17.922 [2024-11-02 14:55:09.841863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:17.922 [2024-11-02 14:55:09.858566] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:17.922 [2024-11-02 14:55:09.858596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:17.922 [2024-11-02 14:55:09.872465] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:17.922 [2024-11-02 14:55:09.872506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:17.922 [2024-11-02 14:55:09.883181] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:17.922 [2024-11-02 14:55:09.883210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:17.922 [2024-11-02 14:55:09.896030] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:17.922 [2024-11-02 14:55:09.896060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:17.922 [2024-11-02 14:55:09.907848] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:17.922 [2024-11-02 14:55:09.907877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:17.922 [2024-11-02 14:55:09.919698] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:17.922 [2024-11-02 14:55:09.919727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:17.922 [2024-11-02 14:55:09.931519] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:17.922 [2024-11-02 14:55:09.931563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:17.922 [2024-11-02 14:55:09.943900] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:17.922 [2024-11-02 14:55:09.943929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:17.922 [2024-11-02 14:55:09.955947] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:17.922 [2024-11-02 14:55:09.955977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:17.922 [2024-11-02 14:55:09.968002] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:17.922 [2024-11-02 14:55:09.968032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:18.180 [2024-11-02 14:55:09.980705] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:18.180 [2024-11-02 14:55:09.980734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:18.180 [2024-11-02 14:55:09.996876] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:18.180 [2024-11-02 14:55:09.996905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:18.180 [2024-11-02 14:55:10.016807] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:18.180 [2024-11-02 14:55:10.016858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:18.180 [2024-11-02 14:55:10.034177] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:18.181 [2024-11-02 14:55:10.034218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:18.181 [2024-11-02 14:55:10.045370] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:18.181 [2024-11-02 14:55:10.045405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:18.181 [2024-11-02 14:55:10.061648] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:18.181 [2024-11-02 14:55:10.061680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:18.181 [2024-11-02 14:55:10.075904] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:18.181 [2024-11-02 14:55:10.075935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:18.181 [2024-11-02 14:55:10.086499] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:18.181 [2024-11-02 14:55:10.086526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:18.181 [2024-11-02 14:55:10.102170] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:18.181 [2024-11-02 14:55:10.102201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:18.181 [2024-11-02 14:55:10.116411] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:18.181 [2024-11-02 14:55:10.116438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:18.181 [2024-11-02 14:55:10.126865] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:18.181 [2024-11-02 14:55:10.126895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:18.181 [2024-11-02 14:55:10.139698] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:18.181 [2024-11-02 14:55:10.139728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:18.181 [2024-11-02 14:55:10.151374] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:18.181 [2024-11-02 14:55:10.151401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:18.181 [2024-11-02 14:55:10.163480] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:18.181 [2024-11-02 14:55:10.163507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:18.181 [2024-11-02 14:55:10.175424] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:18.181 [2024-11-02 14:55:10.175452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:18.181 [2024-11-02 14:55:10.188884] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:18.181 [2024-11-02 14:55:10.188914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:18.181 [2024-11-02 14:55:10.199947] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:18.181 [2024-11-02 14:55:10.199976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:18.181 [2024-11-02 14:55:10.213131] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:18.181 [2024-11-02 14:55:10.213161] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:18.181 [2024-11-02 14:55:10.230332] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:18.181 [2024-11-02 14:55:10.230360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:18.439 [2024-11-02 14:55:10.241229] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:18.439 [2024-11-02 14:55:10.241271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:18.439 [2024-11-02 14:55:10.257323] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:18.439 [2024-11-02 14:55:10.257348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:18.439 [2024-11-02 14:55:10.274015] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:18.439 [2024-11-02 14:55:10.274045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:18.439 [2024-11-02 14:55:10.289103] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:18.439 [2024-11-02 14:55:10.289133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:18.439 [2024-11-02 14:55:10.299456] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:18.439 [2024-11-02 14:55:10.299481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:18.439 [2024-11-02 14:55:10.312370] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:18.439 [2024-11-02 14:55:10.312397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:18.439 [2024-11-02 14:55:10.324343] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:18.439 [2024-11-02 14:55:10.324371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:18.439 [2024-11-02 14:55:10.336061] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:18.439 [2024-11-02 14:55:10.336090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:18.439 [2024-11-02 14:55:10.348252] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:18.439 [2024-11-02 14:55:10.348316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:18.439 [2024-11-02 14:55:10.360786] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:18.439 [2024-11-02 14:55:10.360815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:18.439 10532.00 IOPS, 82.28 MiB/s [2024-11-02T13:55:10.494Z] [2024-11-02 14:55:10.377495] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:18.439 [2024-11-02 14:55:10.377523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:18.439 [2024-11-02 14:55:10.384265] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:18.439 [2024-11-02 14:55:10.384309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:18.439 00:39:18.439 Latency(us) 00:39:18.439 [2024-11-02T13:55:10.494Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:18.439 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:39:18.439 Nvme1n1 
: 5.01 10531.22 82.28 0.00 0.00 12136.34 3131.16 19806.44 00:39:18.439 [2024-11-02T13:55:10.494Z] =================================================================================================================== 00:39:18.439 [2024-11-02T13:55:10.494Z] Total : 10531.22 82.28 0.00 0.00 12136.34 3131.16 19806.44 00:39:18.439 [2024-11-02 14:55:10.392254] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:18.439 [2024-11-02 14:55:10.392306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:18.439 [2024-11-02 14:55:10.400264] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:18.439 [2024-11-02 14:55:10.400306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:18.439 [2024-11-02 14:55:10.408318] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:18.439 [2024-11-02 14:55:10.408370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:18.439 [2024-11-02 14:55:10.416321] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:18.439 [2024-11-02 14:55:10.416372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:18.439 [2024-11-02 14:55:10.424319] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:18.439 [2024-11-02 14:55:10.424370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:18.439 [2024-11-02 14:55:10.432320] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:18.439 [2024-11-02 14:55:10.432383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:18.439 [2024-11-02 14:55:10.440303] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:18.439 [2024-11-02 14:55:10.440352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:18.439 [2024-11-02 14:55:10.448326] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:18.439 [2024-11-02 14:55:10.448376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:18.439 [2024-11-02 14:55:10.456311] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:18.439 [2024-11-02 14:55:10.456360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:18.439 [2024-11-02 14:55:10.464312] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:18.439 [2024-11-02 14:55:10.464359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:18.439 [2024-11-02 14:55:10.472321] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:18.439 [2024-11-02 14:55:10.472373] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:18.439 [2024-11-02 14:55:10.480325] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:18.439 [2024-11-02 14:55:10.480379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:18.439 [2024-11-02 14:55:10.488338] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:18.439 [2024-11-02 14:55:10.488391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:18.698 [2024-11-02 14:55:10.496327] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:18.698 [2024-11-02 14:55:10.496375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:18.698 [2024-11-02 14:55:10.504330] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:18.698 [2024-11-02 14:55:10.504376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:18.698 [2024-11-02 14:55:10.512320] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:18.698 [2024-11-02 14:55:10.512369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:18.698 [2024-11-02 14:55:10.520303] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:18.698 [2024-11-02 14:55:10.520350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:18.698 [2024-11-02 14:55:10.528344] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:18.698 [2024-11-02 14:55:10.528389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:18.698 [2024-11-02 14:55:10.536267] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:18.698 [2024-11-02 14:55:10.536307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:18.698 [2024-11-02 14:55:10.544264] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:18.698 [2024-11-02 14:55:10.544295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:18.698 [2024-11-02 14:55:10.552313] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:18.698 [2024-11-02 14:55:10.552361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:18.698 [2024-11-02 14:55:10.560313] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:18.698 [2024-11-02 14:55:10.560359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:18.698 [2024-11-02 14:55:10.568280] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:18.698 [2024-11-02 14:55:10.568335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:18.698 [2024-11-02 14:55:10.576245] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:18.698 [2024-11-02 14:55:10.576278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:18.698 [2024-11-02 14:55:10.584319] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:18.698 [2024-11-02 14:55:10.584380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:18.698 [2024-11-02 14:55:10.592316] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:18.698 [2024-11-02 14:55:10.592361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:18.698 [2024-11-02 14:55:10.600248] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:18.698 [2024-11-02 14:55:10.600282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:18.698 [2024-11-02 14:55:10.608246] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:18.698 [2024-11-02 14:55:10.608280] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:18.698 [2024-11-02 14:55:10.616246] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:18.698 [2024-11-02 14:55:10.616280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:18.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1573670) - No such process 00:39:18.698 14:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1573670 00:39:18.698 14:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:18.698 14:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:18.698 14:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:18.698 14:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:18.698 14:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:39:18.698 14:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:18.698 14:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:18.698 delay0 00:39:18.698 14:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:18.698 14:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:39:18.698 14:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:18.698 14:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:18.698 14:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:18.698 14:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:39:18.698 [2024-11-02 14:55:10.738588] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:39:26.806 Initializing NVMe Controllers 00:39:26.806 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:26.806 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:39:26.806 Initialization complete. Launching workers. 
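The abort run launched above is the last step of test/nvmf/target/zcopy.sh: the fio namespace is removed, a delay bdev is layered over malloc0 and re-exported as NSID 1, and SPDK's abort example then drives random read/write traffic at the TCP listener while aborting outstanding I/O. A minimal sketch of that same sequence, assuming a running nvmf_tgt with subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420 and using scripts/rpc.py in place of the test's rpc_cmd helper (paths relative to the SPDK tree are assumptions):

# swap the data namespace for a delay bdev wrapped around malloc0 (arguments as traced above)
./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
# short randrw abort workload at queue depth 64 on core 0, same flags as the run above
./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The per-abort counts reported just below (I/O completed, abort submitted, success/unsuccessful) are printed by this example binary at the end of the run.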
00:39:26.806 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 261, failed: 14626 00:39:26.806 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 14782, failed to submit 105 00:39:26.806 success 14670, unsuccessful 112, failed 0 00:39:26.806 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:39:26.806 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:39:26.806 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # nvmfcleanup 00:39:26.806 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:39:26.806 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:26.806 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:39:26.806 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:26.806 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:26.806 rmmod nvme_tcp 00:39:26.806 rmmod nvme_fabrics 00:39:26.806 rmmod nvme_keyring 00:39:26.806 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:26.806 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:39:26.806 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:39:26.806 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@513 -- # '[' -n 1572343 ']' 00:39:26.806 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@514 -- # killprocess 1572343 00:39:26.806 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 1572343 ']' 00:39:26.806 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 1572343 00:39:26.806 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:39:26.806 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:26.806 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1572343 00:39:26.806 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:39:26.806 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:39:26.806 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1572343' 00:39:26.806 killing process with pid 1572343 00:39:26.806 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 1572343 00:39:26.806 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 1572343 00:39:26.806 14:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:39:26.806 14:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:39:26.806 14:55:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:39:26.806 14:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:39:26.806 14:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-save 00:39:26.806 14:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:39:26.806 14:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-restore 00:39:26.806 14:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:26.806 14:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:26.806 14:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:26.806 14:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:26.806 14:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:28.195 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:28.195 00:39:28.195 real 0m28.750s 00:39:28.195 user 0m40.526s 00:39:28.195 sys 0m10.764s 00:39:28.195 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:28.195 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:28.195 ************************************ 00:39:28.195 END TEST nvmf_zcopy 00:39:28.195 ************************************ 00:39:28.195 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:39:28.195 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:39:28.195 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:28.195 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:28.454 ************************************ 00:39:28.454 START TEST nvmf_nmic 00:39:28.454 ************************************ 00:39:28.454 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:39:28.454 * Looking for test storage... 
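run_test essentially wraps each target script in timing and xtrace bookkeeping; the test itself is just the nmic.sh invocation recorded above. To reproduce this single test outside the autotest pipeline, something like the following should be enough (paths as used in this job; assumes root and a phy setup with two connected ports like the one configured below):

cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
sudo ./test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode

The "Looking for / Found test storage" messages that follow appear to be the harness locating scratch space before nmic.sh proper begins.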
00:39:28.454 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:28.454 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:39:28.454 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version 00:39:28.454 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:39:28.454 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:39:28.454 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:28.454 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:28.454 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:28.454 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:39:28.454 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:39:28.454 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:39:28.454 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:39:28.454 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:39:28.454 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:39:28.454 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:39:28.454 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:28.454 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:39:28.454 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:39:28.454 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:28.454 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:28.454 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:39:28.454 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:39:28.454 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:28.454 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:39:28.454 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:39:28.454 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:39:28.454 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:39:28.454 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:28.454 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:39:28.454 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:39:28.454 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:28.454 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:28.454 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:39:28.454 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:28.454 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:39:28.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:28.454 --rc genhtml_branch_coverage=1 00:39:28.454 --rc genhtml_function_coverage=1 00:39:28.454 --rc genhtml_legend=1 00:39:28.454 --rc geninfo_all_blocks=1 00:39:28.454 --rc geninfo_unexecuted_blocks=1 00:39:28.454 00:39:28.454 ' 00:39:28.454 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:39:28.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:28.454 --rc genhtml_branch_coverage=1 00:39:28.454 --rc genhtml_function_coverage=1 00:39:28.454 --rc genhtml_legend=1 00:39:28.454 --rc geninfo_all_blocks=1 00:39:28.454 --rc geninfo_unexecuted_blocks=1 00:39:28.454 00:39:28.454 ' 00:39:28.454 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:39:28.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:28.454 --rc genhtml_branch_coverage=1 00:39:28.454 --rc genhtml_function_coverage=1 00:39:28.454 --rc genhtml_legend=1 00:39:28.454 --rc geninfo_all_blocks=1 00:39:28.454 --rc geninfo_unexecuted_blocks=1 00:39:28.454 00:39:28.454 ' 00:39:28.454 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:39:28.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:28.454 --rc genhtml_branch_coverage=1 00:39:28.454 --rc genhtml_function_coverage=1 00:39:28.454 --rc genhtml_legend=1 00:39:28.454 --rc geninfo_all_blocks=1 00:39:28.454 --rc geninfo_unexecuted_blocks=1 00:39:28.454 00:39:28.454 ' 00:39:28.454 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:28.454 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:39:28.454 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:28.454 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:28.454 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:28.454 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:28.454 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:28.454 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:28.454 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:28.454 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:28.454 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:28.454 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:28.454 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:28.454 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:28.454 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:28.454 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:28.454 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:28.454 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:28.454 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:28.454 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:39:28.454 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:28.454 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:28.454 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:28.455 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:28.455 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:28.455 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:28.455 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:39:28.455 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:28.455 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:39:28.455 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:28.455 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:28.455 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:28.455 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:28.455 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:28.455 14:55:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:28.455 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:28.455 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:28.455 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:28.455 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:28.455 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:28.455 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:28.455 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:39:28.455 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:39:28.455 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:28.455 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@472 -- # prepare_net_devs 00:39:28.455 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@434 -- # local -g is_hw=no 00:39:28.455 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@436 -- # remove_spdk_ns 00:39:28.455 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:28.455 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:28.455 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:28.455 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:39:28.455 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:39:28.455 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:39:28.455 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:30.357 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:30.357 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:39:30.357 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:30.357 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:30.357 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:30.357 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:30.357 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:30.357 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:39:30.357 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:30.357 14:55:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:39:30.357 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:39:30.357 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:39:30.357 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:39:30.357 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:39:30.357 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:39:30.357 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:30.357 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:30.357 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:30.357 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:30.357 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:30.357 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:30.357 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:30.357 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:30.357 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:30.357 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:30.357 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:30.357 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:39:30.357 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:39:30.357 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:39:30.357 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:39:30.357 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:39:30.357 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:39:30.357 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:39:30.357 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:30.357 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:30.357 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:39:30.357 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:39:30.357 14:55:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:30.357 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:30.357 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:39:30.357 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:39:30.357 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:30.357 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:30.357 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:39:30.357 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:39:30.357 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:30.357 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:30.357 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:39:30.357 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:39:30.357 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:39:30.357 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:39:30.357 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:39:30.357 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:30.357 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:39:30.357 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:30.357 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ up == up ]] 00:39:30.358 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:39:30.358 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:30.358 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:30.358 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:30.358 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:39:30.358 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:39:30.358 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:30.358 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:39:30.358 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:30.358 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ up == up ]] 00:39:30.358 14:55:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:39:30.358 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:30.358 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:30.358 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:30.358 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:39:30.358 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:39:30.358 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # is_hw=yes 00:39:30.358 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:39:30.358 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:39:30.358 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:39:30.358 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:30.358 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:30.358 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:30.358 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:30.358 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:30.358 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:30.358 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:30.358 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:30.358 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:30.358 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:30.358 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:30.358 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:30.358 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:30.358 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:30.617 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:30.617 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:30.617 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:30.617 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 
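The nvmf_tcp_init steps traced above build the two-port loopback topology the rest of the test runs on: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and becomes the target-side interface at 10.0.0.2, while cvl_0_1 stays in the default namespace as the initiator at 10.0.0.1. A minimal standalone sketch of the same setup, assuming the interface names and addresses shown in this log (not the common.sh implementation itself), would be:

# Sketch of the topology nvmf_tcp_init sets up above.
TARGET_IF=cvl_0_0        # moved into a namespace; carries the NVMe-oF listener
INITIATOR_IF=cvl_0_1     # stays in the default namespace; the host connects from here
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
# open the NVMe/TCP port on the initiator-side interface (the test's ipts helper
# does the same and tags the rule with an SPDK_NVMF comment for later cleanup)
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                          # initiator -> target sanity check
ip netns exec "$NS" ping -c 1 10.0.0.1      # target -> initiator sanity check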
00:39:30.617 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:30.617 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:30.617 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:30.617 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:30.617 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:30.617 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:30.617 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.330 ms 00:39:30.617 00:39:30.617 --- 10.0.0.2 ping statistics --- 00:39:30.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:30.617 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:39:30.617 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:30.617 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:30.617 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:39:30.617 00:39:30.617 --- 10.0.0.1 ping statistics --- 00:39:30.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:30.617 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:39:30.617 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:30.617 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # return 0 00:39:30.617 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:39:30.617 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:30.617 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:39:30.617 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:39:30.617 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:30.617 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:39:30.617 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:39:30.617 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:39:30.617 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:39:30.617 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:30.617 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:30.617 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@505 -- # nvmfpid=1577061 00:39:30.617 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF --interrupt-mode -m 0xF 00:39:30.617 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@506 -- # waitforlisten 1577061 00:39:30.617 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 1577061 ']' 00:39:30.617 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:30.617 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:30.617 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:30.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:30.617 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:30.617 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:30.617 [2024-11-02 14:55:22.609517] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:30.617 [2024-11-02 14:55:22.610658] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:39:30.617 [2024-11-02 14:55:22.610716] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:30.875 [2024-11-02 14:55:22.678593] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:30.875 [2024-11-02 14:55:22.770686] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:30.875 [2024-11-02 14:55:22.770749] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:30.875 [2024-11-02 14:55:22.770762] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:30.875 [2024-11-02 14:55:22.770773] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:30.875 [2024-11-02 14:55:22.770782] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:30.875 [2024-11-02 14:55:22.770877] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:39:30.875 [2024-11-02 14:55:22.770942] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:39:30.875 [2024-11-02 14:55:22.771008] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:39:30.875 [2024-11-02 14:55:22.771010] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:39:30.875 [2024-11-02 14:55:22.875929] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:30.875 [2024-11-02 14:55:22.876382] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:39:30.875 [2024-11-02 14:55:22.876907] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:30.875 [2024-11-02 14:55:22.877115] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:39:30.875 [2024-11-02 14:55:22.878522] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:30.875 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:30.876 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:39:30.876 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:39:30.876 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:30.876 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:30.876 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:30.876 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:30.876 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:30.876 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:30.876 [2024-11-02 14:55:22.923698] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:31.143 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:31.143 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:31.143 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:31.143 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:31.143 Malloc0 00:39:31.143 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:31.143 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:39:31.143 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:31.143 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:31.143 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:31.143 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:31.143 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:31.143 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:31.143 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:31.143 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:31.143 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:31.143 14:55:22 
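Taken together, the rpc_cmd calls above configure the whole target side of this test: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, and subsystem cnode1 exposing that bdev through a listener on 10.0.0.2:4420. As a sketch (rpc_cmd here is essentially a wrapper around scripts/rpc.py talking to the default /var/tmp/spdk.sock socket), the same sequence issued by hand would look like:

# Hand-issued equivalent of the rpc_cmd sequence above (sketch, flags copied from the log).
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The negative test that follows (test case1) then tries to add the same Malloc0 to a second subsystem, cnode2. That is expected to fail, since the first subsystem already holds an exclusive-write claim on the bdev, and the script records the resulting -32602 JSON-RPC error as the expected result.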
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:31.143 [2024-11-02 14:55:22.975888] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:31.143 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:31.143 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:39:31.143 test case1: single bdev can't be used in multiple subsystems 00:39:31.143 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:39:31.143 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:31.143 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:31.143 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:31.143 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:39:31.143 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:31.143 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:31.143 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:31.143 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:39:31.143 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:39:31.143 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:31.143 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:31.143 [2024-11-02 14:55:22.999643] bdev.c:8193:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:39:31.143 [2024-11-02 14:55:22.999672] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:39:31.143 [2024-11-02 14:55:22.999704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:31.143 request: 00:39:31.143 { 00:39:31.143 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:39:31.143 "namespace": { 00:39:31.143 "bdev_name": "Malloc0", 00:39:31.143 "no_auto_visible": false 00:39:31.143 }, 00:39:31.143 "method": "nvmf_subsystem_add_ns", 00:39:31.143 "req_id": 1 00:39:31.143 } 00:39:31.143 Got JSON-RPC error response 00:39:31.143 response: 00:39:31.143 { 00:39:31.143 "code": -32602, 00:39:31.143 "message": "Invalid parameters" 00:39:31.143 } 00:39:31.143 14:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:39:31.143 14:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:39:31.143 14:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:39:31.143 14:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:39:31.143 Adding namespace failed - expected result. 00:39:31.143 14:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:39:31.143 test case2: host connect to nvmf target in multiple paths 00:39:31.143 14:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:39:31.143 14:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:31.143 14:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:31.143 [2024-11-02 14:55:23.007741] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:39:31.143 14:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:31.143 14:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:39:31.143 14:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:39:31.404 14:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:39:31.404 14:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:39:31.404 14:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:39:31.404 14:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:39:31.404 14:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:39:33.300 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:39:33.300 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:39:33.300 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:39:33.300 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:39:33.300 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:39:33.300 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:39:33.300 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:39:33.558 [global] 00:39:33.558 thread=1 00:39:33.558 invalidate=1 00:39:33.558 rw=write 00:39:33.558 time_based=1 00:39:33.558 runtime=1 00:39:33.558 ioengine=libaio 00:39:33.558 direct=1 00:39:33.558 bs=4096 00:39:33.558 iodepth=1 
00:39:33.558 norandommap=0 00:39:33.558 numjobs=1 00:39:33.558 00:39:33.558 verify_dump=1 00:39:33.558 verify_backlog=512 00:39:33.558 verify_state_save=0 00:39:33.558 do_verify=1 00:39:33.558 verify=crc32c-intel 00:39:33.558 [job0] 00:39:33.558 filename=/dev/nvme0n1 00:39:33.558 Could not set queue depth (nvme0n1) 00:39:33.558 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:33.558 fio-3.35 00:39:33.558 Starting 1 thread 00:39:34.931 00:39:34.931 job0: (groupid=0, jobs=1): err= 0: pid=1577546: Sat Nov 2 14:55:26 2024 00:39:34.931 read: IOPS=21, BW=86.9KiB/s (89.0kB/s)(88.0KiB/1013msec) 00:39:34.931 slat (nsec): min=6241, max=32005, avg=16694.82, stdev=7684.38 00:39:34.931 clat (usec): min=40493, max=42018, avg=41455.09, stdev=546.04 00:39:34.931 lat (usec): min=40499, max=42034, avg=41471.78, stdev=547.99 00:39:34.931 clat percentiles (usec): 00:39:34.931 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:39:34.931 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[42206], 00:39:34.931 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:39:34.931 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:39:34.931 | 99.99th=[42206] 00:39:34.931 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:39:34.931 slat (nsec): min=5261, max=27562, avg=6382.04, stdev=1841.98 00:39:34.931 clat (usec): min=172, max=463, avg=188.16, stdev=14.98 00:39:34.931 lat (usec): min=178, max=491, avg=194.54, stdev=15.80 00:39:34.931 clat percentiles (usec): 00:39:34.931 | 1.00th=[ 174], 5.00th=[ 178], 10.00th=[ 180], 20.00th=[ 182], 00:39:34.931 | 30.00th=[ 184], 40.00th=[ 186], 50.00th=[ 186], 60.00th=[ 188], 00:39:34.931 | 70.00th=[ 190], 80.00th=[ 194], 90.00th=[ 200], 95.00th=[ 204], 00:39:34.931 | 99.00th=[ 215], 99.50th=[ 237], 99.90th=[ 465], 99.95th=[ 465], 00:39:34.931 | 99.99th=[ 465] 00:39:34.931 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:39:34.931 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:34.931 lat (usec) : 250=95.69%, 500=0.19% 00:39:34.931 lat (msec) : 50=4.12% 00:39:34.931 cpu : usr=0.00%, sys=0.49%, ctx=534, majf=0, minf=1 00:39:34.931 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:34.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:34.931 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:34.931 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:34.931 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:34.931 00:39:34.931 Run status group 0 (all jobs): 00:39:34.931 READ: bw=86.9KiB/s (89.0kB/s), 86.9KiB/s-86.9KiB/s (89.0kB/s-89.0kB/s), io=88.0KiB (90.1kB), run=1013-1013msec 00:39:34.931 WRITE: bw=2022KiB/s (2070kB/s), 2022KiB/s-2022KiB/s (2070kB/s-2070kB/s), io=2048KiB (2097kB), run=1013-1013msec 00:39:34.931 00:39:34.931 Disk stats (read/write): 00:39:34.931 nvme0n1: ios=69/512, merge=0/0, ticks=820/88, in_queue=908, util=91.98% 00:39:34.931 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:39:34.931 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:39:34.931 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:39:34.931 14:55:26 
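The fio-wrapper run above expanded into a single one-second, queue-depth-1 libaio write job with crc32c verification against /dev/nvme0n1. The [global]/[job0] dump in the log corresponds to a job file roughly like the following (a reconstruction from the logged parameters, not the wrapper's literal output file):

# Re-run the same workload by hand; parameters copied from the job dump above.
cat > nmic-write.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=write
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
EOF
fio nmic-write.fio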
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:39:34.931 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:39:34.931 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:34.931 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:39:34.931 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:34.931 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:39:34.931 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:39:34.931 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:39:34.931 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # nvmfcleanup 00:39:34.931 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:39:34.931 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:34.931 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:39:34.931 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:34.931 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:34.931 rmmod nvme_tcp 00:39:34.931 rmmod nvme_fabrics 00:39:34.931 rmmod nvme_keyring 00:39:34.931 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:34.931 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:39:34.931 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:39:34.931 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@513 -- # '[' -n 1577061 ']' 00:39:34.931 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@514 -- # killprocess 1577061 00:39:34.931 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 1577061 ']' 00:39:34.931 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 1577061 00:39:34.931 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:39:34.931 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:34.931 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1577061 00:39:35.190 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:35.190 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:39:35.190 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1577061' 00:39:35.190 killing process with pid 1577061 00:39:35.190 14:55:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 1577061 00:39:35.190 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 1577061 00:39:35.448 14:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:39:35.448 14:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:39:35.448 14:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:39:35.448 14:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:39:35.448 14:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-save 00:39:35.448 14:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:39:35.448 14:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-restore 00:39:35.448 14:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:35.448 14:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:35.448 14:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:35.448 14:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:35.448 14:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:37.347 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:37.347 00:39:37.347 real 0m9.074s 00:39:37.347 user 0m16.839s 00:39:37.347 sys 0m3.239s 00:39:37.347 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:37.347 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:37.347 ************************************ 00:39:37.347 END TEST nvmf_nmic 00:39:37.347 ************************************ 00:39:37.347 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:39:37.347 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:39:37.347 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:37.347 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:37.347 ************************************ 00:39:37.347 START TEST nvmf_fio_target 00:39:37.347 ************************************ 00:39:37.347 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:39:37.606 * Looking for test storage... 
00:39:37.606 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:37.606 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:39:37.606 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:39:37.606 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:39:37.606 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:39:37.606 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:37.606 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:37.606 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:37.606 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:39:37.606 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:39:37.606 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:39:37.606 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:39:37.606 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:39:37.606 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:39:37.606 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:39:37.606 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:37.606 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:39:37.606 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:39:37.606 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:37.606 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:37.606 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:39:37.606 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:39:37.606 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:37.606 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:39:37.606 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:39:37.606 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:39:37.606 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:39:37.606 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:37.606 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:39:37.606 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:39:37.606 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:37.606 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:37.606 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:39:37.606 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:37.606 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:39:37.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:37.606 --rc genhtml_branch_coverage=1 00:39:37.606 --rc genhtml_function_coverage=1 00:39:37.606 --rc genhtml_legend=1 00:39:37.606 --rc geninfo_all_blocks=1 00:39:37.606 --rc geninfo_unexecuted_blocks=1 00:39:37.606 00:39:37.606 ' 00:39:37.606 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:39:37.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:37.606 --rc genhtml_branch_coverage=1 00:39:37.606 --rc genhtml_function_coverage=1 00:39:37.606 --rc genhtml_legend=1 00:39:37.606 --rc geninfo_all_blocks=1 00:39:37.606 --rc geninfo_unexecuted_blocks=1 00:39:37.606 00:39:37.606 ' 00:39:37.606 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:39:37.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:37.606 --rc genhtml_branch_coverage=1 00:39:37.606 --rc genhtml_function_coverage=1 00:39:37.606 --rc genhtml_legend=1 00:39:37.606 --rc geninfo_all_blocks=1 00:39:37.606 --rc geninfo_unexecuted_blocks=1 00:39:37.606 00:39:37.606 ' 00:39:37.606 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:39:37.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:37.606 --rc genhtml_branch_coverage=1 00:39:37.606 --rc genhtml_function_coverage=1 00:39:37.606 --rc genhtml_legend=1 00:39:37.606 --rc geninfo_all_blocks=1 00:39:37.606 --rc geninfo_unexecuted_blocks=1 00:39:37.606 
00:39:37.606 ' 00:39:37.606 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:37.606 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:39:37.606 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:37.606 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:37.606 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:37.606 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:37.606 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:37.606 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:37.606 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:37.606 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:37.606 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:37.606 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:37.606 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:37.606 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:37.606 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:37.606 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:37.606 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:37.606 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:37.606 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:37.606 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:39:37.606 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:37.606 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:37.606 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:37.607 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.607 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.607 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.607 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:39:37.607 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.607 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:39:37.607 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:37.607 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:37.607 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:37.607 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:37.607 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:39:37.607 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:37.607 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:37.607 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:37.607 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:37.607 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:37.607 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:37.607 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:37.607 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:37.607 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:39:37.607 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:39:37.607 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:37.607 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:39:37.607 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:39:37.607 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:39:37.607 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:37.607 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:37.607 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:37.607 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:39:37.607 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:39:37.607 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:39:37.607 14:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:39.506 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:39.506 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:39:39.506 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:39.506 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:39.506 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:39.506 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:39.506 14:55:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:39.506 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:39:39.506 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@364 -- # 
for pci in "${pci_devs[@]}" 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:39.507 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:39.507 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:39.507 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:39:39.507 14:55:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:39.507 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # is_hw=yes 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:39.507 14:55:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:39.507 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:39.766 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:39.766 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:39.766 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:39.766 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:39.766 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:39.766 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:39.766 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:39.766 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:39.766 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:39.766 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:39:39.766 00:39:39.766 --- 10.0.0.2 ping statistics --- 00:39:39.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:39.766 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:39:39.766 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:39.766 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:39.766 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:39:39.766 00:39:39.766 --- 10.0.0.1 ping statistics --- 00:39:39.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:39.766 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:39:39.766 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:39.766 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # return 0 00:39:39.766 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:39:39.766 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:39.766 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:39:39.766 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:39:39.766 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:39.766 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:39:39.766 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:39:39.766 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:39:39.766 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:39:39.766 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:39.766 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:39.766 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@505 -- # nvmfpid=1579642 00:39:39.766 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:39:39.766 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@506 -- # waitforlisten 1579642 00:39:39.767 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 1579642 ']' 00:39:39.767 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:39.767 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:39.767 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:39.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
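For reference, the namespace and firewall wiring traced above, together with the target launch that follows, can be reproduced by hand with a sketch along these lines. This is a minimal sketch, assuming a root shell at the top of an SPDK checkout; the interface names (cvl_0_0/cvl_0_1), the 10.0.0.0/24 addresses, port 4420 and the nvmf_tgt options (-i 0 -e 0xFFFF --interrupt-mode -m 0xF) are the values used in this run, and the polling loop at the end is only an illustrative stand-in for the harness's waitforlisten helper, not the script itself.
# Target-side namespace wiring as traced above (run as root from the SPDK repo).
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# Sanity-check both directions before starting the target.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# Start nvmf_tgt inside the namespace in interrupt mode, then poll its RPC
# socket until it answers (illustrative stand-in for waitforlisten).
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done
Note that /var/tmp/spdk.sock is a Unix-domain socket, so rpc.py can stay in the default network namespace even though the target's TCP listener lives inside cvl_0_0_ns_spdk; that is also why the trace above never wraps rpc.py in ip netns exec.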
00:39:39.767 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:39.767 14:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:39.767 [2024-11-02 14:55:31.738322] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:39.767 [2024-11-02 14:55:31.739396] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:39:39.767 [2024-11-02 14:55:31.739450] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:39.767 [2024-11-02 14:55:31.809700] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:40.025 [2024-11-02 14:55:31.901144] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:40.025 [2024-11-02 14:55:31.901210] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:40.025 [2024-11-02 14:55:31.901226] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:40.025 [2024-11-02 14:55:31.901249] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:40.025 [2024-11-02 14:55:31.901268] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:40.025 [2024-11-02 14:55:31.901352] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:39:40.025 [2024-11-02 14:55:31.901425] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:39:40.025 [2024-11-02 14:55:31.901516] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:39:40.025 [2024-11-02 14:55:31.901519] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:39:40.025 [2024-11-02 14:55:32.006360] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:40.025 [2024-11-02 14:55:32.006922] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:39:40.025 [2024-11-02 14:55:32.007568] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:40.025 [2024-11-02 14:55:32.007812] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:39:40.025 [2024-11-02 14:55:32.009484] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
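The RPC-driven topology that target/fio.sh builds next (TCP transport, malloc bdevs, one raid0 and one concat array, a single subsystem with its namespaces and listener, then the host-side connect and fio run) can be reproduced roughly as sketched below. All values are taken from this run; bdev names Malloc0..Malloc6 are the defaults rpc.py assigns to unnamed malloc bdevs, and the --hostnqn/--hostid flags the harness passes to nvme connect are dropped here for brevity.
# Transport plus backing bdevs (each malloc bdev: 64 MB, 512-byte blocks).
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512          # Malloc0
./scripts/rpc.py bdev_malloc_create 64 512          # Malloc1
./scripts/rpc.py bdev_malloc_create 64 512          # Malloc2
./scripts/rpc.py bdev_malloc_create 64 512          # Malloc3
./scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
./scripts/rpc.py bdev_malloc_create 64 512          # Malloc4
./scripts/rpc.py bdev_malloc_create 64 512          # Malloc5
./scripts/rpc.py bdev_malloc_create 64 512          # Malloc6
./scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
# One subsystem, four namespaces, one TCP listener on the namespaced interface.
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
# Connect from the initiator side and run the same 4k write job.
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
./scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
The harness additionally waits for four block devices carrying the SPDKISFASTANDAWESOME serial to appear in lsblk before launching fio, which is what the waitforserial loop in the trace below is doing.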
00:39:40.025 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:40.025 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:39:40.025 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:39:40.025 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:40.025 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:40.025 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:40.025 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:39:40.311 [2024-11-02 14:55:32.326303] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:40.594 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:40.594 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:39:40.594 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:41.160 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:39:41.160 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:41.418 14:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:39:41.418 14:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:41.677 14:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:39:41.677 14:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:39:41.935 14:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:42.193 14:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:39:42.193 14:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:42.451 14:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:39:42.451 14:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:43.017 14:55:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:39:43.017 14:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:39:43.275 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:39:43.533 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:39:43.533 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:43.791 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:39:43.791 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:39:44.049 14:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:44.306 [2024-11-02 14:55:36.326408] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:44.306 14:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:39:44.871 14:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:39:44.871 14:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:39:45.130 14:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:39:45.130 14:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:39:45.130 14:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:39:45.130 14:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:39:45.130 14:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:39:45.130 14:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:39:47.672 14:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:39:47.672 14:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o 
NAME,SERIAL 00:39:47.672 14:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:39:47.672 14:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:39:47.672 14:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:39:47.672 14:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:39:47.672 14:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:39:47.672 [global] 00:39:47.672 thread=1 00:39:47.672 invalidate=1 00:39:47.672 rw=write 00:39:47.672 time_based=1 00:39:47.672 runtime=1 00:39:47.672 ioengine=libaio 00:39:47.672 direct=1 00:39:47.672 bs=4096 00:39:47.672 iodepth=1 00:39:47.672 norandommap=0 00:39:47.672 numjobs=1 00:39:47.672 00:39:47.672 verify_dump=1 00:39:47.672 verify_backlog=512 00:39:47.672 verify_state_save=0 00:39:47.672 do_verify=1 00:39:47.672 verify=crc32c-intel 00:39:47.672 [job0] 00:39:47.672 filename=/dev/nvme0n1 00:39:47.672 [job1] 00:39:47.672 filename=/dev/nvme0n2 00:39:47.672 [job2] 00:39:47.672 filename=/dev/nvme0n3 00:39:47.672 [job3] 00:39:47.672 filename=/dev/nvme0n4 00:39:47.672 Could not set queue depth (nvme0n1) 00:39:47.672 Could not set queue depth (nvme0n2) 00:39:47.672 Could not set queue depth (nvme0n3) 00:39:47.672 Could not set queue depth (nvme0n4) 00:39:47.672 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:47.672 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:47.672 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:47.672 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:47.672 fio-3.35 00:39:47.672 Starting 4 threads 00:39:48.604 00:39:48.605 job0: (groupid=0, jobs=1): err= 0: pid=1580707: Sat Nov 2 14:55:40 2024 00:39:48.605 read: IOPS=1168, BW=4675KiB/s (4788kB/s)(4680KiB/1001msec) 00:39:48.605 slat (nsec): min=5763, max=44853, avg=13286.82, stdev=5789.42 00:39:48.605 clat (usec): min=364, max=41764, avg=470.63, stdev=1208.67 00:39:48.605 lat (usec): min=370, max=41770, avg=483.91, stdev=1208.52 00:39:48.605 clat percentiles (usec): 00:39:48.605 | 1.00th=[ 371], 5.00th=[ 392], 10.00th=[ 400], 20.00th=[ 408], 00:39:48.605 | 30.00th=[ 416], 40.00th=[ 424], 50.00th=[ 433], 60.00th=[ 441], 00:39:48.605 | 70.00th=[ 449], 80.00th=[ 457], 90.00th=[ 478], 95.00th=[ 494], 00:39:48.605 | 99.00th=[ 523], 99.50th=[ 537], 99.90th=[ 578], 99.95th=[41681], 00:39:48.605 | 99.99th=[41681] 00:39:48.605 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:39:48.605 slat (usec): min=6, max=1514, avg=15.60, stdev=45.77 00:39:48.605 clat (usec): min=203, max=2559, avg=260.27, stdev=78.11 00:39:48.605 lat (usec): min=213, max=2606, avg=275.87, stdev=92.15 00:39:48.605 clat percentiles (usec): 00:39:48.605 | 1.00th=[ 208], 5.00th=[ 215], 10.00th=[ 219], 20.00th=[ 227], 00:39:48.605 | 30.00th=[ 235], 40.00th=[ 243], 50.00th=[ 253], 60.00th=[ 262], 00:39:48.605 | 70.00th=[ 269], 80.00th=[ 277], 90.00th=[ 297], 95.00th=[ 326], 00:39:48.605 | 99.00th=[ 420], 99.50th=[ 
474], 99.90th=[ 988], 99.95th=[ 2573], 00:39:48.605 | 99.99th=[ 2573] 00:39:48.605 bw ( KiB/s): min= 6536, max= 6536, per=41.49%, avg=6536.00, stdev= 0.00, samples=1 00:39:48.605 iops : min= 1634, max= 1634, avg=1634.00, stdev= 0.00, samples=1 00:39:48.605 lat (usec) : 250=26.76%, 500=71.10%, 750=1.92%, 1000=0.15% 00:39:48.605 lat (msec) : 4=0.04%, 50=0.04% 00:39:48.605 cpu : usr=2.80%, sys=4.80%, ctx=2710, majf=0, minf=2 00:39:48.605 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:48.605 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:48.605 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:48.605 issued rwts: total=1170,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:48.605 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:48.605 job1: (groupid=0, jobs=1): err= 0: pid=1580708: Sat Nov 2 14:55:40 2024 00:39:48.605 read: IOPS=22, BW=88.5KiB/s (90.6kB/s)(92.0KiB/1040msec) 00:39:48.605 slat (nsec): min=7967, max=48084, avg=25005.30, stdev=11117.92 00:39:48.605 clat (usec): min=732, max=42220, avg=39407.18, stdev=8440.68 00:39:48.605 lat (usec): min=750, max=42228, avg=39432.18, stdev=8442.13 00:39:48.605 clat percentiles (usec): 00:39:48.605 | 1.00th=[ 734], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:39:48.605 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:39:48.605 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:39:48.605 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:39:48.605 | 99.99th=[42206] 00:39:48.605 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets 00:39:48.605 slat (nsec): min=8156, max=33253, avg=9561.02, stdev=2169.98 00:39:48.605 clat (usec): min=202, max=452, avg=245.84, stdev=22.51 00:39:48.605 lat (usec): min=211, max=476, avg=255.40, stdev=22.77 00:39:48.605 clat percentiles (usec): 00:39:48.605 | 1.00th=[ 210], 5.00th=[ 221], 10.00th=[ 225], 20.00th=[ 231], 00:39:48.605 | 30.00th=[ 235], 40.00th=[ 239], 50.00th=[ 243], 60.00th=[ 247], 00:39:48.605 | 70.00th=[ 253], 80.00th=[ 258], 90.00th=[ 269], 95.00th=[ 277], 00:39:48.605 | 99.00th=[ 322], 99.50th=[ 379], 99.90th=[ 453], 99.95th=[ 453], 00:39:48.605 | 99.99th=[ 453] 00:39:48.605 bw ( KiB/s): min= 4096, max= 4096, per=26.00%, avg=4096.00, stdev= 0.00, samples=1 00:39:48.605 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:48.605 lat (usec) : 250=62.24%, 500=33.46%, 750=0.19% 00:39:48.605 lat (msec) : 50=4.11% 00:39:48.605 cpu : usr=0.19%, sys=0.77%, ctx=536, majf=0, minf=2 00:39:48.605 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:48.605 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:48.605 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:48.605 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:48.605 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:48.605 job2: (groupid=0, jobs=1): err= 0: pid=1580709: Sat Nov 2 14:55:40 2024 00:39:48.605 read: IOPS=514, BW=2059KiB/s (2108kB/s)(2108KiB/1024msec) 00:39:48.605 slat (nsec): min=6405, max=64252, avg=18455.71, stdev=9600.21 00:39:48.605 clat (usec): min=335, max=41443, avg=1389.73, stdev=6054.26 00:39:48.605 lat (usec): min=347, max=41457, avg=1408.19, stdev=6054.72 00:39:48.605 clat percentiles (usec): 00:39:48.605 | 1.00th=[ 343], 5.00th=[ 355], 10.00th=[ 396], 20.00th=[ 408], 00:39:48.605 | 30.00th=[ 424], 40.00th=[ 
453], 50.00th=[ 474], 60.00th=[ 486], 00:39:48.605 | 70.00th=[ 502], 80.00th=[ 523], 90.00th=[ 562], 95.00th=[ 594], 00:39:48.605 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:39:48.605 | 99.99th=[41681] 00:39:48.605 write: IOPS=1000, BW=4000KiB/s (4096kB/s)(4096KiB/1024msec); 0 zone resets 00:39:48.605 slat (nsec): min=6754, max=60942, avg=13238.42, stdev=6672.81 00:39:48.605 clat (usec): min=200, max=1064, avg=255.37, stdev=59.86 00:39:48.605 lat (usec): min=212, max=1076, avg=268.61, stdev=60.86 00:39:48.605 clat percentiles (usec): 00:39:48.605 | 1.00th=[ 206], 5.00th=[ 215], 10.00th=[ 221], 20.00th=[ 227], 00:39:48.605 | 30.00th=[ 233], 40.00th=[ 239], 50.00th=[ 243], 60.00th=[ 247], 00:39:48.605 | 70.00th=[ 255], 80.00th=[ 269], 90.00th=[ 293], 95.00th=[ 338], 00:39:48.605 | 99.00th=[ 453], 99.50th=[ 515], 99.90th=[ 930], 99.95th=[ 1057], 00:39:48.605 | 99.99th=[ 1057] 00:39:48.605 bw ( KiB/s): min= 640, max= 7552, per=26.00%, avg=4096.00, stdev=4887.52, samples=2 00:39:48.605 iops : min= 160, max= 1888, avg=1024.00, stdev=1221.88, samples=2 00:39:48.605 lat (usec) : 250=42.62%, 500=46.68%, 750=9.67%, 1000=0.19% 00:39:48.605 lat (msec) : 2=0.06%, 50=0.77% 00:39:48.605 cpu : usr=0.98%, sys=2.44%, ctx=1553, majf=0, minf=1 00:39:48.605 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:48.605 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:48.605 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:48.605 issued rwts: total=527,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:48.605 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:48.605 job3: (groupid=0, jobs=1): err= 0: pid=1580710: Sat Nov 2 14:55:40 2024 00:39:48.605 read: IOPS=518, BW=2075KiB/s (2125kB/s)(2108KiB/1016msec) 00:39:48.605 slat (nsec): min=4784, max=66076, avg=15287.72, stdev=8044.14 00:39:48.605 clat (usec): min=321, max=41222, avg=1411.15, stdev=6299.04 00:39:48.605 lat (usec): min=334, max=41242, avg=1426.43, stdev=6300.09 00:39:48.605 clat percentiles (usec): 00:39:48.605 | 1.00th=[ 343], 5.00th=[ 355], 10.00th=[ 363], 20.00th=[ 379], 00:39:48.605 | 30.00th=[ 392], 40.00th=[ 400], 50.00th=[ 408], 60.00th=[ 416], 00:39:48.605 | 70.00th=[ 429], 80.00th=[ 445], 90.00th=[ 465], 95.00th=[ 486], 00:39:48.605 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:48.605 | 99.99th=[41157] 00:39:48.605 write: IOPS=1007, BW=4031KiB/s (4128kB/s)(4096KiB/1016msec); 0 zone resets 00:39:48.605 slat (nsec): min=6805, max=44614, avg=13530.88, stdev=6306.84 00:39:48.605 clat (usec): min=197, max=2071, avg=238.33, stdev=73.90 00:39:48.605 lat (usec): min=207, max=2089, avg=251.86, stdev=75.47 00:39:48.605 clat percentiles (usec): 00:39:48.605 | 1.00th=[ 206], 5.00th=[ 210], 10.00th=[ 212], 20.00th=[ 217], 00:39:48.605 | 30.00th=[ 221], 40.00th=[ 225], 50.00th=[ 227], 60.00th=[ 231], 00:39:48.605 | 70.00th=[ 235], 80.00th=[ 241], 90.00th=[ 255], 95.00th=[ 326], 00:39:48.605 | 99.00th=[ 392], 99.50th=[ 396], 99.90th=[ 1237], 99.95th=[ 2073], 00:39:48.605 | 99.99th=[ 2073] 00:39:48.605 bw ( KiB/s): min= 8192, max= 8192, per=52.00%, avg=8192.00, stdev= 0.00, samples=1 00:39:48.605 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:39:48.605 lat (usec) : 250=57.77%, 500=40.43%, 750=0.84% 00:39:48.605 lat (msec) : 2=0.06%, 4=0.06%, 50=0.84% 00:39:48.605 cpu : usr=0.79%, sys=2.46%, ctx=1552, majf=0, minf=1 00:39:48.605 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 
32=0.0%, >=64=0.0% 00:39:48.605 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:48.605 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:48.605 issued rwts: total=527,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:48.605 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:48.605 00:39:48.605 Run status group 0 (all jobs): 00:39:48.605 READ: bw=8642KiB/s (8850kB/s), 88.5KiB/s-4675KiB/s (90.6kB/s-4788kB/s), io=8988KiB (9204kB), run=1001-1040msec 00:39:48.605 WRITE: bw=15.4MiB/s (16.1MB/s), 1969KiB/s-6138KiB/s (2016kB/s-6285kB/s), io=16.0MiB (16.8MB), run=1001-1040msec 00:39:48.605 00:39:48.605 Disk stats (read/write): 00:39:48.605 nvme0n1: ios=1075/1211, merge=0/0, ticks=534/311, in_queue=845, util=85.67% 00:39:48.605 nvme0n2: ios=67/512, merge=0/0, ticks=1300/127, in_queue=1427, util=89.83% 00:39:48.605 nvme0n3: ios=545/1024, merge=0/0, ticks=1418/245, in_queue=1663, util=93.84% 00:39:48.605 nvme0n4: ios=544/1024, merge=0/0, ticks=1473/241, in_queue=1714, util=94.53% 00:39:48.605 14:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:39:48.605 [global] 00:39:48.605 thread=1 00:39:48.605 invalidate=1 00:39:48.605 rw=randwrite 00:39:48.605 time_based=1 00:39:48.605 runtime=1 00:39:48.605 ioengine=libaio 00:39:48.605 direct=1 00:39:48.605 bs=4096 00:39:48.605 iodepth=1 00:39:48.605 norandommap=0 00:39:48.605 numjobs=1 00:39:48.605 00:39:48.605 verify_dump=1 00:39:48.605 verify_backlog=512 00:39:48.605 verify_state_save=0 00:39:48.605 do_verify=1 00:39:48.605 verify=crc32c-intel 00:39:48.605 [job0] 00:39:48.605 filename=/dev/nvme0n1 00:39:48.605 [job1] 00:39:48.605 filename=/dev/nvme0n2 00:39:48.605 [job2] 00:39:48.605 filename=/dev/nvme0n3 00:39:48.605 [job3] 00:39:48.605 filename=/dev/nvme0n4 00:39:48.863 Could not set queue depth (nvme0n1) 00:39:48.863 Could not set queue depth (nvme0n2) 00:39:48.863 Could not set queue depth (nvme0n3) 00:39:48.863 Could not set queue depth (nvme0n4) 00:39:48.863 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:48.863 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:48.863 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:48.863 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:48.863 fio-3.35 00:39:48.863 Starting 4 threads 00:39:50.237 00:39:50.237 job0: (groupid=0, jobs=1): err= 0: pid=1580940: Sat Nov 2 14:55:42 2024 00:39:50.237 read: IOPS=20, BW=83.8KiB/s (85.8kB/s)(84.0KiB/1002msec) 00:39:50.237 slat (nsec): min=7300, max=14750, avg=13631.81, stdev=1505.27 00:39:50.237 clat (usec): min=40941, max=42076, avg=41846.69, stdev=357.31 00:39:50.237 lat (usec): min=40955, max=42090, avg=41860.32, stdev=357.16 00:39:50.237 clat percentiles (usec): 00:39:50.237 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:39:50.237 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:39:50.237 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:39:50.237 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:39:50.237 | 99.99th=[42206] 00:39:50.237 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 
00:39:50.237 slat (nsec): min=5916, max=26866, avg=7484.80, stdev=2710.99 00:39:50.237 clat (usec): min=193, max=376, avg=229.99, stdev=17.95 00:39:50.237 lat (usec): min=199, max=392, avg=237.48, stdev=18.58 00:39:50.237 clat percentiles (usec): 00:39:50.237 | 1.00th=[ 202], 5.00th=[ 208], 10.00th=[ 212], 20.00th=[ 217], 00:39:50.237 | 30.00th=[ 221], 40.00th=[ 225], 50.00th=[ 229], 60.00th=[ 233], 00:39:50.237 | 70.00th=[ 237], 80.00th=[ 241], 90.00th=[ 249], 95.00th=[ 260], 00:39:50.237 | 99.00th=[ 277], 99.50th=[ 330], 99.90th=[ 375], 99.95th=[ 375], 00:39:50.237 | 99.99th=[ 375] 00:39:50.237 bw ( KiB/s): min= 4096, max= 4096, per=23.02%, avg=4096.00, stdev= 0.00, samples=1 00:39:50.237 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:50.237 lat (usec) : 250=87.62%, 500=8.44% 00:39:50.237 lat (msec) : 50=3.94% 00:39:50.237 cpu : usr=0.30%, sys=0.30%, ctx=533, majf=0, minf=2 00:39:50.237 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:50.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.237 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.237 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:50.237 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:50.237 job1: (groupid=0, jobs=1): err= 0: pid=1580941: Sat Nov 2 14:55:42 2024 00:39:50.237 read: IOPS=1335, BW=5341KiB/s (5469kB/s)(5384KiB/1008msec) 00:39:50.237 slat (nsec): min=4523, max=40775, avg=8691.28, stdev=4549.00 00:39:50.237 clat (usec): min=281, max=41249, avg=475.97, stdev=2209.69 00:39:50.237 lat (usec): min=287, max=41255, avg=484.66, stdev=2209.61 00:39:50.237 clat percentiles (usec): 00:39:50.237 | 1.00th=[ 293], 5.00th=[ 297], 10.00th=[ 302], 20.00th=[ 314], 00:39:50.237 | 30.00th=[ 326], 40.00th=[ 338], 50.00th=[ 351], 60.00th=[ 363], 00:39:50.237 | 70.00th=[ 375], 80.00th=[ 392], 90.00th=[ 404], 95.00th=[ 429], 00:39:50.237 | 99.00th=[ 537], 99.50th=[ 562], 99.90th=[41157], 99.95th=[41157], 00:39:50.237 | 99.99th=[41157] 00:39:50.237 write: IOPS=1523, BW=6095KiB/s (6242kB/s)(6144KiB/1008msec); 0 zone resets 00:39:50.237 slat (nsec): min=6235, max=35502, avg=8432.80, stdev=2876.33 00:39:50.237 clat (usec): min=171, max=428, avg=217.84, stdev=25.99 00:39:50.237 lat (usec): min=178, max=438, avg=226.27, stdev=26.45 00:39:50.237 clat percentiles (usec): 00:39:50.237 | 1.00th=[ 178], 5.00th=[ 186], 10.00th=[ 188], 20.00th=[ 196], 00:39:50.237 | 30.00th=[ 202], 40.00th=[ 208], 50.00th=[ 215], 60.00th=[ 223], 00:39:50.237 | 70.00th=[ 231], 80.00th=[ 239], 90.00th=[ 249], 95.00th=[ 260], 00:39:50.237 | 99.00th=[ 289], 99.50th=[ 318], 99.90th=[ 412], 99.95th=[ 429], 00:39:50.237 | 99.99th=[ 429] 00:39:50.237 bw ( KiB/s): min= 4096, max= 8192, per=34.53%, avg=6144.00, stdev=2896.31, samples=2 00:39:50.237 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:39:50.237 lat (usec) : 250=48.72%, 500=50.49%, 750=0.62% 00:39:50.237 lat (msec) : 2=0.03%, 50=0.14% 00:39:50.237 cpu : usr=1.59%, sys=2.98%, ctx=2883, majf=0, minf=1 00:39:50.237 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:50.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.237 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.237 issued rwts: total=1346,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:50.237 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:50.237 job2: (groupid=0, jobs=1): err= 0: 
pid=1580942: Sat Nov 2 14:55:42 2024 00:39:50.237 read: IOPS=1027, BW=4112KiB/s (4211kB/s)(4116KiB/1001msec) 00:39:50.237 slat (nsec): min=4663, max=40828, avg=12838.14, stdev=3573.67 00:39:50.237 clat (usec): min=318, max=41434, avg=575.91, stdev=2203.34 00:39:50.237 lat (usec): min=324, max=41440, avg=588.75, stdev=2203.27 00:39:50.237 clat percentiles (usec): 00:39:50.237 | 1.00th=[ 326], 5.00th=[ 343], 10.00th=[ 363], 20.00th=[ 383], 00:39:50.237 | 30.00th=[ 396], 40.00th=[ 412], 50.00th=[ 437], 60.00th=[ 465], 00:39:50.237 | 70.00th=[ 494], 80.00th=[ 529], 90.00th=[ 578], 95.00th=[ 635], 00:39:50.237 | 99.00th=[ 717], 99.50th=[ 857], 99.90th=[41157], 99.95th=[41681], 00:39:50.237 | 99.99th=[41681] 00:39:50.237 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:39:50.237 slat (nsec): min=6090, max=41395, avg=8750.17, stdev=3803.12 00:39:50.237 clat (usec): min=190, max=641, avg=243.41, stdev=50.21 00:39:50.237 lat (usec): min=197, max=649, avg=252.16, stdev=51.25 00:39:50.237 clat percentiles (usec): 00:39:50.237 | 1.00th=[ 194], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 208], 00:39:50.237 | 30.00th=[ 219], 40.00th=[ 225], 50.00th=[ 231], 60.00th=[ 239], 00:39:50.237 | 70.00th=[ 245], 80.00th=[ 253], 90.00th=[ 306], 95.00th=[ 379], 00:39:50.237 | 99.00th=[ 420], 99.50th=[ 445], 99.90th=[ 474], 99.95th=[ 644], 00:39:50.237 | 99.99th=[ 644] 00:39:50.238 bw ( KiB/s): min= 6554, max= 6554, per=36.84%, avg=6554.00, stdev= 0.00, samples=1 00:39:50.238 iops : min= 1638, max= 1638, avg=1638.00, stdev= 0.00, samples=1 00:39:50.238 lat (usec) : 250=46.00%, 500=42.53%, 750=11.15%, 1000=0.12% 00:39:50.238 lat (msec) : 2=0.08%, 50=0.12% 00:39:50.238 cpu : usr=1.20%, sys=2.80%, ctx=2567, majf=0, minf=1 00:39:50.238 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:50.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.238 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.238 issued rwts: total=1029,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:50.238 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:50.238 job3: (groupid=0, jobs=1): err= 0: pid=1580943: Sat Nov 2 14:55:42 2024 00:39:50.238 read: IOPS=505, BW=2023KiB/s (2072kB/s)(2096KiB/1036msec) 00:39:50.238 slat (nsec): min=5473, max=39205, avg=14205.34, stdev=3804.92 00:39:50.238 clat (usec): min=381, max=41362, avg=1428.83, stdev=6066.88 00:39:50.238 lat (usec): min=395, max=41369, avg=1443.03, stdev=6066.78 00:39:50.238 clat percentiles (usec): 00:39:50.238 | 1.00th=[ 396], 5.00th=[ 424], 10.00th=[ 437], 20.00th=[ 461], 00:39:50.238 | 30.00th=[ 474], 40.00th=[ 486], 50.00th=[ 494], 60.00th=[ 506], 00:39:50.238 | 70.00th=[ 529], 80.00th=[ 553], 90.00th=[ 586], 95.00th=[ 611], 00:39:50.238 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:50.238 | 99.99th=[41157] 00:39:50.238 write: IOPS=988, BW=3954KiB/s (4049kB/s)(4096KiB/1036msec); 0 zone resets 00:39:50.238 slat (nsec): min=6469, max=32086, avg=9037.93, stdev=3281.49 00:39:50.238 clat (usec): min=200, max=529, avg=259.42, stdev=47.22 00:39:50.238 lat (usec): min=209, max=542, avg=268.45, stdev=47.95 00:39:50.238 clat percentiles (usec): 00:39:50.238 | 1.00th=[ 210], 5.00th=[ 219], 10.00th=[ 225], 20.00th=[ 231], 00:39:50.238 | 30.00th=[ 237], 40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 251], 00:39:50.238 | 70.00th=[ 258], 80.00th=[ 269], 90.00th=[ 318], 95.00th=[ 388], 00:39:50.238 | 99.00th=[ 424], 99.50th=[ 453], 99.90th=[ 510], 
99.95th=[ 529], 00:39:50.238 | 99.99th=[ 529] 00:39:50.238 bw ( KiB/s): min= 1352, max= 6826, per=22.98%, avg=4089.00, stdev=3870.70, samples=2 00:39:50.238 iops : min= 338, max= 1706, avg=1022.00, stdev=967.32, samples=2 00:39:50.238 lat (usec) : 250=38.70%, 500=45.80%, 750=14.73% 00:39:50.238 lat (msec) : 50=0.78% 00:39:50.238 cpu : usr=1.26%, sys=1.16%, ctx=1548, majf=0, minf=1 00:39:50.238 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:50.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.238 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.238 issued rwts: total=524,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:50.238 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:50.238 00:39:50.238 Run status group 0 (all jobs): 00:39:50.238 READ: bw=11.0MiB/s (11.5MB/s), 83.8KiB/s-5341KiB/s (85.8kB/s-5469kB/s), io=11.4MiB (12.0MB), run=1001-1036msec 00:39:50.238 WRITE: bw=17.4MiB/s (18.2MB/s), 2044KiB/s-6138KiB/s (2093kB/s-6285kB/s), io=18.0MiB (18.9MB), run=1001-1036msec 00:39:50.238 00:39:50.238 Disk stats (read/write): 00:39:50.238 nvme0n1: ios=67/512, merge=0/0, ticks=751/111, in_queue=862, util=87.37% 00:39:50.238 nvme0n2: ios=1280/1536, merge=0/0, ticks=1458/314, in_queue=1772, util=98.27% 00:39:50.238 nvme0n3: ios=966/1024, merge=0/0, ticks=1540/251, in_queue=1791, util=98.33% 00:39:50.238 nvme0n4: ios=519/1024, merge=0/0, ticks=538/263, in_queue=801, util=89.70% 00:39:50.238 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:39:50.238 [global] 00:39:50.238 thread=1 00:39:50.238 invalidate=1 00:39:50.238 rw=write 00:39:50.238 time_based=1 00:39:50.238 runtime=1 00:39:50.238 ioengine=libaio 00:39:50.238 direct=1 00:39:50.238 bs=4096 00:39:50.238 iodepth=128 00:39:50.238 norandommap=0 00:39:50.238 numjobs=1 00:39:50.238 00:39:50.238 verify_dump=1 00:39:50.238 verify_backlog=512 00:39:50.238 verify_state_save=0 00:39:50.238 do_verify=1 00:39:50.238 verify=crc32c-intel 00:39:50.238 [job0] 00:39:50.238 filename=/dev/nvme0n1 00:39:50.238 [job1] 00:39:50.238 filename=/dev/nvme0n2 00:39:50.238 [job2] 00:39:50.238 filename=/dev/nvme0n3 00:39:50.238 [job3] 00:39:50.238 filename=/dev/nvme0n4 00:39:50.238 Could not set queue depth (nvme0n1) 00:39:50.238 Could not set queue depth (nvme0n2) 00:39:50.238 Could not set queue depth (nvme0n3) 00:39:50.238 Could not set queue depth (nvme0n4) 00:39:50.496 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:50.496 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:50.496 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:50.496 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:50.496 fio-3.35 00:39:50.496 Starting 4 threads 00:39:51.872 00:39:51.872 job0: (groupid=0, jobs=1): err= 0: pid=1581167: Sat Nov 2 14:55:43 2024 00:39:51.872 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.1MiB/1009msec) 00:39:51.872 slat (usec): min=2, max=10334, avg=121.87, stdev=704.55 00:39:51.872 clat (usec): min=5229, max=51641, avg=16337.19, stdev=6292.55 00:39:51.872 lat (usec): min=5234, max=51645, avg=16459.06, stdev=6308.51 00:39:51.872 clat percentiles (usec): 00:39:51.872 | 
1.00th=[ 6456], 5.00th=[ 8356], 10.00th=[ 9896], 20.00th=[11338], 00:39:51.872 | 30.00th=[13304], 40.00th=[14746], 50.00th=[15664], 60.00th=[16581], 00:39:51.872 | 70.00th=[18482], 80.00th=[19792], 90.00th=[22938], 95.00th=[26870], 00:39:51.872 | 99.00th=[39584], 99.50th=[51643], 99.90th=[51643], 99.95th=[51643], 00:39:51.872 | 99.99th=[51643] 00:39:51.872 write: IOPS=3552, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1009msec); 0 zone resets 00:39:51.872 slat (usec): min=3, max=10678, avg=163.70, stdev=831.13 00:39:51.872 clat (usec): min=4100, max=59799, avg=21460.83, stdev=13779.28 00:39:51.872 lat (usec): min=4108, max=59807, avg=21624.53, stdev=13875.23 00:39:51.872 clat percentiles (usec): 00:39:51.872 | 1.00th=[ 4146], 5.00th=[ 8455], 10.00th=[10028], 20.00th=[11600], 00:39:51.872 | 30.00th=[12518], 40.00th=[13829], 50.00th=[15270], 60.00th=[17433], 00:39:51.872 | 70.00th=[22152], 80.00th=[35390], 90.00th=[45351], 95.00th=[51643], 00:39:51.872 | 99.00th=[57410], 99.50th=[59507], 99.90th=[60031], 99.95th=[60031], 00:39:51.872 | 99.99th=[60031] 00:39:51.872 bw ( KiB/s): min=11144, max=16632, per=24.59%, avg=13888.00, stdev=3880.60, samples=2 00:39:51.872 iops : min= 2786, max= 4158, avg=3472.00, stdev=970.15, samples=2 00:39:51.872 lat (msec) : 10=10.78%, 20=63.35%, 50=22.01%, 100=3.87% 00:39:51.872 cpu : usr=3.47%, sys=7.04%, ctx=357, majf=0, minf=1 00:39:51.872 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:39:51.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:51.872 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:51.872 issued rwts: total=3087,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:51.872 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:51.872 job1: (groupid=0, jobs=1): err= 0: pid=1581168: Sat Nov 2 14:55:43 2024 00:39:51.872 read: IOPS=4059, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1009msec) 00:39:51.872 slat (usec): min=2, max=15392, avg=120.35, stdev=729.42 00:39:51.872 clat (usec): min=6928, max=36641, avg=16472.91, stdev=5838.87 00:39:51.872 lat (usec): min=7019, max=36672, avg=16593.26, stdev=5884.60 00:39:51.872 clat percentiles (usec): 00:39:51.872 | 1.00th=[ 8455], 5.00th=[ 9503], 10.00th=[10290], 20.00th=[11338], 00:39:51.872 | 30.00th=[12125], 40.00th=[13304], 50.00th=[15008], 60.00th=[17171], 00:39:51.872 | 70.00th=[19006], 80.00th=[21103], 90.00th=[24773], 95.00th=[28705], 00:39:51.872 | 99.00th=[31851], 99.50th=[32375], 99.90th=[34866], 99.95th=[34866], 00:39:51.872 | 99.99th=[36439] 00:39:51.872 write: IOPS=4115, BW=16.1MiB/s (16.9MB/s)(16.2MiB/1009msec); 0 zone resets 00:39:51.872 slat (usec): min=3, max=11684, avg=112.12, stdev=786.89 00:39:51.872 clat (usec): min=6935, max=51622, avg=14400.44, stdev=5243.98 00:39:51.872 lat (usec): min=6941, max=51628, avg=14512.56, stdev=5308.72 00:39:51.872 clat percentiles (usec): 00:39:51.872 | 1.00th=[ 8586], 5.00th=[ 9372], 10.00th=[10028], 20.00th=[10421], 00:39:51.872 | 30.00th=[11207], 40.00th=[12649], 50.00th=[13566], 60.00th=[14222], 00:39:51.872 | 70.00th=[15270], 80.00th=[16319], 90.00th=[20317], 95.00th=[23462], 00:39:51.872 | 99.00th=[33817], 99.50th=[42730], 99.90th=[51643], 99.95th=[51643], 00:39:51.872 | 99.99th=[51643] 00:39:51.872 bw ( KiB/s): min=14232, max=18536, per=29.02%, avg=16384.00, stdev=3043.39, samples=2 00:39:51.872 iops : min= 3558, max= 4634, avg=4096.00, stdev=760.85, samples=2 00:39:51.872 lat (msec) : 10=8.32%, 20=74.63%, 50=16.92%, 100=0.13% 00:39:51.872 cpu : usr=3.97%, sys=9.33%, ctx=235, majf=0, 
minf=1 00:39:51.872 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:39:51.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:51.872 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:51.872 issued rwts: total=4096,4153,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:51.872 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:51.872 job2: (groupid=0, jobs=1): err= 0: pid=1581169: Sat Nov 2 14:55:43 2024 00:39:51.872 read: IOPS=2534, BW=9.90MiB/s (10.4MB/s)(10.0MiB/1010msec) 00:39:51.872 slat (usec): min=2, max=15954, avg=159.04, stdev=1138.47 00:39:51.872 clat (usec): min=4756, max=82102, avg=21032.11, stdev=10644.53 00:39:51.872 lat (usec): min=4761, max=82114, avg=21191.15, stdev=10669.25 00:39:51.872 clat percentiles (usec): 00:39:51.872 | 1.00th=[ 5342], 5.00th=[ 8356], 10.00th=[ 9503], 20.00th=[11338], 00:39:51.872 | 30.00th=[13042], 40.00th=[15401], 50.00th=[19530], 60.00th=[23200], 00:39:51.872 | 70.00th=[26608], 80.00th=[29230], 90.00th=[35390], 95.00th=[39584], 00:39:51.872 | 99.00th=[47449], 99.50th=[49021], 99.90th=[82314], 99.95th=[82314], 00:39:51.872 | 99.99th=[82314] 00:39:51.872 write: IOPS=2907, BW=11.4MiB/s (11.9MB/s)(11.5MiB/1010msec); 0 zone resets 00:39:51.872 slat (usec): min=3, max=16855, avg=198.48, stdev=1122.06 00:39:51.872 clat (usec): min=770, max=97311, avg=25032.06, stdev=20222.79 00:39:51.872 lat (usec): min=5451, max=97344, avg=25230.53, stdev=20367.99 00:39:51.872 clat percentiles (usec): 00:39:51.872 | 1.00th=[ 8586], 5.00th=[ 9110], 10.00th=[12125], 20.00th=[12649], 00:39:51.872 | 30.00th=[13042], 40.00th=[13435], 50.00th=[16057], 60.00th=[21890], 00:39:51.872 | 70.00th=[26084], 80.00th=[29230], 90.00th=[65274], 95.00th=[76022], 00:39:51.872 | 99.00th=[91751], 99.50th=[94897], 99.90th=[96994], 99.95th=[96994], 00:39:51.872 | 99.99th=[96994] 00:39:51.872 bw ( KiB/s): min=10184, max=12288, per=19.90%, avg=11236.00, stdev=1487.75, samples=2 00:39:51.872 iops : min= 2546, max= 3072, avg=2809.00, stdev=371.94, samples=2 00:39:51.872 lat (usec) : 1000=0.02% 00:39:51.872 lat (msec) : 10=10.24%, 20=43.37%, 50=39.66%, 100=6.71% 00:39:51.873 cpu : usr=2.08%, sys=3.67%, ctx=225, majf=0, minf=1 00:39:51.873 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:39:51.873 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:51.873 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:51.873 issued rwts: total=2560,2937,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:51.873 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:51.873 job3: (groupid=0, jobs=1): err= 0: pid=1581170: Sat Nov 2 14:55:43 2024 00:39:51.873 read: IOPS=3275, BW=12.8MiB/s (13.4MB/s)(12.9MiB/1010msec) 00:39:51.873 slat (usec): min=2, max=25624, avg=131.25, stdev=907.23 00:39:51.873 clat (usec): min=357, max=41064, avg=18387.83, stdev=6069.15 00:39:51.873 lat (usec): min=365, max=41089, avg=18519.08, stdev=6105.99 00:39:51.873 clat percentiles (usec): 00:39:51.873 | 1.00th=[ 824], 5.00th=[11338], 10.00th=[12518], 20.00th=[14615], 00:39:51.873 | 30.00th=[15139], 40.00th=[15926], 50.00th=[16909], 60.00th=[19006], 00:39:51.873 | 70.00th=[20579], 80.00th=[22938], 90.00th=[25297], 95.00th=[28181], 00:39:51.873 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:51.873 | 99.99th=[41157] 00:39:51.873 write: IOPS=3548, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1010msec); 0 zone resets 00:39:51.873 slat (usec): min=3, 
max=13182, avg=145.46, stdev=956.56 00:39:51.873 clat (usec): min=1291, max=37731, avg=18560.07, stdev=5964.32 00:39:51.873 lat (usec): min=1299, max=37779, avg=18705.53, stdev=6038.55 00:39:51.873 clat percentiles (usec): 00:39:51.873 | 1.00th=[ 7308], 5.00th=[10552], 10.00th=[11338], 20.00th=[12911], 00:39:51.873 | 30.00th=[15533], 40.00th=[16581], 50.00th=[17171], 60.00th=[19792], 00:39:51.873 | 70.00th=[20579], 80.00th=[24773], 90.00th=[27395], 95.00th=[29492], 00:39:51.873 | 99.00th=[32637], 99.50th=[33817], 99.90th=[35914], 99.95th=[36439], 00:39:51.873 | 99.99th=[37487] 00:39:51.873 bw ( KiB/s): min=13944, max=14728, per=25.39%, avg=14336.00, stdev=554.37, samples=2 00:39:51.873 iops : min= 3486, max= 3682, avg=3584.00, stdev=138.59, samples=2 00:39:51.873 lat (usec) : 500=0.07%, 750=0.35%, 1000=0.17% 00:39:51.873 lat (msec) : 2=0.07%, 10=2.52%, 20=61.77%, 50=35.04% 00:39:51.873 cpu : usr=4.16%, sys=7.43%, ctx=274, majf=0, minf=1 00:39:51.873 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:39:51.873 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:51.873 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:51.873 issued rwts: total=3308,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:51.873 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:51.873 00:39:51.873 Run status group 0 (all jobs): 00:39:51.873 READ: bw=50.5MiB/s (52.9MB/s), 9.90MiB/s-15.9MiB/s (10.4MB/s-16.6MB/s), io=51.0MiB (53.5MB), run=1009-1010msec 00:39:51.873 WRITE: bw=55.1MiB/s (57.8MB/s), 11.4MiB/s-16.1MiB/s (11.9MB/s-16.9MB/s), io=55.7MiB (58.4MB), run=1009-1010msec 00:39:51.873 00:39:51.873 Disk stats (read/write): 00:39:51.873 nvme0n1: ios=3122/3159, merge=0/0, ticks=24994/28174, in_queue=53168, util=86.37% 00:39:51.873 nvme0n2: ios=3096/3501, merge=0/0, ticks=28223/23228, in_queue=51451, util=89.75% 00:39:51.873 nvme0n3: ios=2069/2403, merge=0/0, ticks=22129/27778, in_queue=49907, util=92.71% 00:39:51.873 nvme0n4: ios=2805/3072, merge=0/0, ticks=29216/30931, in_queue=60147, util=94.23% 00:39:51.873 14:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:39:51.873 [global] 00:39:51.873 thread=1 00:39:51.873 invalidate=1 00:39:51.873 rw=randwrite 00:39:51.873 time_based=1 00:39:51.873 runtime=1 00:39:51.873 ioengine=libaio 00:39:51.873 direct=1 00:39:51.873 bs=4096 00:39:51.873 iodepth=128 00:39:51.873 norandommap=0 00:39:51.873 numjobs=1 00:39:51.873 00:39:51.873 verify_dump=1 00:39:51.873 verify_backlog=512 00:39:51.873 verify_state_save=0 00:39:51.873 do_verify=1 00:39:51.873 verify=crc32c-intel 00:39:51.873 [job0] 00:39:51.873 filename=/dev/nvme0n1 00:39:51.873 [job1] 00:39:51.873 filename=/dev/nvme0n2 00:39:51.873 [job2] 00:39:51.873 filename=/dev/nvme0n3 00:39:51.873 [job3] 00:39:51.873 filename=/dev/nvme0n4 00:39:51.873 Could not set queue depth (nvme0n1) 00:39:51.873 Could not set queue depth (nvme0n2) 00:39:51.873 Could not set queue depth (nvme0n3) 00:39:51.873 Could not set queue depth (nvme0n4) 00:39:51.873 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:51.873 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:51.873 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 
00:39:51.873 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:51.873 fio-3.35 00:39:51.873 Starting 4 threads 00:39:53.248 00:39:53.248 job0: (groupid=0, jobs=1): err= 0: pid=1581402: Sat Nov 2 14:55:44 2024 00:39:53.248 read: IOPS=4023, BW=15.7MiB/s (16.5MB/s)(15.8MiB/1005msec) 00:39:53.248 slat (usec): min=2, max=15109, avg=118.29, stdev=810.17 00:39:53.248 clat (usec): min=3266, max=38693, avg=14827.10, stdev=5042.55 00:39:53.248 lat (usec): min=5005, max=38698, avg=14945.39, stdev=5089.39 00:39:53.248 clat percentiles (usec): 00:39:53.248 | 1.00th=[ 7439], 5.00th=[ 9110], 10.00th=[ 9765], 20.00th=[11076], 00:39:53.248 | 30.00th=[11600], 40.00th=[12649], 50.00th=[13829], 60.00th=[15795], 00:39:53.248 | 70.00th=[16581], 80.00th=[17171], 90.00th=[19792], 95.00th=[22938], 00:39:53.248 | 99.00th=[35390], 99.50th=[37487], 99.90th=[38536], 99.95th=[38536], 00:39:53.248 | 99.99th=[38536] 00:39:53.248 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:39:53.248 slat (usec): min=4, max=13156, avg=115.82, stdev=659.58 00:39:53.248 clat (usec): min=1157, max=56125, avg=16379.39, stdev=9364.37 00:39:53.248 lat (usec): min=1167, max=56132, avg=16495.21, stdev=9423.87 00:39:53.248 clat percentiles (usec): 00:39:53.248 | 1.00th=[ 4948], 5.00th=[ 6980], 10.00th=[ 8094], 20.00th=[ 9896], 00:39:53.248 | 30.00th=[11076], 40.00th=[11731], 50.00th=[12125], 60.00th=[14222], 00:39:53.248 | 70.00th=[18220], 80.00th=[23987], 90.00th=[28705], 95.00th=[34866], 00:39:53.248 | 99.00th=[51643], 99.50th=[55313], 99.90th=[56361], 99.95th=[56361], 00:39:53.248 | 99.99th=[56361] 00:39:53.248 bw ( KiB/s): min=15088, max=17680, per=24.27%, avg=16384.00, stdev=1832.82, samples=2 00:39:53.248 iops : min= 3772, max= 4420, avg=4096.00, stdev=458.21, samples=2 00:39:53.248 lat (msec) : 2=0.04%, 4=0.12%, 10=16.36%, 20=65.92%, 50=16.98% 00:39:53.248 lat (msec) : 100=0.58% 00:39:53.248 cpu : usr=4.78%, sys=7.37%, ctx=411, majf=0, minf=1 00:39:53.248 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:39:53.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:53.248 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:53.248 issued rwts: total=4044,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:53.248 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:53.248 job1: (groupid=0, jobs=1): err= 0: pid=1581403: Sat Nov 2 14:55:44 2024 00:39:53.248 read: IOPS=5081, BW=19.9MiB/s (20.8MB/s)(19.9MiB/1001msec) 00:39:53.248 slat (usec): min=3, max=5307, avg=95.91, stdev=432.95 00:39:53.248 clat (usec): min=821, max=26154, avg=12673.58, stdev=2219.68 00:39:53.248 lat (usec): min=837, max=26793, avg=12769.49, stdev=2218.02 00:39:53.248 clat percentiles (usec): 00:39:53.248 | 1.00th=[ 6783], 5.00th=[10421], 10.00th=[10945], 20.00th=[11600], 00:39:53.248 | 30.00th=[12125], 40.00th=[12256], 50.00th=[12387], 60.00th=[12649], 00:39:53.248 | 70.00th=[12911], 80.00th=[13304], 90.00th=[14353], 95.00th=[15008], 00:39:53.248 | 99.00th=[25035], 99.50th=[25297], 99.90th=[26084], 99.95th=[26084], 00:39:53.248 | 99.99th=[26084] 00:39:53.248 write: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec); 0 zone resets 00:39:53.248 slat (usec): min=4, max=5240, avg=90.14, stdev=401.88 00:39:53.248 clat (usec): min=8202, max=20978, avg=12024.04, stdev=1144.58 00:39:53.248 lat (usec): min=8816, max=21210, avg=12114.18, stdev=1110.34 00:39:53.248 clat percentiles (usec): 
00:39:53.248 | 1.00th=[ 9110], 5.00th=[10028], 10.00th=[10683], 20.00th=[11469], 00:39:53.248 | 30.00th=[11600], 40.00th=[11863], 50.00th=[11994], 60.00th=[12256], 00:39:53.248 | 70.00th=[12387], 80.00th=[12649], 90.00th=[13042], 95.00th=[13829], 00:39:53.248 | 99.00th=[15795], 99.50th=[16319], 99.90th=[19006], 99.95th=[19006], 00:39:53.248 | 99.99th=[21103] 00:39:53.248 bw ( KiB/s): min=20480, max=20480, per=30.34%, avg=20480.00, stdev= 0.00, samples=1 00:39:53.248 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:39:53.248 lat (usec) : 1000=0.02% 00:39:53.248 lat (msec) : 10=3.96%, 20=94.84%, 50=1.19% 00:39:53.248 cpu : usr=6.20%, sys=9.10%, ctx=592, majf=0, minf=1 00:39:53.248 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:39:53.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:53.248 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:53.249 issued rwts: total=5087,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:53.249 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:53.249 job2: (groupid=0, jobs=1): err= 0: pid=1581414: Sat Nov 2 14:55:44 2024 00:39:53.249 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:39:53.249 slat (usec): min=2, max=7668, avg=126.44, stdev=663.93 00:39:53.249 clat (usec): min=6661, max=60737, avg=15811.16, stdev=5599.91 00:39:53.249 lat (usec): min=6665, max=62757, avg=15937.60, stdev=5634.43 00:39:53.249 clat percentiles (usec): 00:39:53.249 | 1.00th=[ 7111], 5.00th=[ 9503], 10.00th=[10159], 20.00th=[11469], 00:39:53.249 | 30.00th=[13042], 40.00th=[13698], 50.00th=[14353], 60.00th=[15533], 00:39:53.249 | 70.00th=[17695], 80.00th=[19268], 90.00th=[22414], 95.00th=[26608], 00:39:53.249 | 99.00th=[30540], 99.50th=[49021], 99.90th=[53216], 99.95th=[60556], 00:39:53.249 | 99.99th=[60556] 00:39:53.249 write: IOPS=4143, BW=16.2MiB/s (17.0MB/s)(16.2MiB/1004msec); 0 zone resets 00:39:53.249 slat (usec): min=3, max=8698, avg=109.70, stdev=559.31 00:39:53.249 clat (usec): min=536, max=35159, avg=14920.31, stdev=4829.60 00:39:53.249 lat (usec): min=4262, max=35177, avg=15030.00, stdev=4855.05 00:39:53.249 clat percentiles (usec): 00:39:53.249 | 1.00th=[ 5342], 5.00th=[ 8848], 10.00th=[10552], 20.00th=[12387], 00:39:53.249 | 30.00th=[12780], 40.00th=[13042], 50.00th=[13698], 60.00th=[14091], 00:39:53.249 | 70.00th=[15270], 80.00th=[17695], 90.00th=[21103], 95.00th=[26346], 00:39:53.249 | 99.00th=[31327], 99.50th=[32375], 99.90th=[33817], 99.95th=[35390], 00:39:53.249 | 99.99th=[35390] 00:39:53.249 bw ( KiB/s): min=12288, max=20439, per=24.24%, avg=16363.50, stdev=5763.63, samples=2 00:39:53.249 iops : min= 3072, max= 5109, avg=4090.50, stdev=1440.38, samples=2 00:39:53.249 lat (usec) : 750=0.01% 00:39:53.249 lat (msec) : 10=7.64%, 20=79.53%, 50=12.62%, 100=0.19% 00:39:53.249 cpu : usr=3.49%, sys=5.38%, ctx=441, majf=0, minf=1 00:39:53.249 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:39:53.249 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:53.249 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:53.249 issued rwts: total=4096,4160,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:53.249 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:53.249 job3: (groupid=0, jobs=1): err= 0: pid=1581420: Sat Nov 2 14:55:44 2024 00:39:53.249 read: IOPS=3137, BW=12.3MiB/s (12.8MB/s)(12.3MiB/1004msec) 00:39:53.249 slat (usec): min=2, max=15692, avg=145.42, 
stdev=898.63 00:39:53.249 clat (usec): min=3025, max=86778, avg=16830.37, stdev=8264.36 00:39:53.249 lat (usec): min=3033, max=86783, avg=16975.79, stdev=8375.17 00:39:53.249 clat percentiles (usec): 00:39:53.249 | 1.00th=[ 7111], 5.00th=[ 8717], 10.00th=[11469], 20.00th=[12649], 00:39:53.249 | 30.00th=[13829], 40.00th=[14615], 50.00th=[15139], 60.00th=[15664], 00:39:53.249 | 70.00th=[16909], 80.00th=[18482], 90.00th=[23200], 95.00th=[30802], 00:39:53.249 | 99.00th=[52167], 99.50th=[69731], 99.90th=[86508], 99.95th=[86508], 00:39:53.249 | 99.99th=[86508] 00:39:53.249 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:39:53.249 slat (usec): min=3, max=20369, avg=143.43, stdev=946.83 00:39:53.249 clat (msec): min=6, max=109, avg=20.64, stdev=15.39 00:39:53.249 lat (msec): min=6, max=109, avg=20.78, stdev=15.48 00:39:53.249 clat percentiles (msec): 00:39:53.249 | 1.00th=[ 8], 5.00th=[ 8], 10.00th=[ 12], 20.00th=[ 14], 00:39:53.249 | 30.00th=[ 15], 40.00th=[ 16], 50.00th=[ 16], 60.00th=[ 16], 00:39:53.249 | 70.00th=[ 20], 80.00th=[ 24], 90.00th=[ 36], 95.00th=[ 41], 00:39:53.249 | 99.00th=[ 97], 99.50th=[ 106], 99.90th=[ 110], 99.95th=[ 110], 00:39:53.249 | 99.99th=[ 110] 00:39:53.249 bw ( KiB/s): min=11208, max=17072, per=20.95%, avg=14140.00, stdev=4146.47, samples=2 00:39:53.249 iops : min= 2802, max= 4268, avg=3535.00, stdev=1036.62, samples=2 00:39:53.249 lat (msec) : 4=0.24%, 10=6.24%, 20=72.68%, 50=18.09%, 100=2.44% 00:39:53.249 lat (msec) : 250=0.33% 00:39:53.249 cpu : usr=3.39%, sys=5.48%, ctx=340, majf=0, minf=1 00:39:53.249 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:39:53.249 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:53.249 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:53.249 issued rwts: total=3150,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:53.249 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:53.249 00:39:53.249 Run status group 0 (all jobs): 00:39:53.249 READ: bw=63.7MiB/s (66.7MB/s), 12.3MiB/s-19.9MiB/s (12.8MB/s-20.8MB/s), io=64.0MiB (67.1MB), run=1001-1005msec 00:39:53.249 WRITE: bw=65.9MiB/s (69.1MB/s), 13.9MiB/s-20.0MiB/s (14.6MB/s-20.9MB/s), io=66.2MiB (69.5MB), run=1001-1005msec 00:39:53.249 00:39:53.249 Disk stats (read/write): 00:39:53.249 nvme0n1: ios=3113/3383, merge=0/0, ticks=46924/57009, in_queue=103933, util=99.50% 00:39:53.249 nvme0n2: ios=4144/4559, merge=0/0, ticks=14685/13779, in_queue=28464, util=90.45% 00:39:53.249 nvme0n3: ios=3641/3675, merge=0/0, ticks=19785/17872, in_queue=37657, util=90.07% 00:39:53.249 nvme0n4: ios=2617/2871, merge=0/0, ticks=21930/30437, in_queue=52367, util=94.84% 00:39:53.249 14:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:39:53.249 14:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1581579 00:39:53.249 14:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:39:53.249 14:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:39:53.249 [global] 00:39:53.249 thread=1 00:39:53.249 invalidate=1 00:39:53.249 rw=read 00:39:53.249 time_based=1 00:39:53.249 runtime=10 00:39:53.249 ioengine=libaio 00:39:53.249 direct=1 00:39:53.249 bs=4096 00:39:53.249 iodepth=1 00:39:53.249 norandommap=1 00:39:53.249 numjobs=1 
00:39:53.249 00:39:53.249 [job0] 00:39:53.249 filename=/dev/nvme0n1 00:39:53.249 [job1] 00:39:53.249 filename=/dev/nvme0n2 00:39:53.249 [job2] 00:39:53.249 filename=/dev/nvme0n3 00:39:53.249 [job3] 00:39:53.249 filename=/dev/nvme0n4 00:39:53.249 Could not set queue depth (nvme0n1) 00:39:53.249 Could not set queue depth (nvme0n2) 00:39:53.249 Could not set queue depth (nvme0n3) 00:39:53.249 Could not set queue depth (nvme0n4) 00:39:53.249 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:53.249 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:53.249 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:53.249 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:53.249 fio-3.35 00:39:53.249 Starting 4 threads 00:39:56.529 14:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:39:56.529 14:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:39:56.529 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=3174400, buflen=4096 00:39:56.529 fio: pid=1581748, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:56.787 14:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:56.787 14:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:39:56.787 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=30765056, buflen=4096 00:39:56.787 fio: pid=1581747, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:57.044 14:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:57.044 14:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:39:57.044 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=1683456, buflen=4096 00:39:57.044 fio: pid=1581745, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:57.303 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=31449088, buflen=4096 00:39:57.303 fio: pid=1581746, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:57.303 14:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:57.303 14:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:39:57.303 00:39:57.303 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1581745: Sat Nov 2 14:55:49 2024 00:39:57.303 read: IOPS=116, BW=465KiB/s (476kB/s)(1644KiB/3537msec) 00:39:57.303 slat 
(usec): min=5, max=5800, avg=24.35, stdev=285.36 00:39:57.303 clat (usec): min=301, max=62457, avg=8521.23, stdev=16391.17 00:39:57.303 lat (usec): min=308, max=62471, avg=8545.54, stdev=16391.17 00:39:57.303 clat percentiles (usec): 00:39:57.303 | 1.00th=[ 306], 5.00th=[ 314], 10.00th=[ 318], 20.00th=[ 326], 00:39:57.303 | 30.00th=[ 334], 40.00th=[ 347], 50.00th=[ 371], 60.00th=[ 383], 00:39:57.303 | 70.00th=[ 429], 80.00th=[ 832], 90.00th=[41157], 95.00th=[41157], 00:39:57.303 | 99.00th=[42206], 99.50th=[42206], 99.90th=[62653], 99.95th=[62653], 00:39:57.303 | 99.99th=[62653] 00:39:57.303 bw ( KiB/s): min= 96, max= 2488, per=3.06%, avg=528.00, stdev=960.84, samples=6 00:39:57.303 iops : min= 24, max= 622, avg=132.00, stdev=240.21, samples=6 00:39:57.303 lat (usec) : 500=74.76%, 750=4.85%, 1000=0.24% 00:39:57.303 lat (msec) : 50=19.66%, 100=0.24% 00:39:57.303 cpu : usr=0.00%, sys=0.25%, ctx=414, majf=0, minf=2 00:39:57.303 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:57.303 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:57.303 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:57.303 issued rwts: total=412,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:57.303 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:57.303 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1581746: Sat Nov 2 14:55:49 2024 00:39:57.303 read: IOPS=2021, BW=8084KiB/s (8278kB/s)(30.0MiB/3799msec) 00:39:57.303 slat (usec): min=5, max=10148, avg=12.54, stdev=151.78 00:39:57.303 clat (usec): min=278, max=44980, avg=476.34, stdev=2276.26 00:39:57.303 lat (usec): min=284, max=44999, avg=488.88, stdev=2290.50 00:39:57.303 clat percentiles (usec): 00:39:57.303 | 1.00th=[ 289], 5.00th=[ 297], 10.00th=[ 306], 20.00th=[ 318], 00:39:57.303 | 30.00th=[ 322], 40.00th=[ 330], 50.00th=[ 334], 60.00th=[ 343], 00:39:57.303 | 70.00th=[ 355], 80.00th=[ 367], 90.00th=[ 392], 95.00th=[ 457], 00:39:57.303 | 99.00th=[ 545], 99.50th=[ 603], 99.90th=[41157], 99.95th=[41681], 00:39:57.303 | 99.99th=[44827] 00:39:57.303 bw ( KiB/s): min= 304, max=11312, per=49.96%, avg=8613.14, stdev=4207.38, samples=7 00:39:57.303 iops : min= 76, max= 2828, avg=2153.29, stdev=1051.84, samples=7 00:39:57.303 lat (usec) : 500=97.60%, 750=1.98%, 1000=0.01% 00:39:57.303 lat (msec) : 2=0.04%, 4=0.03%, 50=0.33% 00:39:57.303 cpu : usr=1.50%, sys=2.92%, ctx=7683, majf=0, minf=1 00:39:57.303 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:57.303 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:57.303 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:57.303 issued rwts: total=7679,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:57.303 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:57.303 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1581747: Sat Nov 2 14:55:49 2024 00:39:57.303 read: IOPS=2299, BW=9196KiB/s (9417kB/s)(29.3MiB/3267msec) 00:39:57.303 slat (nsec): min=4885, max=77001, avg=13244.53, stdev=8861.57 00:39:57.303 clat (usec): min=288, max=41376, avg=415.34, stdev=1152.15 00:39:57.303 lat (usec): min=296, max=41390, avg=428.59, stdev=1152.49 00:39:57.303 clat percentiles (usec): 00:39:57.303 | 1.00th=[ 306], 5.00th=[ 318], 10.00th=[ 322], 20.00th=[ 330], 00:39:57.303 | 30.00th=[ 338], 40.00th=[ 347], 50.00th=[ 359], 60.00th=[ 375], 00:39:57.303 | 
70.00th=[ 396], 80.00th=[ 420], 90.00th=[ 486], 95.00th=[ 537], 00:39:57.303 | 99.00th=[ 611], 99.50th=[ 652], 99.90th=[ 2040], 99.95th=[41157], 00:39:57.303 | 99.99th=[41157] 00:39:57.303 bw ( KiB/s): min= 8928, max=11600, per=58.03%, avg=10006.67, stdev=962.00, samples=6 00:39:57.303 iops : min= 2232, max= 2900, avg=2501.67, stdev=240.50, samples=6 00:39:57.303 lat (usec) : 500=91.59%, 750=8.16%, 1000=0.11% 00:39:57.303 lat (msec) : 2=0.03%, 4=0.01%, 10=0.01%, 50=0.08% 00:39:57.303 cpu : usr=1.47%, sys=4.19%, ctx=7513, majf=0, minf=2 00:39:57.303 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:57.303 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:57.303 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:57.303 issued rwts: total=7512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:57.303 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:57.303 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1581748: Sat Nov 2 14:55:49 2024 00:39:57.303 read: IOPS=264, BW=1058KiB/s (1084kB/s)(3100KiB/2929msec) 00:39:57.303 slat (nsec): min=4558, max=40416, avg=11957.57, stdev=5034.91 00:39:57.303 clat (usec): min=298, max=41974, avg=3733.85, stdev=11159.61 00:39:57.303 lat (usec): min=303, max=41993, avg=3745.80, stdev=11161.20 00:39:57.303 clat percentiles (usec): 00:39:57.303 | 1.00th=[ 302], 5.00th=[ 318], 10.00th=[ 330], 20.00th=[ 343], 00:39:57.303 | 30.00th=[ 351], 40.00th=[ 363], 50.00th=[ 375], 60.00th=[ 392], 00:39:57.303 | 70.00th=[ 424], 80.00th=[ 482], 90.00th=[ 553], 95.00th=[41157], 00:39:57.303 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:39:57.303 | 99.99th=[42206] 00:39:57.304 bw ( KiB/s): min= 96, max= 2432, per=3.50%, avg=603.20, stdev=1025.62, samples=5 00:39:57.304 iops : min= 24, max= 608, avg=150.80, stdev=256.40, samples=5 00:39:57.304 lat (usec) : 500=83.76%, 750=7.73%, 1000=0.13% 00:39:57.304 lat (msec) : 50=8.25% 00:39:57.304 cpu : usr=0.10%, sys=0.44%, ctx=776, majf=0, minf=2 00:39:57.304 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:57.304 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:57.304 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:57.304 issued rwts: total=776,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:57.304 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:57.304 00:39:57.304 Run status group 0 (all jobs): 00:39:57.304 READ: bw=16.8MiB/s (17.7MB/s), 465KiB/s-9196KiB/s (476kB/s-9417kB/s), io=64.0MiB (67.1MB), run=2929-3799msec 00:39:57.304 00:39:57.304 Disk stats (read/write): 00:39:57.304 nvme0n1: ios=407/0, merge=0/0, ticks=3337/0, in_queue=3337, util=95.82% 00:39:57.304 nvme0n2: ios=7673/0, merge=0/0, ticks=3366/0, in_queue=3366, util=96.17% 00:39:57.304 nvme0n3: ios=7549/0, merge=0/0, ticks=3681/0, in_queue=3681, util=99.16% 00:39:57.304 nvme0n4: ios=657/0, merge=0/0, ticks=2839/0, in_queue=2839, util=96.71% 00:39:57.562 14:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:57.562 14:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:39:57.820 14:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- 
# for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:57.820 14:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:39:58.078 14:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:58.078 14:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:39:58.336 14:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:58.336 14:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:39:58.594 14:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:39:58.594 14:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 1581579 00:39:58.594 14:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:39:58.594 14:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:39:58.852 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:39:58.852 14:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:39:58.852 14:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:39:58.852 14:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:39:58.852 14:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:58.852 14:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:39:58.852 14:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:58.852 14:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:39:58.852 14:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:39:58.852 14:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:39:58.852 nvmf hotplug test: fio failed as expected 00:39:58.852 14:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:59.110 14:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:39:59.110 14:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:39:59.110 14:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:39:59.110 
14:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:39:59.110 14:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:39:59.110 14:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:39:59.110 14:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:39:59.110 14:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:59.110 14:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:39:59.110 14:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:59.110 14:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:59.110 rmmod nvme_tcp 00:39:59.110 rmmod nvme_fabrics 00:39:59.110 rmmod nvme_keyring 00:39:59.110 14:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:59.110 14:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:39:59.110 14:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:39:59.110 14:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@513 -- # '[' -n 1579642 ']' 00:39:59.110 14:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@514 -- # killprocess 1579642 00:39:59.110 14:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 1579642 ']' 00:39:59.110 14:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 1579642 00:39:59.110 14:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:39:59.110 14:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:59.110 14:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1579642 00:39:59.110 14:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:59.110 14:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:39:59.110 14:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1579642' 00:39:59.110 killing process with pid 1579642 00:39:59.110 14:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 1579642 00:39:59.110 14:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 1579642 00:39:59.370 14:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:39:59.370 14:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:39:59.370 14:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:39:59.370 14:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@297 -- # iptr 00:39:59.370 14:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-save 00:39:59.370 14:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:39:59.370 14:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-restore 00:39:59.370 14:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:59.370 14:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:59.370 14:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:59.370 14:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:59.370 14:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:01.904 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:01.904 00:40:01.904 real 0m24.061s 00:40:01.904 user 1m6.118s 00:40:01.904 sys 0m10.988s 00:40:01.904 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:01.904 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:01.904 ************************************ 00:40:01.904 END TEST nvmf_fio_target 00:40:01.904 ************************************ 00:40:01.904 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:40:01.904 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:40:01.904 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:01.904 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:01.904 ************************************ 00:40:01.904 START TEST nvmf_bdevio 00:40:01.904 ************************************ 00:40:01.904 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:40:01.904 * Looking for test storage... 
00:40:01.904 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:01.904 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:40:01.904 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:40:01.904 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:40:01.904 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:40:01.904 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:01.904 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:01.904 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:01.904 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:40:01.904 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:40:01.904 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:40:01.904 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:40:01.904 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:40:01.904 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:40:01.904 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:40:01.904 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:01.904 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:40:01.904 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:40:01.904 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:01.904 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:01.904 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:40:01.904 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:40:01.904 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:01.904 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:40:01.904 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:40:01.904 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:40:01.904 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:40:01.904 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:01.904 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:40:01.904 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:40:01.904 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:01.904 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:01.904 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:40:01.904 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:01.904 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:40:01.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:01.904 --rc genhtml_branch_coverage=1 00:40:01.904 --rc genhtml_function_coverage=1 00:40:01.904 --rc genhtml_legend=1 00:40:01.904 --rc geninfo_all_blocks=1 00:40:01.904 --rc geninfo_unexecuted_blocks=1 00:40:01.904 00:40:01.904 ' 00:40:01.904 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:40:01.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:01.904 --rc genhtml_branch_coverage=1 00:40:01.904 --rc genhtml_function_coverage=1 00:40:01.904 --rc genhtml_legend=1 00:40:01.904 --rc geninfo_all_blocks=1 00:40:01.904 --rc geninfo_unexecuted_blocks=1 00:40:01.904 00:40:01.904 ' 00:40:01.904 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:40:01.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:01.905 --rc genhtml_branch_coverage=1 00:40:01.905 --rc genhtml_function_coverage=1 00:40:01.905 --rc genhtml_legend=1 00:40:01.905 --rc geninfo_all_blocks=1 00:40:01.905 --rc geninfo_unexecuted_blocks=1 00:40:01.905 00:40:01.905 ' 00:40:01.905 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:40:01.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:01.905 --rc genhtml_branch_coverage=1 00:40:01.905 --rc genhtml_function_coverage=1 00:40:01.905 --rc genhtml_legend=1 00:40:01.905 --rc geninfo_all_blocks=1 00:40:01.905 --rc geninfo_unexecuted_blocks=1 00:40:01.905 00:40:01.905 ' 00:40:01.905 14:55:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:01.905 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:40:01.905 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:01.905 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:01.905 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:01.905 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:01.905 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:01.905 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:01.905 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:01.905 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:01.905 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:01.905 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:01.905 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:01.905 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:01.905 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:01.905 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:01.905 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:01.905 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:01.905 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:01.905 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:40:01.905 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:01.905 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:01.905 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:01.905 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:01.905 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:01.905 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:01.905 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:40:01.905 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:01.905 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:40:01.905 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:01.905 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:01.905 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:01.905 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:01.905 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:01.905 14:55:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:01.905 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:01.905 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:01.905 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:01.905 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:01.905 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:01.905 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:01.905 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:40:01.905 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:40:01.905 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:01.905 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@472 -- # prepare_net_devs 00:40:01.905 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@434 -- # local -g is_hw=no 00:40:01.905 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@436 -- # remove_spdk_ns 00:40:01.905 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:01.905 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:01.905 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:01.905 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:40:01.905 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:40:01.905 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:40:01.905 14:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:03.807 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:03.807 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:40:03.807 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:03.807 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:03.807 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:03.807 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:03.808 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:03.808 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ up == up ]] 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:03.808 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:03.808 
14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ up == up ]] 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:03.808 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # is_hw=yes 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:03.808 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:03.808 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:03.808 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:40:03.808 00:40:03.809 --- 10.0.0.2 ping statistics --- 00:40:03.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:03.809 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:40:03.809 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:03.809 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:03.809 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.065 ms 00:40:03.809 00:40:03.809 --- 10.0.0.1 ping statistics --- 00:40:03.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:03.809 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:40:03.809 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:03.809 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # return 0 00:40:03.809 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:40:03.809 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:03.809 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:40:03.809 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:40:03.809 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:03.809 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:40:03.809 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:40:03.809 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:40:03.809 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:40:03.809 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:03.809 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:03.809 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@505 -- # 
nvmfpid=1584371 00:40:03.809 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:40:03.809 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@506 -- # waitforlisten 1584371 00:40:03.809 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 1584371 ']' 00:40:03.809 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:03.809 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:03.809 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:03.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:03.809 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:03.809 14:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:04.068 [2024-11-02 14:55:55.869504] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:04.068 [2024-11-02 14:55:55.870588] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:40:04.068 [2024-11-02 14:55:55.870654] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:04.068 [2024-11-02 14:55:55.936372] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:04.068 [2024-11-02 14:55:56.028157] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:04.068 [2024-11-02 14:55:56.028222] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:04.068 [2024-11-02 14:55:56.028266] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:04.068 [2024-11-02 14:55:56.028281] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:04.068 [2024-11-02 14:55:56.028306] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:04.068 [2024-11-02 14:55:56.028410] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:40:04.068 [2024-11-02 14:55:56.028473] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:40:04.068 [2024-11-02 14:55:56.028525] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:40:04.068 [2024-11-02 14:55:56.028528] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:40:04.327 [2024-11-02 14:55:56.135907] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:04.327 [2024-11-02 14:55:56.136145] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:40:04.327 [2024-11-02 14:55:56.136460] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:40:04.327 [2024-11-02 14:55:56.137068] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:04.327 [2024-11-02 14:55:56.137343] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:40:04.327 14:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:04.327 14:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:40:04.327 14:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:40:04.327 14:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:04.327 14:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:04.327 14:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:04.327 14:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:04.327 14:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:04.327 14:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:04.327 [2024-11-02 14:55:56.189320] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:04.327 14:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:04.328 14:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:04.328 14:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:04.328 14:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:04.328 Malloc0 00:40:04.328 14:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:04.328 14:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:04.328 14:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:04.328 14:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:04.328 14:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:04.328 14:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:04.328 14:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:04.328 14:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:04.328 14:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:40:04.328 14:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:04.328 14:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:04.328 14:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:04.328 [2024-11-02 14:55:56.249488] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:04.328 14:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:04.328 14:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:40:04.328 14:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:40:04.328 14:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@556 -- # config=() 00:40:04.328 14:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@556 -- # local subsystem config 00:40:04.328 14:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:40:04.328 14:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:40:04.328 { 00:40:04.328 "params": { 00:40:04.328 "name": "Nvme$subsystem", 00:40:04.328 "trtype": "$TEST_TRANSPORT", 00:40:04.328 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:04.328 "adrfam": "ipv4", 00:40:04.328 "trsvcid": "$NVMF_PORT", 00:40:04.328 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:04.328 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:04.328 "hdgst": ${hdgst:-false}, 00:40:04.328 "ddgst": ${ddgst:-false} 00:40:04.328 }, 00:40:04.328 "method": "bdev_nvme_attach_controller" 00:40:04.328 } 00:40:04.328 EOF 00:40:04.328 )") 00:40:04.328 14:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@578 -- # cat 00:40:04.328 14:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # jq . 00:40:04.328 14:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@581 -- # IFS=, 00:40:04.328 14:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:40:04.328 "params": { 00:40:04.328 "name": "Nvme1", 00:40:04.328 "trtype": "tcp", 00:40:04.328 "traddr": "10.0.0.2", 00:40:04.328 "adrfam": "ipv4", 00:40:04.328 "trsvcid": "4420", 00:40:04.328 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:04.328 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:04.328 "hdgst": false, 00:40:04.328 "ddgst": false 00:40:04.328 }, 00:40:04.328 "method": "bdev_nvme_attach_controller" 00:40:04.328 }' 00:40:04.328 [2024-11-02 14:55:56.299888] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
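The gen_nvmf_target_json output printed above is the bdev_nvme_attach_controller block that bdevio reads from /dev/fd/62. To repeat that step by hand, the same params can be wrapped in the standard SPDK --json config envelope; the subsystems/bdev wrapper and the temp-file path are assumptions here, while the params mirror the trace:

# write the config bdevio will load, then point bdevio at it
cat > /tmp/bdevio_nvme1.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /tmp/bdevio_nvme1.json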
00:40:04.328 [2024-11-02 14:55:56.299959] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1584400 ] 00:40:04.328 [2024-11-02 14:55:56.360875] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:40:04.587 [2024-11-02 14:55:56.452452] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:40:04.587 [2024-11-02 14:55:56.452501] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:40:04.587 [2024-11-02 14:55:56.452504] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:40:04.587 I/O targets: 00:40:04.587 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:40:04.587 00:40:04.587 00:40:04.587 CUnit - A unit testing framework for C - Version 2.1-3 00:40:04.587 http://cunit.sourceforge.net/ 00:40:04.587 00:40:04.587 00:40:04.587 Suite: bdevio tests on: Nvme1n1 00:40:04.845 Test: blockdev write read block ...passed 00:40:04.845 Test: blockdev write zeroes read block ...passed 00:40:04.845 Test: blockdev write zeroes read no split ...passed 00:40:04.845 Test: blockdev write zeroes read split ...passed 00:40:04.845 Test: blockdev write zeroes read split partial ...passed 00:40:04.845 Test: blockdev reset ...[2024-11-02 14:55:56.788194] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:04.845 [2024-11-02 14:55:56.788334] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1713e90 (9): Bad file descriptor 00:40:04.845 [2024-11-02 14:55:56.835099] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:40:04.845 passed 00:40:04.845 Test: blockdev write read 8 blocks ...passed 00:40:04.845 Test: blockdev write read size > 128k ...passed 00:40:04.845 Test: blockdev write read invalid size ...passed 00:40:05.104 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:40:05.104 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:40:05.104 Test: blockdev write read max offset ...passed 00:40:05.104 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:40:05.104 Test: blockdev writev readv 8 blocks ...passed 00:40:05.104 Test: blockdev writev readv 30 x 1block ...passed 00:40:05.104 Test: blockdev writev readv block ...passed 00:40:05.104 Test: blockdev writev readv size > 128k ...passed 00:40:05.104 Test: blockdev writev readv size > 128k in two iovs ...passed 00:40:05.104 Test: blockdev comparev and writev ...[2024-11-02 14:55:57.134378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:05.104 [2024-11-02 14:55:57.134415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:05.104 [2024-11-02 14:55:57.134440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:05.104 [2024-11-02 14:55:57.134457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:40:05.104 [2024-11-02 14:55:57.134904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:05.104 [2024-11-02 14:55:57.134929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:40:05.104 [2024-11-02 14:55:57.134951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:05.104 [2024-11-02 14:55:57.134968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:40:05.104 [2024-11-02 14:55:57.135376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:05.104 [2024-11-02 14:55:57.135401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:40:05.104 [2024-11-02 14:55:57.135423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:05.104 [2024-11-02 14:55:57.135438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:40:05.104 [2024-11-02 14:55:57.135871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:05.104 [2024-11-02 14:55:57.135895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:40:05.104 [2024-11-02 14:55:57.135915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:05.104 [2024-11-02 14:55:57.135931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:40:05.362 passed 00:40:05.362 Test: blockdev nvme passthru rw ...passed 00:40:05.362 Test: blockdev nvme passthru vendor specific ...[2024-11-02 14:55:57.217586] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:05.362 [2024-11-02 14:55:57.217612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:40:05.362 [2024-11-02 14:55:57.217799] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:05.362 [2024-11-02 14:55:57.217823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:40:05.362 [2024-11-02 14:55:57.218015] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:05.362 [2024-11-02 14:55:57.218038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:40:05.362 [2024-11-02 14:55:57.218222] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:05.362 [2024-11-02 14:55:57.218245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:40:05.362 passed 00:40:05.362 Test: blockdev nvme admin passthru ...passed 00:40:05.362 Test: blockdev copy ...passed 00:40:05.362 00:40:05.362 Run Summary: Type Total Ran Passed Failed Inactive 00:40:05.362 suites 1 1 n/a 0 0 00:40:05.362 tests 23 23 23 0 0 00:40:05.362 asserts 152 152 152 0 n/a 00:40:05.362 00:40:05.362 Elapsed time = 1.290 seconds 00:40:05.620 14:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:05.620 14:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:05.620 14:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:05.620 14:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:05.620 14:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:40:05.620 14:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:40:05.620 14:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # nvmfcleanup 00:40:05.620 14:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:40:05.620 14:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:05.620 14:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:40:05.620 14:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:05.620 14:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:05.620 rmmod nvme_tcp 00:40:05.620 rmmod nvme_fabrics 00:40:05.620 rmmod nvme_keyring 00:40:05.620 14:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
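The teardown that begins here and continues below (killprocess, iptr, remove_spdk_ns) boils down to: delete the subsystem, stop the target, unload the initiator modules, and strip the SPDK_NVMF iptables rules. A condensed, hand-runnable sketch; the rpc.py call and the wait loop are assumptions, while the iptables-save | grep -v SPDK_NVMF | iptables-restore pipeline mirrors the iptr trace that follows:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # remove the test subsystem before shutting the target down
    "$SPDK/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    # stop the target started earlier and wait for the pid to disappear
    kill "$nvmfpid" 2>/dev/null
    while kill -0 "$nvmfpid" 2>/dev/null; do sleep 0.2; done
    # unload initiator-side modules now that no controllers are attached
    modprobe -r nvme-tcp nvme-fabrics nvme-keyring 2>/dev/null || true
    # drop only the rules tagged SPDK_NVMF, leaving the rest of the firewall untouched
    iptables-save | grep -v SPDK_NVMF | iptables-restore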
00:40:05.620 14:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:40:05.620 14:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:40:05.620 14:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@513 -- # '[' -n 1584371 ']' 00:40:05.620 14:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@514 -- # killprocess 1584371 00:40:05.620 14:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 1584371 ']' 00:40:05.620 14:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 1584371 00:40:05.620 14:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:40:05.621 14:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:05.621 14:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1584371 00:40:05.621 14:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:40:05.621 14:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:40:05.621 14:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1584371' 00:40:05.621 killing process with pid 1584371 00:40:05.621 14:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 1584371 00:40:05.621 14:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 1584371 00:40:05.879 14:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:40:05.879 14:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:40:05.879 14:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:40:05.879 14:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:40:05.879 14:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-save 00:40:05.879 14:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:40:05.879 14:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-restore 00:40:05.879 14:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:05.879 14:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:05.879 14:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:05.879 14:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:05.879 14:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:08.429 14:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:08.429 00:40:08.429 real 0m6.365s 00:40:08.429 user 
0m8.440s 00:40:08.429 sys 0m2.508s 00:40:08.429 14:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:08.429 14:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:08.429 ************************************ 00:40:08.429 END TEST nvmf_bdevio 00:40:08.429 ************************************ 00:40:08.429 14:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:40:08.429 00:40:08.429 real 3m55.381s 00:40:08.429 user 8m47.171s 00:40:08.429 sys 1m28.542s 00:40:08.429 14:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:08.429 14:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:08.429 ************************************ 00:40:08.429 END TEST nvmf_target_core_interrupt_mode 00:40:08.429 ************************************ 00:40:08.429 14:55:59 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:40:08.429 14:55:59 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:40:08.429 14:55:59 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:08.429 14:55:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:08.429 ************************************ 00:40:08.429 START TEST nvmf_interrupt 00:40:08.429 ************************************ 00:40:08.429 14:55:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:40:08.429 * Looking for test storage... 
00:40:08.429 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:08.429 14:55:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:40:08.429 14:55:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # lcov --version 00:40:08.429 14:55:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:40:08.429 14:56:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:40:08.429 14:56:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:08.429 14:56:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:08.429 14:56:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:08.429 14:56:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:40:08.429 14:56:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:40:08.429 14:56:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:40:08.429 14:56:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:40:08.429 14:56:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:40:08.429 14:56:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:40:08.429 14:56:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:40:08.429 14:56:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:08.429 14:56:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:40:08.429 14:56:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:40:08.429 14:56:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:08.429 14:56:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:08.429 14:56:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:40:08.429 14:56:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:40:08.429 14:56:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:08.429 14:56:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:40:08.429 14:56:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:40:08.429 14:56:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:40:08.429 14:56:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:40:08.429 14:56:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:08.429 14:56:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:40:08.429 14:56:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:40:08.429 14:56:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:08.429 14:56:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:08.429 14:56:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:40:08.429 14:56:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:08.429 14:56:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:40:08.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:08.429 --rc genhtml_branch_coverage=1 00:40:08.429 --rc genhtml_function_coverage=1 00:40:08.429 --rc genhtml_legend=1 00:40:08.429 --rc geninfo_all_blocks=1 00:40:08.429 --rc geninfo_unexecuted_blocks=1 00:40:08.429 00:40:08.429 ' 00:40:08.429 14:56:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:40:08.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:08.429 --rc genhtml_branch_coverage=1 00:40:08.429 --rc genhtml_function_coverage=1 00:40:08.429 --rc genhtml_legend=1 00:40:08.429 --rc geninfo_all_blocks=1 00:40:08.429 --rc geninfo_unexecuted_blocks=1 00:40:08.429 00:40:08.429 ' 00:40:08.429 14:56:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:40:08.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:08.429 --rc genhtml_branch_coverage=1 00:40:08.429 --rc genhtml_function_coverage=1 00:40:08.429 --rc genhtml_legend=1 00:40:08.429 --rc geninfo_all_blocks=1 00:40:08.429 --rc geninfo_unexecuted_blocks=1 00:40:08.429 00:40:08.429 ' 00:40:08.429 14:56:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:40:08.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:08.429 --rc genhtml_branch_coverage=1 00:40:08.429 --rc genhtml_function_coverage=1 00:40:08.429 --rc genhtml_legend=1 00:40:08.429 --rc geninfo_all_blocks=1 00:40:08.429 --rc geninfo_unexecuted_blocks=1 00:40:08.429 00:40:08.429 ' 00:40:08.429 14:56:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:08.429 14:56:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:40:08.429 14:56:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:08.429 14:56:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:08.430 14:56:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:08.430 14:56:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:40:08.430 14:56:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:08.430 14:56:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:08.430 14:56:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:08.430 14:56:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:08.430 14:56:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:08.430 14:56:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:08.430 14:56:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:08.430 14:56:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:08.430 14:56:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:08.430 14:56:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:08.430 14:56:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:08.430 14:56:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:08.430 14:56:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:08.430 14:56:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:40:08.430 14:56:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:08.430 14:56:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:08.430 14:56:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:08.430 14:56:00 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:08.430 14:56:00 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:08.430 14:56:00 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:08.430 14:56:00 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:40:08.430 14:56:00 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:08.430 14:56:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:40:08.430 14:56:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:08.430 14:56:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:08.430 14:56:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:08.430 14:56:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:08.430 14:56:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:08.430 14:56:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:08.430 14:56:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:08.430 14:56:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:08.430 14:56:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:08.430 14:56:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:08.430 14:56:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:40:08.430 14:56:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:40:08.430 14:56:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:40:08.430 14:56:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:40:08.430 14:56:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:08.430 14:56:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@472 -- # prepare_net_devs 00:40:08.430 14:56:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@434 -- # local -g is_hw=no 00:40:08.430 14:56:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@436 -- # remove_spdk_ns 00:40:08.430 14:56:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:08.430 14:56:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:08.430 14:56:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:08.430 14:56:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:40:08.430 14:56:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:40:08.430 14:56:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:40:08.430 14:56:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:10.373 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:10.373 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ up == up ]] 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:10.373 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ up == up ]] 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:10.373 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # is_hw=yes 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:10.373 14:56:02 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:10.373 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:10.373 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.355 ms 00:40:10.373 00:40:10.373 --- 10.0.0.2 ping statistics --- 00:40:10.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:10.373 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:10.373 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:10.373 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:40:10.373 00:40:10.373 --- 10.0.0.1 ping statistics --- 00:40:10.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:10.373 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # return 0 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:10.373 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@505 -- # nvmfpid=1586599 00:40:10.374 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:40:10.374 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@506 -- # waitforlisten 1586599 00:40:10.374 14:56:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@831 -- # '[' -z 1586599 ']' 00:40:10.374 14:56:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:10.374 14:56:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:10.374 14:56:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:10.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:10.374 14:56:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:10.374 14:56:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:10.374 [2024-11-02 14:56:02.337254] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:10.374 [2024-11-02 14:56:02.338375] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:40:10.374 [2024-11-02 14:56:02.338428] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:10.374 [2024-11-02 14:56:02.405009] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:40:10.657 [2024-11-02 14:56:02.490908] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:40:10.657 [2024-11-02 14:56:02.490960] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:10.657 [2024-11-02 14:56:02.490981] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:10.657 [2024-11-02 14:56:02.490992] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:10.657 [2024-11-02 14:56:02.491002] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:10.657 [2024-11-02 14:56:02.491057] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:40:10.657 [2024-11-02 14:56:02.491061] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:40:10.657 [2024-11-02 14:56:02.576164] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:10.657 [2024-11-02 14:56:02.576216] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:10.657 [2024-11-02 14:56:02.576428] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:10.657 14:56:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:10.657 14:56:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # return 0 00:40:10.657 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:40:10.657 14:56:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:10.657 14:56:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:10.657 14:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:10.657 14:56:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:40:10.657 14:56:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:40:10.657 14:56:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:40:10.657 14:56:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:40:10.657 5000+0 records in 00:40:10.657 5000+0 records out 00:40:10.657 10240000 bytes (10 MB, 9.8 MiB) copied, 0.014625 s, 700 MB/s 00:40:10.657 14:56:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:40:10.657 14:56:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:10.657 14:56:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:10.657 AIO0 00:40:10.657 14:56:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:10.657 14:56:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:40:10.657 14:56:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:10.657 14:56:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:10.657 [2024-11-02 14:56:02.687672] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:10.657 14:56:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:10.657 14:56:02 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:40:10.657 14:56:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:10.657 14:56:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:10.915 14:56:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:10.915 14:56:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:40:10.915 14:56:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:10.915 14:56:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:10.915 14:56:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:10.915 14:56:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:10.915 14:56:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:10.915 14:56:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:10.915 [2024-11-02 14:56:02.723919] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:10.915 14:56:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:10.915 14:56:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:40:10.915 14:56:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1586599 0 00:40:10.915 14:56:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1586599 0 idle 00:40:10.915 14:56:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1586599 00:40:10.915 14:56:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:10.915 14:56:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:10.915 14:56:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:10.915 14:56:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:10.915 14:56:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:10.915 14:56:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:10.915 14:56:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:10.915 14:56:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:10.915 14:56:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:10.915 14:56:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1586599 -w 256 00:40:10.915 14:56:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:10.915 14:56:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1586599 root 20 0 128.2g 46464 33792 S 0.0 0.1 0:00.29 reactor_0' 00:40:10.915 14:56:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1586599 root 20 0 128.2g 46464 33792 S 0.0 0.1 0:00.29 reactor_0 00:40:10.915 14:56:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:10.915 14:56:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:10.915 14:56:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:10.915 14:56:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:40:10.915 14:56:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:10.915 14:56:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:10.916 14:56:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:10.916 14:56:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:10.916 14:56:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:40:10.916 14:56:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1586599 1 00:40:10.916 14:56:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1586599 1 idle 00:40:10.916 14:56:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1586599 00:40:10.916 14:56:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:40:10.916 14:56:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:10.916 14:56:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:10.916 14:56:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:10.916 14:56:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:10.916 14:56:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:10.916 14:56:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:10.916 14:56:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:10.916 14:56:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:10.916 14:56:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1586599 -w 256 00:40:10.916 14:56:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:11.174 14:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1586603 root 20 0 128.2g 46464 33792 S 0.0 0.1 0:00.00 reactor_1' 00:40:11.174 14:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1586603 root 20 0 128.2g 46464 33792 S 0.0 0.1 0:00.00 reactor_1 00:40:11.174 14:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:11.174 14:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:11.174 14:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:11.174 14:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:11.174 14:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:11.174 14:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:11.174 14:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:11.174 14:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:11.174 14:56:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:40:11.174 14:56:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=1586652 00:40:11.174 14:56:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:40:11.174 14:56:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 
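The reactor_is_busy / reactor_is_idle helpers traced here reduce to sampling the %CPU of the reactor_N thread from one batch-mode top pass and comparing it to a 30% threshold. A condensed sketch of that check; the function wrapper is an assumption, but the top/grep/sed/awk pipeline and the thresholds are the ones shown above:

    reactor_cpu() {   # $1 = nvmf_tgt pid, $2 = reactor index
        # one batch-mode sample with threads visible and wide output; keep the %CPU field
        top -bHn 1 -p "$1" -w 256 | grep "reactor_$2" | sed -e 's/^\s*//g' | awk '{print $9}'
    }
    rate=$(reactor_cpu 1586599 0)   # e.g. "0.0" when idle, "93.3" under spdk_nvme_perf load
    rate=${rate%.*}                 # keep the integer part, as the helper does
    if (( ${rate:-0} > 30 )); then echo "reactor_0 is busy"; else echo "reactor_0 is idle"; fi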
00:40:11.174 14:56:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:40:11.174 14:56:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1586599 0 00:40:11.174 14:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1586599 0 busy 00:40:11.174 14:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1586599 00:40:11.174 14:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:11.174 14:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:40:11.174 14:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:40:11.174 14:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:11.174 14:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:40:11.174 14:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:11.174 14:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:11.174 14:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:11.174 14:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1586599 -w 256 00:40:11.174 14:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:11.433 14:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1586599 root 20 0 128.2g 47232 33792 R 93.3 0.1 0:00.43 reactor_0' 00:40:11.433 14:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1586599 root 20 0 128.2g 47232 33792 R 93.3 0.1 0:00.43 reactor_0 00:40:11.433 14:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:11.433 14:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:11.433 14:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.3 00:40:11.433 14:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=93 00:40:11.433 14:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:40:11.433 14:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:40:11.433 14:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:40:11.433 14:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:11.433 14:56:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:40:11.433 14:56:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:40:11.433 14:56:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1586599 1 00:40:11.433 14:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1586599 1 busy 00:40:11.433 14:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1586599 00:40:11.433 14:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:40:11.433 14:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:40:11.433 14:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:40:11.434 14:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:11.434 14:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:40:11.434 14:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:11.434 14:56:03 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 00:40:11.434 14:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:11.434 14:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1586599 -w 256 00:40:11.434 14:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:11.434 14:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1586603 root 20 0 128.2g 47232 33792 R 93.3 0.1 0:00.23 reactor_1' 00:40:11.434 14:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1586603 root 20 0 128.2g 47232 33792 R 93.3 0.1 0:00.23 reactor_1 00:40:11.434 14:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:11.434 14:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:11.434 14:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.3 00:40:11.434 14:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=93 00:40:11.434 14:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:40:11.434 14:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:40:11.434 14:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:40:11.434 14:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:11.434 14:56:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 1586652 00:40:21.405 Initializing NVMe Controllers 00:40:21.405 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:40:21.405 Controller IO queue size 256, less than required. 00:40:21.405 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:40:21.405 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:40:21.405 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:40:21.405 Initialization complete. Launching workers. 
00:40:21.405 ======================================================== 00:40:21.405 Latency(us) 00:40:21.405 Device Information : IOPS MiB/s Average min max 00:40:21.405 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 13835.30 54.04 18514.86 4492.73 22605.44 00:40:21.405 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 13251.70 51.76 19331.65 4004.06 22156.61 00:40:21.405 ======================================================== 00:40:21.405 Total : 27087.00 105.81 18914.46 4004.06 22605.44 00:40:21.405 00:40:21.405 14:56:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:40:21.405 14:56:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1586599 0 00:40:21.405 14:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1586599 0 idle 00:40:21.405 14:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1586599 00:40:21.405 14:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:21.405 14:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:21.405 14:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:21.405 14:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:21.405 14:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:21.405 14:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:21.405 14:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:21.405 14:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:21.405 14:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:21.405 14:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1586599 -w 256 00:40:21.405 14:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:21.405 14:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1586599 root 20 0 128.2g 47232 33792 S 0.0 0.1 0:20.24 reactor_0' 00:40:21.405 14:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1586599 root 20 0 128.2g 47232 33792 S 0.0 0.1 0:20.24 reactor_0 00:40:21.405 14:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:21.405 14:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:21.405 14:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:21.405 14:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:21.405 14:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:21.405 14:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:21.405 14:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:21.405 14:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:21.405 14:56:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:40:21.405 14:56:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1586599 1 00:40:21.405 14:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1586599 1 idle 00:40:21.405 14:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1586599 00:40:21.405 14:56:13 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:40:21.405 14:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:21.405 14:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:21.405 14:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:21.405 14:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:21.405 14:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:21.405 14:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:21.405 14:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:21.405 14:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:21.405 14:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1586599 -w 256 00:40:21.405 14:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:21.664 14:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1586603 root 20 0 128.2g 47232 33792 S 0.0 0.1 0:09.97 reactor_1' 00:40:21.664 14:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1586603 root 20 0 128.2g 47232 33792 S 0.0 0.1 0:09.97 reactor_1 00:40:21.664 14:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:21.664 14:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:21.664 14:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:21.664 14:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:21.664 14:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:21.664 14:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:21.664 14:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:21.664 14:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:21.664 14:56:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:40:21.923 14:56:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:40:21.923 14:56:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1198 -- # local i=0 00:40:21.923 14:56:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:40:21.923 14:56:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:40:21.923 14:56:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1205 -- # sleep 2 00:40:23.841 14:56:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:40:23.841 14:56:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:40:23.841 14:56:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:40:23.841 14:56:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:40:23.841 14:56:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:40:23.841 14:56:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # return 0 00:40:23.841 14:56:15 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:40:23.841 14:56:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1586599 0 00:40:23.841 14:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1586599 0 idle 00:40:23.841 14:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1586599 00:40:23.841 14:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:23.841 14:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:23.841 14:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:23.841 14:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:23.841 14:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:23.841 14:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:23.841 14:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:23.842 14:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:23.842 14:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:23.842 14:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1586599 -w 256 00:40:23.842 14:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:24.100 14:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1586599 root 20 0 128.2g 59520 33792 S 0.0 0.1 0:20.33 reactor_0' 00:40:24.100 14:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1586599 root 20 0 128.2g 59520 33792 S 0.0 0.1 0:20.33 reactor_0 00:40:24.100 14:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:24.100 14:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:24.100 14:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:24.100 14:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:24.100 14:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:24.100 14:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:24.100 14:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:24.100 14:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:24.100 14:56:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:40:24.100 14:56:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1586599 1 00:40:24.100 14:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1586599 1 idle 00:40:24.100 14:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1586599 00:40:24.100 14:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:40:24.100 14:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:24.100 14:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:24.100 14:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:24.100 14:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:24.100 14:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:24.100 14:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
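Stripped of the xtrace noise, the host-side steps exercised in this part of the test are an nvme-cli connect over TCP, a poll until the namespace shows up by serial, and a disconnect. A condensed sketch using the same address, NQN and serial as the log; NVME_HOSTNQN and NVME_HOSTID stand for the host identity variables set up in nvmf/common.sh, and the 15 x 2 s poll loop approximates the waitforserial helper:

nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
for ((i = 0; i <= 15; i++)); do
    # The namespace is attached once a block device reports the SPDK serial.
    if [[ $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) -ge 1 ]]; then
        break
    fi
    sleep 2
done
# ... I/O against the attached namespace would go here ...
nvme disconnect -n nqn.2016-06.io.spdk:cnode1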
00:40:24.100 14:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:24.100 14:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:24.100 14:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1586599 -w 256 00:40:24.100 14:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:24.100 14:56:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1586603 root 20 0 128.2g 59520 33792 S 0.0 0.1 0:10.00 reactor_1' 00:40:24.100 14:56:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1586603 root 20 0 128.2g 59520 33792 S 0.0 0.1 0:10.00 reactor_1 00:40:24.100 14:56:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:24.100 14:56:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:24.100 14:56:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:24.100 14:56:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:24.100 14:56:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:24.100 14:56:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:24.100 14:56:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:24.100 14:56:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:24.100 14:56:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:40:24.360 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:40:24.360 14:56:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:40:24.360 14:56:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1219 -- # local i=0 00:40:24.360 14:56:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:40:24.360 14:56:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:24.360 14:56:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:40:24.360 14:56:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:24.360 14:56:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # return 0 00:40:24.360 14:56:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:40:24.360 14:56:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:40:24.360 14:56:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # nvmfcleanup 00:40:24.360 14:56:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:40:24.360 14:56:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:24.360 14:56:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:40:24.360 14:56:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:24.360 14:56:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:24.360 rmmod nvme_tcp 00:40:24.360 rmmod nvme_fabrics 00:40:24.360 rmmod nvme_keyring 00:40:24.360 14:56:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:24.360 14:56:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:40:24.360 14:56:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:40:24.360 14:56:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@513 -- # '[' -n 
1586599 ']' 00:40:24.360 14:56:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@514 -- # killprocess 1586599 00:40:24.360 14:56:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@950 -- # '[' -z 1586599 ']' 00:40:24.360 14:56:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # kill -0 1586599 00:40:24.360 14:56:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # uname 00:40:24.360 14:56:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:24.360 14:56:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1586599 00:40:24.360 14:56:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:40:24.360 14:56:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:40:24.360 14:56:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1586599' 00:40:24.360 killing process with pid 1586599 00:40:24.360 14:56:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@969 -- # kill 1586599 00:40:24.360 14:56:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@974 -- # wait 1586599 00:40:24.619 14:56:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:40:24.619 14:56:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:40:24.619 14:56:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:40:24.619 14:56:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:40:24.619 14:56:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@787 -- # iptables-save 00:40:24.619 14:56:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:40:24.619 14:56:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@787 -- # iptables-restore 00:40:24.878 14:56:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:24.878 14:56:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:24.878 14:56:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:24.878 14:56:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:24.878 14:56:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:26.783 14:56:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:26.783 00:40:26.783 real 0m18.794s 00:40:26.783 user 0m36.603s 00:40:26.783 sys 0m6.716s 00:40:26.783 14:56:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:26.783 14:56:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:26.783 ************************************ 00:40:26.783 END TEST nvmf_interrupt 00:40:26.783 ************************************ 00:40:26.783 00:40:26.783 real 33m3.896s 00:40:26.783 user 87m16.305s 00:40:26.783 sys 8m9.062s 00:40:26.783 14:56:18 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:26.783 14:56:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:26.783 ************************************ 00:40:26.783 END TEST nvmf_tcp 00:40:26.783 ************************************ 00:40:26.783 14:56:18 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:40:26.783 14:56:18 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:40:26.783 14:56:18 -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:40:26.783 14:56:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:26.783 14:56:18 -- common/autotest_common.sh@10 -- # set +x 00:40:26.783 ************************************ 00:40:26.783 START TEST spdkcli_nvmf_tcp 00:40:26.783 ************************************ 00:40:26.783 14:56:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:40:26.783 * Looking for test storage... 00:40:26.783 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:40:26.783 14:56:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:40:26.783 14:56:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:40:26.783 14:56:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:40:27.042 14:56:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:40:27.042 14:56:18 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:27.042 14:56:18 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:27.042 14:56:18 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:27.042 14:56:18 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:40:27.042 14:56:18 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:40:27.042 14:56:18 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:40:27.042 14:56:18 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:40:27.042 14:56:18 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:40:27.042 14:56:18 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:40:27.042 14:56:18 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:40:27.042 14:56:18 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:27.042 14:56:18 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:40:27.042 14:56:18 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:40:27.042 14:56:18 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:27.042 14:56:18 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:27.042 14:56:18 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:40:27.042 14:56:18 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:40:27.042 14:56:18 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:27.042 14:56:18 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:40:27.042 14:56:18 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:40:27.042 14:56:18 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:40:27.042 14:56:18 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:40:27.042 14:56:18 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:27.042 14:56:18 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:40:27.042 14:56:18 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:40:27.042 14:56:18 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:27.042 14:56:18 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:27.042 14:56:18 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:40:27.043 14:56:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:27.043 14:56:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:40:27.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:27.043 --rc genhtml_branch_coverage=1 00:40:27.043 --rc genhtml_function_coverage=1 00:40:27.043 --rc genhtml_legend=1 00:40:27.043 --rc geninfo_all_blocks=1 00:40:27.043 --rc geninfo_unexecuted_blocks=1 00:40:27.043 00:40:27.043 ' 00:40:27.043 14:56:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:40:27.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:27.043 --rc genhtml_branch_coverage=1 00:40:27.043 --rc genhtml_function_coverage=1 00:40:27.043 --rc genhtml_legend=1 00:40:27.043 --rc geninfo_all_blocks=1 00:40:27.043 --rc geninfo_unexecuted_blocks=1 00:40:27.043 00:40:27.043 ' 00:40:27.043 14:56:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:40:27.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:27.043 --rc genhtml_branch_coverage=1 00:40:27.043 --rc genhtml_function_coverage=1 00:40:27.043 --rc genhtml_legend=1 00:40:27.043 --rc geninfo_all_blocks=1 00:40:27.043 --rc geninfo_unexecuted_blocks=1 00:40:27.043 00:40:27.043 ' 00:40:27.043 14:56:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:40:27.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:27.043 --rc genhtml_branch_coverage=1 00:40:27.043 --rc genhtml_function_coverage=1 00:40:27.043 --rc genhtml_legend=1 00:40:27.043 --rc geninfo_all_blocks=1 00:40:27.043 --rc geninfo_unexecuted_blocks=1 00:40:27.043 00:40:27.043 ' 00:40:27.043 14:56:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:40:27.043 14:56:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:40:27.043 14:56:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:40:27.043 14:56:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:27.043 14:56:18 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:40:27.043 
14:56:18 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:27.043 14:56:18 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:27.043 14:56:18 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:27.043 14:56:18 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:27.043 14:56:18 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:27.043 14:56:18 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:27.043 14:56:18 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:27.043 14:56:18 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:27.043 14:56:18 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:27.043 14:56:18 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:27.043 14:56:18 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:27.043 14:56:18 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:27.043 14:56:18 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:27.043 14:56:18 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:27.043 14:56:18 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:27.043 14:56:18 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:27.043 14:56:18 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:27.043 14:56:18 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:40:27.043 14:56:18 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:27.043 14:56:18 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:27.043 14:56:18 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:27.043 14:56:18 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:27.043 14:56:18 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:27.043 14:56:18 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:27.043 14:56:18 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:40:27.043 14:56:18 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:27.043 14:56:18 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:40:27.043 14:56:18 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:27.043 14:56:18 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:27.043 14:56:18 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:27.043 14:56:18 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:27.043 14:56:18 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:27.043 14:56:18 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:27.043 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:27.043 14:56:18 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:27.043 14:56:18 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:27.043 14:56:18 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:27.043 14:56:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:40:27.043 14:56:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:40:27.043 14:56:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:40:27.043 14:56:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:40:27.043 14:56:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:27.043 14:56:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:27.043 14:56:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:40:27.043 14:56:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1588641 00:40:27.043 14:56:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:40:27.043 14:56:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1588641 00:40:27.043 14:56:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 1588641 ']' 00:40:27.043 14:56:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:27.043 14:56:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:27.043 14:56:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:27.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:27.043 14:56:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:27.043 14:56:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:27.043 [2024-11-02 14:56:18.988081] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
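Before any spdkcli commands run, the test launches the target application and waits for its RPC socket to come up. A minimal sketch of that bring-up, assuming the workspace path shown in the log; polling rpc_get_methods on the default /var/tmp/spdk.sock socket is one way to approximate the waitforlisten helper used here:

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK_DIR/build/bin/nvmf_tgt" -m 0x3 -p 0 &
tgt_pid=$!
# Wait until the target answers on its default RPC socket.
until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done
# ... drive the target with spdkcli_job.py / spdkcli.py as shown below ...
kill "$tgt_pid" && wait "$tgt_pid"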
00:40:27.043 [2024-11-02 14:56:18.988172] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1588641 ] 00:40:27.043 [2024-11-02 14:56:19.052206] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:40:27.302 [2024-11-02 14:56:19.150467] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:40:27.302 [2024-11-02 14:56:19.150473] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:40:27.302 14:56:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:27.302 14:56:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:40:27.302 14:56:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:40:27.302 14:56:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:27.302 14:56:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:27.302 14:56:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:40:27.302 14:56:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:40:27.302 14:56:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:40:27.302 14:56:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:27.302 14:56:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:27.302 14:56:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:40:27.302 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:40:27.302 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:40:27.302 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:40:27.302 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:40:27.302 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:40:27.302 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:40:27.302 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:40:27.302 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:40:27.302 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:40:27.302 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:40:27.302 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:27.302 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:40:27.302 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:40:27.302 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:27.302 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:40:27.302 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:40:27.302 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:40:27.302 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:40:27.302 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:27.302 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:40:27.302 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:40:27.302 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:40:27.302 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:40:27.302 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:27.302 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:40:27.302 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:40:27.302 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:40:27.302 ' 00:40:30.583 [2024-11-02 14:56:21.973494] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:31.516 [2024-11-02 14:56:23.265974] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:40:34.044 [2024-11-02 14:56:25.653521] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:40:35.942 [2024-11-02 14:56:27.703922] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:40:37.315 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:40:37.315 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:40:37.315 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:40:37.315 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:40:37.316 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:40:37.316 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:40:37.316 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:40:37.316 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:40:37.316 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:40:37.316 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:40:37.316 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:37.316 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:37.316 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:40:37.316 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:37.316 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:37.316 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:40:37.316 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:37.316 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:40:37.316 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:40:37.316 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:37.316 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:40:37.316 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:40:37.316 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:40:37.316 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:40:37.316 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:37.316 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:40:37.316 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:40:37.316 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:40:37.574 14:56:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:40:37.574 14:56:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:37.574 14:56:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:37.574 14:56:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:40:37.574 14:56:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:37.574 14:56:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:37.574 14:56:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:40:37.574 14:56:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:40:37.831 14:56:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:40:38.089 14:56:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:40:38.089 14:56:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:40:38.089 14:56:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:38.089 14:56:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:38.089 
14:56:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:40:38.089 14:56:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:38.089 14:56:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:38.089 14:56:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:40:38.089 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:40:38.089 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:40:38.089 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:40:38.089 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:40:38.089 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:40:38.089 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:40:38.089 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:40:38.089 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:40:38.090 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:40:38.090 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:40:38.090 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:40:38.090 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:40:38.090 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:40:38.090 ' 00:40:43.352 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:40:43.352 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:40:43.352 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:40:43.352 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:40:43.352 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:40:43.352 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:40:43.352 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:40:43.352 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:40:43.352 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:40:43.353 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:40:43.353 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:40:43.353 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:40:43.353 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:40:43.353 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:40:43.353 14:56:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:40:43.353 14:56:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:43.353 14:56:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:43.611 
14:56:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1588641 00:40:43.611 14:56:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 1588641 ']' 00:40:43.611 14:56:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 1588641 00:40:43.611 14:56:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:40:43.611 14:56:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:43.611 14:56:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1588641 00:40:43.611 14:56:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:40:43.611 14:56:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:40:43.611 14:56:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1588641' 00:40:43.611 killing process with pid 1588641 00:40:43.611 14:56:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 1588641 00:40:43.611 14:56:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 1588641 00:40:43.870 14:56:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:40:43.870 14:56:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:40:43.870 14:56:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1588641 ']' 00:40:43.870 14:56:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1588641 00:40:43.870 14:56:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 1588641 ']' 00:40:43.870 14:56:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 1588641 00:40:43.870 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1588641) - No such process 00:40:43.870 14:56:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 1588641 is not found' 00:40:43.870 Process with pid 1588641 is not found 00:40:43.870 14:56:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:40:43.870 14:56:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:40:43.870 14:56:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:40:43.870 00:40:43.870 real 0m16.914s 00:40:43.870 user 0m36.237s 00:40:43.870 sys 0m0.786s 00:40:43.870 14:56:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:43.870 14:56:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:43.870 ************************************ 00:40:43.870 END TEST spdkcli_nvmf_tcp 00:40:43.870 ************************************ 00:40:43.870 14:56:35 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:40:43.870 14:56:35 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:40:43.870 14:56:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:43.870 14:56:35 -- common/autotest_common.sh@10 -- # set +x 00:40:43.870 ************************************ 00:40:43.870 START TEST nvmf_identify_passthru 00:40:43.870 ************************************ 00:40:43.870 14:56:35 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:40:43.870 * Looking for test 
storage... 00:40:43.870 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:43.870 14:56:35 nvmf_identify_passthru -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:40:43.870 14:56:35 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # lcov --version 00:40:43.870 14:56:35 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:40:43.870 14:56:35 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:40:43.870 14:56:35 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:43.870 14:56:35 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:43.870 14:56:35 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:43.870 14:56:35 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:40:43.870 14:56:35 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:40:43.870 14:56:35 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:40:43.870 14:56:35 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:40:43.870 14:56:35 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:40:43.870 14:56:35 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:40:43.870 14:56:35 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:40:43.870 14:56:35 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:43.870 14:56:35 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:40:43.870 14:56:35 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:40:43.870 14:56:35 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:43.870 14:56:35 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:43.870 14:56:35 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:40:43.870 14:56:35 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:40:43.870 14:56:35 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:43.870 14:56:35 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:40:43.870 14:56:35 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:40:43.870 14:56:35 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:40:43.870 14:56:35 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:40:43.870 14:56:35 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:43.870 14:56:35 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:40:43.870 14:56:35 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:40:43.870 14:56:35 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:43.870 14:56:35 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:43.870 14:56:35 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:40:43.870 14:56:35 nvmf_identify_passthru -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:43.870 14:56:35 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:40:43.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:43.870 --rc genhtml_branch_coverage=1 00:40:43.870 --rc genhtml_function_coverage=1 00:40:43.870 --rc genhtml_legend=1 00:40:43.870 --rc geninfo_all_blocks=1 00:40:43.870 --rc geninfo_unexecuted_blocks=1 00:40:43.870 00:40:43.870 ' 00:40:43.870 14:56:35 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:40:43.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:43.870 --rc genhtml_branch_coverage=1 00:40:43.870 --rc genhtml_function_coverage=1 00:40:43.870 --rc genhtml_legend=1 00:40:43.870 --rc geninfo_all_blocks=1 00:40:43.870 --rc geninfo_unexecuted_blocks=1 00:40:43.870 00:40:43.870 ' 00:40:43.870 14:56:35 nvmf_identify_passthru -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:40:43.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:43.870 --rc genhtml_branch_coverage=1 00:40:43.870 --rc genhtml_function_coverage=1 00:40:43.870 --rc genhtml_legend=1 00:40:43.870 --rc geninfo_all_blocks=1 00:40:43.870 --rc geninfo_unexecuted_blocks=1 00:40:43.870 00:40:43.870 ' 00:40:43.870 14:56:35 nvmf_identify_passthru -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:40:43.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:43.870 --rc genhtml_branch_coverage=1 00:40:43.870 --rc genhtml_function_coverage=1 00:40:43.870 --rc genhtml_legend=1 00:40:43.870 --rc geninfo_all_blocks=1 00:40:43.870 --rc geninfo_unexecuted_blocks=1 00:40:43.870 00:40:43.870 ' 00:40:43.870 14:56:35 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:43.870 14:56:35 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:40:43.870 14:56:35 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:43.870 14:56:35 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:43.870 14:56:35 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:43.870 14:56:35 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:40:43.870 14:56:35 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:43.870 14:56:35 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:43.870 14:56:35 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:43.870 14:56:35 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:43.870 14:56:35 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:43.870 14:56:35 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:43.870 14:56:35 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:43.870 14:56:35 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:43.870 14:56:35 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:43.870 14:56:35 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:43.870 14:56:35 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:43.870 14:56:35 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:43.870 14:56:35 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:43.870 14:56:35 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:40:43.870 14:56:35 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:43.870 14:56:35 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:43.870 14:56:35 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:43.870 14:56:35 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:43.870 14:56:35 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:43.871 14:56:35 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:43.871 14:56:35 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:40:43.871 14:56:35 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:43.871 14:56:35 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:40:43.871 14:56:35 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:43.871 14:56:35 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:43.871 14:56:35 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:43.871 14:56:35 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:43.871 14:56:35 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:43.871 14:56:35 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:43.871 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:43.871 14:56:35 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:43.871 14:56:35 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:43.871 14:56:35 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:44.128 14:56:35 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:44.128 14:56:35 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:40:44.128 14:56:35 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:44.128 14:56:35 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:44.128 14:56:35 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:44.128 14:56:35 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:44.128 14:56:35 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:44.128 14:56:35 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:44.128 14:56:35 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:40:44.129 14:56:35 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:44.129 14:56:35 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:40:44.129 14:56:35 nvmf_identify_passthru -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:40:44.129 14:56:35 nvmf_identify_passthru -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:44.129 14:56:35 nvmf_identify_passthru -- nvmf/common.sh@472 -- # prepare_net_devs 00:40:44.129 14:56:35 nvmf_identify_passthru -- nvmf/common.sh@434 -- # local -g is_hw=no 00:40:44.129 14:56:35 nvmf_identify_passthru -- nvmf/common.sh@436 -- # remove_spdk_ns 00:40:44.129 14:56:35 nvmf_identify_passthru -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:44.129 14:56:35 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:44.129 14:56:35 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:44.129 14:56:35 nvmf_identify_passthru -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:40:44.129 14:56:35 nvmf_identify_passthru -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:40:44.129 14:56:35 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:40:44.129 14:56:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:46.029 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:46.029 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:40:46.029 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:46.029 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:46.029 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:46.029 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:46.029 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:46.029 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:40:46.029 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:46.029 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:40:46.029 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:40:46.029 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:40:46.029 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:40:46.029 14:56:37 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:40:46.029 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:40:46.029 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:46.029 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:46.029 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:46.029 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:46.029 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:46.029 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:46.029 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:46.029 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:46.029 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:46.029 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:46.029 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:46.029 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:40:46.029 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:40:46.029 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:40:46.030 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:40:46.030 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:40:46.030 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:40:46.030 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:40:46.030 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:46.030 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:46.030 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:40:46.030 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:40:46.030 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:46.030 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:46.030 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:40:46.030 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:40:46.030 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:46.030 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:40:46.030 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:40:46.030 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:40:46.030 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:46.030 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:46.030 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:40:46.030 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:40:46.030 
14:56:37 nvmf_identify_passthru -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:40:46.030 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:40:46.030 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:40:46.030 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:46.030 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:40:46.030 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:46.030 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ up == up ]] 00:40:46.030 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:40:46.030 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:46.030 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:46.030 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:46.030 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:40:46.030 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:40:46.030 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:46.030 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:40:46.030 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:46.030 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ up == up ]] 00:40:46.030 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:40:46.030 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:46.030 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:46.030 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:46.030 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:40:46.030 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:40:46.030 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@438 -- # is_hw=yes 00:40:46.030 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:40:46.030 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:40:46.030 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:40:46.030 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:46.030 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:46.030 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:46.030 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:46.030 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:46.030 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:46.030 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:46.030 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:46.030 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:46.030 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:46.030 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:46.030 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:46.030 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:46.030 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:46.030 14:56:37 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:46.290 14:56:38 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:46.290 14:56:38 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:46.290 14:56:38 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:46.290 14:56:38 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:46.290 14:56:38 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:46.290 14:56:38 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:46.290 14:56:38 nvmf_identify_passthru -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:46.290 14:56:38 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:46.290 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:46.290 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.324 ms 00:40:46.290 00:40:46.290 --- 10.0.0.2 ping statistics --- 00:40:46.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:46.290 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:40:46.290 14:56:38 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:46.290 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:46.290 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:40:46.290 00:40:46.290 --- 10.0.0.1 ping statistics --- 00:40:46.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:46.290 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:40:46.290 14:56:38 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:46.290 14:56:38 nvmf_identify_passthru -- nvmf/common.sh@446 -- # return 0 00:40:46.290 14:56:38 nvmf_identify_passthru -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:40:46.290 14:56:38 nvmf_identify_passthru -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:46.290 14:56:38 nvmf_identify_passthru -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:40:46.290 14:56:38 nvmf_identify_passthru -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:40:46.290 14:56:38 nvmf_identify_passthru -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:46.290 14:56:38 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:40:46.290 14:56:38 nvmf_identify_passthru -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:40:46.290 14:56:38 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:40:46.290 14:56:38 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:46.290 14:56:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:46.290 14:56:38 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:40:46.290 14:56:38 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:40:46.290 14:56:38 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:40:46.290 14:56:38 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:40:46.290 14:56:38 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:40:46.290 14:56:38 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=() 00:40:46.290 14:56:38 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:40:46.290 14:56:38 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:40:46.290 14:56:38 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:40:46.290 14:56:38 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:40:46.290 14:56:38 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:40:46.290 14:56:38 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:88:00.0 00:40:46.290 14:56:38 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:88:00.0 00:40:46.290 14:56:38 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:40:46.290 14:56:38 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:40:46.290 14:56:38 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:40:46.290 14:56:38 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:40:46.290 14:56:38 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:40:50.528 14:56:42 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=PHLJ916004901P0FGN 00:40:50.528 14:56:42 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:40:50.528 14:56:42 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:40:50.528 14:56:42 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:40:54.746 14:56:46 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:40:54.746 14:56:46 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:40:54.746 14:56:46 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:54.746 14:56:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:54.746 14:56:46 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:40:54.746 14:56:46 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:54.746 14:56:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:54.746 14:56:46 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1593278 00:40:54.746 14:56:46 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:40:54.746 14:56:46 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:54.746 14:56:46 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1593278 00:40:54.746 14:56:46 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 1593278 ']' 00:40:54.746 14:56:46 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:54.746 14:56:46 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:54.746 14:56:46 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:54.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:54.746 14:56:46 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:54.746 14:56:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:54.746 [2024-11-02 14:56:46.797946] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:40:54.746 [2024-11-02 14:56:46.798053] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:55.004 [2024-11-02 14:56:46.872281] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:55.004 [2024-11-02 14:56:46.967499] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:55.004 [2024-11-02 14:56:46.967574] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:40:55.004 [2024-11-02 14:56:46.967601] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:55.004 [2024-11-02 14:56:46.967614] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:55.004 [2024-11-02 14:56:46.967626] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:55.004 [2024-11-02 14:56:46.967685] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:40:55.004 [2024-11-02 14:56:46.967739] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:40:55.004 [2024-11-02 14:56:46.967861] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:40:55.004 [2024-11-02 14:56:46.967865] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:40:55.004 14:56:47 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:55.004 14:56:47 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:40:55.262 14:56:47 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:40:55.262 14:56:47 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:55.262 14:56:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:55.262 INFO: Log level set to 20 00:40:55.262 INFO: Requests: 00:40:55.262 { 00:40:55.262 "jsonrpc": "2.0", 00:40:55.262 "method": "nvmf_set_config", 00:40:55.262 "id": 1, 00:40:55.262 "params": { 00:40:55.262 "admin_cmd_passthru": { 00:40:55.262 "identify_ctrlr": true 00:40:55.262 } 00:40:55.263 } 00:40:55.263 } 00:40:55.263 00:40:55.263 INFO: response: 00:40:55.263 { 00:40:55.263 "jsonrpc": "2.0", 00:40:55.263 "id": 1, 00:40:55.263 "result": true 00:40:55.263 } 00:40:55.263 00:40:55.263 14:56:47 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:55.263 14:56:47 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:40:55.263 14:56:47 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:55.263 14:56:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:55.263 INFO: Setting log level to 20 00:40:55.263 INFO: Setting log level to 20 00:40:55.263 INFO: Log level set to 20 00:40:55.263 INFO: Log level set to 20 00:40:55.263 INFO: Requests: 00:40:55.263 { 00:40:55.263 "jsonrpc": "2.0", 00:40:55.263 "method": "framework_start_init", 00:40:55.263 "id": 1 00:40:55.263 } 00:40:55.263 00:40:55.263 INFO: Requests: 00:40:55.263 { 00:40:55.263 "jsonrpc": "2.0", 00:40:55.263 "method": "framework_start_init", 00:40:55.263 "id": 1 00:40:55.263 } 00:40:55.263 00:40:55.263 [2024-11-02 14:56:47.168287] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:40:55.263 INFO: response: 00:40:55.263 { 00:40:55.263 "jsonrpc": "2.0", 00:40:55.263 "id": 1, 00:40:55.263 "result": true 00:40:55.263 } 00:40:55.263 00:40:55.263 INFO: response: 00:40:55.263 { 00:40:55.263 "jsonrpc": "2.0", 00:40:55.263 "id": 1, 00:40:55.263 "result": true 00:40:55.263 } 00:40:55.263 00:40:55.263 14:56:47 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:55.263 14:56:47 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:55.263 14:56:47 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:55.263 14:56:47 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:40:55.263 INFO: Setting log level to 40 00:40:55.263 INFO: Setting log level to 40 00:40:55.263 INFO: Setting log level to 40 00:40:55.263 [2024-11-02 14:56:47.178274] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:55.263 14:56:47 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:55.263 14:56:47 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:40:55.263 14:56:47 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:55.263 14:56:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:55.263 14:56:47 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:40:55.263 14:56:47 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:55.263 14:56:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:58.541 Nvme0n1 00:40:58.541 14:56:50 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:58.541 14:56:50 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:40:58.541 14:56:50 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:58.541 14:56:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:58.541 14:56:50 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:58.541 14:56:50 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:40:58.541 14:56:50 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:58.541 14:56:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:58.541 14:56:50 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:58.541 14:56:50 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:58.541 14:56:50 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:58.542 14:56:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:58.542 [2024-11-02 14:56:50.068328] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:58.542 14:56:50 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:58.542 14:56:50 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:40:58.542 14:56:50 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:58.542 14:56:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:58.542 [ 00:40:58.542 { 00:40:58.542 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:40:58.542 "subtype": "Discovery", 00:40:58.542 "listen_addresses": [], 00:40:58.542 "allow_any_host": true, 00:40:58.542 "hosts": [] 00:40:58.542 }, 00:40:58.542 { 00:40:58.542 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:40:58.542 "subtype": "NVMe", 00:40:58.542 "listen_addresses": [ 00:40:58.542 { 00:40:58.542 "trtype": "TCP", 00:40:58.542 "adrfam": "IPv4", 00:40:58.542 "traddr": "10.0.0.2", 00:40:58.542 "trsvcid": "4420" 00:40:58.542 } 00:40:58.542 ], 00:40:58.542 "allow_any_host": true, 00:40:58.542 "hosts": [], 00:40:58.542 "serial_number": 
"SPDK00000000000001", 00:40:58.542 "model_number": "SPDK bdev Controller", 00:40:58.542 "max_namespaces": 1, 00:40:58.542 "min_cntlid": 1, 00:40:58.542 "max_cntlid": 65519, 00:40:58.542 "namespaces": [ 00:40:58.542 { 00:40:58.542 "nsid": 1, 00:40:58.542 "bdev_name": "Nvme0n1", 00:40:58.542 "name": "Nvme0n1", 00:40:58.542 "nguid": "143BFE9FAB6C4C2398B33094E8E88866", 00:40:58.542 "uuid": "143bfe9f-ab6c-4c23-98b3-3094e8e88866" 00:40:58.542 } 00:40:58.542 ] 00:40:58.542 } 00:40:58.542 ] 00:40:58.542 14:56:50 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:58.542 14:56:50 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:40:58.542 14:56:50 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:40:58.542 14:56:50 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:40:58.542 14:56:50 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:40:58.542 14:56:50 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:40:58.542 14:56:50 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:40:58.542 14:56:50 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:40:58.542 14:56:50 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:40:58.542 14:56:50 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:40:58.542 14:56:50 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:40:58.542 14:56:50 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:58.542 14:56:50 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:58.542 14:56:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:58.542 14:56:50 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:58.542 14:56:50 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:40:58.542 14:56:50 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:40:58.542 14:56:50 nvmf_identify_passthru -- nvmf/common.sh@512 -- # nvmfcleanup 00:40:58.542 14:56:50 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:40:58.542 14:56:50 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:58.542 14:56:50 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:40:58.542 14:56:50 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:58.542 14:56:50 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:58.542 rmmod nvme_tcp 00:40:58.542 rmmod nvme_fabrics 00:40:58.542 rmmod nvme_keyring 00:40:58.542 14:56:50 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:58.542 14:56:50 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:40:58.542 14:56:50 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:40:58.542 14:56:50 nvmf_identify_passthru -- nvmf/common.sh@513 -- # 
'[' -n 1593278 ']' 00:40:58.542 14:56:50 nvmf_identify_passthru -- nvmf/common.sh@514 -- # killprocess 1593278 00:40:58.542 14:56:50 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 1593278 ']' 00:40:58.542 14:56:50 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 1593278 00:40:58.542 14:56:50 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:40:58.542 14:56:50 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:58.542 14:56:50 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1593278 00:40:58.542 14:56:50 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:40:58.542 14:56:50 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:40:58.542 14:56:50 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1593278' 00:40:58.542 killing process with pid 1593278 00:40:58.542 14:56:50 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 1593278 00:40:58.542 14:56:50 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 1593278 00:41:00.441 14:56:52 nvmf_identify_passthru -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:41:00.441 14:56:52 nvmf_identify_passthru -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:41:00.441 14:56:52 nvmf_identify_passthru -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:41:00.441 14:56:52 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:41:00.441 14:56:52 nvmf_identify_passthru -- nvmf/common.sh@787 -- # iptables-save 00:41:00.441 14:56:52 nvmf_identify_passthru -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:41:00.441 14:56:52 nvmf_identify_passthru -- nvmf/common.sh@787 -- # iptables-restore 00:41:00.441 14:56:52 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:00.441 14:56:52 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:00.441 14:56:52 nvmf_identify_passthru -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:00.441 14:56:52 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:00.441 14:56:52 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:02.343 14:56:54 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:02.343 00:41:02.343 real 0m18.453s 00:41:02.343 user 0m27.240s 00:41:02.343 sys 0m2.475s 00:41:02.343 14:56:54 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:02.343 14:56:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:02.343 ************************************ 00:41:02.343 END TEST nvmf_identify_passthru 00:41:02.343 ************************************ 00:41:02.343 14:56:54 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:41:02.343 14:56:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:41:02.343 14:56:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:02.343 14:56:54 -- common/autotest_common.sh@10 -- # set +x 00:41:02.343 ************************************ 00:41:02.343 START TEST nvmf_dif 00:41:02.343 ************************************ 00:41:02.343 14:56:54 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:41:02.343 * Looking for test 
storage... 00:41:02.343 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:02.343 14:56:54 nvmf_dif -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:41:02.343 14:56:54 nvmf_dif -- common/autotest_common.sh@1681 -- # lcov --version 00:41:02.343 14:56:54 nvmf_dif -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:41:02.343 14:56:54 nvmf_dif -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:41:02.343 14:56:54 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:02.343 14:56:54 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:02.343 14:56:54 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:02.343 14:56:54 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:41:02.343 14:56:54 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:41:02.343 14:56:54 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:41:02.343 14:56:54 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:41:02.343 14:56:54 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:41:02.343 14:56:54 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:41:02.343 14:56:54 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:41:02.343 14:56:54 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:02.343 14:56:54 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:41:02.343 14:56:54 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:41:02.343 14:56:54 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:02.343 14:56:54 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:02.343 14:56:54 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:41:02.343 14:56:54 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:41:02.343 14:56:54 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:02.343 14:56:54 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:41:02.343 14:56:54 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:41:02.343 14:56:54 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:41:02.343 14:56:54 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:41:02.343 14:56:54 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:02.343 14:56:54 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:41:02.343 14:56:54 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:41:02.343 14:56:54 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:02.343 14:56:54 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:02.343 14:56:54 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:41:02.343 14:56:54 nvmf_dif -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:02.343 14:56:54 nvmf_dif -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:41:02.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:02.343 --rc genhtml_branch_coverage=1 00:41:02.343 --rc genhtml_function_coverage=1 00:41:02.343 --rc genhtml_legend=1 00:41:02.343 --rc geninfo_all_blocks=1 00:41:02.343 --rc geninfo_unexecuted_blocks=1 00:41:02.343 00:41:02.343 ' 00:41:02.343 14:56:54 nvmf_dif -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:41:02.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:02.343 --rc genhtml_branch_coverage=1 00:41:02.343 --rc genhtml_function_coverage=1 00:41:02.343 --rc genhtml_legend=1 00:41:02.343 --rc geninfo_all_blocks=1 00:41:02.343 --rc geninfo_unexecuted_blocks=1 00:41:02.343 00:41:02.343 ' 00:41:02.343 14:56:54 nvmf_dif -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:41:02.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:02.343 --rc genhtml_branch_coverage=1 00:41:02.343 --rc genhtml_function_coverage=1 00:41:02.343 --rc genhtml_legend=1 00:41:02.343 --rc geninfo_all_blocks=1 00:41:02.343 --rc geninfo_unexecuted_blocks=1 00:41:02.343 00:41:02.343 ' 00:41:02.343 14:56:54 nvmf_dif -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:41:02.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:02.343 --rc genhtml_branch_coverage=1 00:41:02.343 --rc genhtml_function_coverage=1 00:41:02.343 --rc genhtml_legend=1 00:41:02.343 --rc geninfo_all_blocks=1 00:41:02.343 --rc geninfo_unexecuted_blocks=1 00:41:02.343 00:41:02.343 ' 00:41:02.343 14:56:54 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:02.344 14:56:54 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:41:02.344 14:56:54 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:02.344 14:56:54 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:02.344 14:56:54 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:02.344 14:56:54 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:02.344 14:56:54 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:02.344 14:56:54 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:02.344 14:56:54 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:02.344 14:56:54 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:02.344 14:56:54 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:02.344 14:56:54 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:02.602 14:56:54 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:41:02.602 14:56:54 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:41:02.602 14:56:54 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:02.602 14:56:54 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:02.602 14:56:54 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:02.602 14:56:54 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:02.602 14:56:54 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:02.602 14:56:54 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:41:02.602 14:56:54 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:02.602 14:56:54 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:02.602 14:56:54 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:02.602 14:56:54 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:02.602 14:56:54 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:02.602 14:56:54 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:02.602 14:56:54 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:41:02.602 14:56:54 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:02.602 14:56:54 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:41:02.603 14:56:54 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:02.603 14:56:54 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:02.603 14:56:54 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:02.603 14:56:54 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:02.603 14:56:54 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:02.603 14:56:54 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:02.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:02.603 14:56:54 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:02.603 14:56:54 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:02.603 14:56:54 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:02.603 14:56:54 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:41:02.603 14:56:54 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:41:02.603 14:56:54 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:41:02.603 14:56:54 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:41:02.603 14:56:54 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:41:02.603 14:56:54 nvmf_dif -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:41:02.603 14:56:54 nvmf_dif -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:02.603 14:56:54 nvmf_dif -- nvmf/common.sh@472 -- # prepare_net_devs 00:41:02.603 14:56:54 nvmf_dif -- nvmf/common.sh@434 -- # local -g is_hw=no 00:41:02.603 14:56:54 nvmf_dif -- nvmf/common.sh@436 -- # remove_spdk_ns 00:41:02.603 14:56:54 nvmf_dif -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:02.603 14:56:54 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:02.603 14:56:54 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:02.603 14:56:54 nvmf_dif -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:41:02.603 14:56:54 nvmf_dif -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:41:02.603 14:56:54 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:41:02.603 14:56:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:04.504 14:56:56 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:04.504 14:56:56 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:41:04.504 14:56:56 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:04.504 14:56:56 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:04.504 14:56:56 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:04.504 14:56:56 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:04.504 14:56:56 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:04.504 14:56:56 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:41:04.505 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:41:04.505 14:56:56 nvmf_dif 
-- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:41:04.505 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@414 -- # [[ up == up ]] 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:41:04.505 Found net devices under 0000:0a:00.0: cvl_0_0 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@414 -- # [[ up == up ]] 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:41:04.505 Found net devices under 0000:0a:00.1: cvl_0_1 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@438 -- # is_hw=yes 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:04.505 
14:56:56 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:04.505 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:04.505 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.275 ms 00:41:04.505 00:41:04.505 --- 10.0.0.2 ping statistics --- 00:41:04.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:04.505 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:04.505 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:04.505 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:41:04.505 00:41:04.505 --- 10.0.0.1 ping statistics --- 00:41:04.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:04.505 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@446 -- # return 0 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@474 -- # '[' iso == iso ']' 00:41:04.505 14:56:56 nvmf_dif -- nvmf/common.sh@475 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:41:05.881 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:41:05.881 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:41:05.881 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:41:05.881 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:41:05.881 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:41:05.881 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:41:05.881 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:41:05.881 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:41:05.881 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:41:05.881 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:41:05.881 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:41:05.881 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:41:05.881 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:41:05.881 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:41:05.881 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:41:05.881 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:41:05.881 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:41:05.881 14:56:57 nvmf_dif -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:05.881 14:56:57 nvmf_dif -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:41:05.881 14:56:57 nvmf_dif -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:41:05.881 14:56:57 nvmf_dif -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:05.881 14:56:57 nvmf_dif -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:41:05.881 14:56:57 nvmf_dif -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:41:05.881 14:56:57 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:41:05.881 14:56:57 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:41:05.881 14:56:57 nvmf_dif -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:41:05.881 14:56:57 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:05.881 14:56:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:05.881 14:56:57 nvmf_dif -- nvmf/common.sh@505 -- # nvmfpid=1596542 00:41:05.881 14:56:57 nvmf_dif -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:41:05.881 14:56:57 nvmf_dif -- nvmf/common.sh@506 -- # waitforlisten 1596542 00:41:05.881 14:56:57 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 1596542 ']' 00:41:05.881 14:56:57 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:05.881 14:56:57 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:05.881 14:56:57 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:41:05.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:05.881 14:56:57 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:05.881 14:56:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:05.881 [2024-11-02 14:56:57.899960] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:41:05.881 [2024-11-02 14:56:57.900042] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:06.140 [2024-11-02 14:56:57.969330] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:06.140 [2024-11-02 14:56:58.065724] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:06.140 [2024-11-02 14:56:58.065799] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:06.140 [2024-11-02 14:56:58.065815] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:06.140 [2024-11-02 14:56:58.065828] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:06.140 [2024-11-02 14:56:58.065840] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:06.140 [2024-11-02 14:56:58.065873] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:41:06.140 14:56:58 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:06.140 14:56:58 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:41:06.140 14:56:58 nvmf_dif -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:41:06.140 14:56:58 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:06.140 14:56:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:06.399 14:56:58 nvmf_dif -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:06.399 14:56:58 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:41:06.399 14:56:58 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:41:06.399 14:56:58 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:06.399 14:56:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:06.399 [2024-11-02 14:56:58.216653] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:06.399 14:56:58 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:06.399 14:56:58 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:41:06.399 14:56:58 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:41:06.399 14:56:58 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:06.399 14:56:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:06.399 ************************************ 00:41:06.399 START TEST fio_dif_1_default 00:41:06.399 ************************************ 00:41:06.399 14:56:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:41:06.399 14:56:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:41:06.399 14:56:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:41:06.399 14:56:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:41:06.399 14:56:58 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:41:06.399 14:56:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:41:06.399 14:56:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:41:06.399 14:56:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:06.399 14:56:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:06.399 bdev_null0 00:41:06.399 14:56:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:06.399 14:56:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:06.399 14:56:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:06.399 14:56:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:06.399 14:56:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:06.399 14:56:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:06.399 14:56:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:06.399 14:56:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:06.399 14:56:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:06.399 14:56:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:06.399 14:56:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:06.399 14:56:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:06.399 [2024-11-02 14:56:58.276952] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:06.399 14:56:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:06.399 14:56:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:41:06.399 14:56:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:41:06.399 14:56:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:41:06.399 14:56:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # config=() 00:41:06.399 14:56:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # local subsystem config 00:41:06.399 14:56:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:41:06.399 14:56:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:06.399 14:56:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:41:06.399 { 00:41:06.399 "params": { 00:41:06.399 "name": "Nvme$subsystem", 00:41:06.399 "trtype": "$TEST_TRANSPORT", 00:41:06.399 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:06.399 "adrfam": "ipv4", 00:41:06.399 "trsvcid": "$NVMF_PORT", 00:41:06.399 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:06.399 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:06.399 "hdgst": ${hdgst:-false}, 00:41:06.399 "ddgst": ${ddgst:-false} 00:41:06.399 }, 00:41:06.399 "method": "bdev_nvme_attach_controller" 00:41:06.399 } 00:41:06.399 EOF 00:41:06.399 )") 00:41:06.399 14:56:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 
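For reference, the subsystem wiring that create_subsystems/create_subsystem set up above can be reproduced against an already-running nvmf_tgt with plain rpc.py calls. A minimal sketch, assuming SPDK's ./scripts/rpc.py and the default /var/tmp/spdk.sock RPC socket; the flags simply mirror the rpc_cmd invocations recorded in the trace:

# transport with DIF insert/strip, as created by target/dif.sh@50 earlier in the trace
./scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
# 64 MB null bdev, 512-byte blocks, 16-byte metadata, DIF type 1
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
# subsystem, namespace and TCP listener for the fio initiator side
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420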
00:41:06.399 14:56:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:06.399 14:56:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:41:06.399 14:56:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:41:06.399 14:56:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:41:06.399 14:56:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:06.399 14:56:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:41:06.399 14:56:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:06.399 14:56:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:41:06.399 14:56:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@578 -- # cat 00:41:06.399 14:56:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:41:06.399 14:56:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:06.399 14:56:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:41:06.399 14:56:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:41:06.399 14:56:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:06.399 14:56:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:41:06.399 14:56:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:06.399 14:56:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # jq . 
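The heredoc assembled above becomes the bdev_nvme_attach_controller configuration that fio's spdk_bdev plugin reads from /dev/fd/62 (the rendered JSON is printed just below). Outside the harness the same run can be approximated by writing that JSON to a file and invoking fio directly. A sketch only: the "subsystems" wrapper, the plugin/binary paths, the Nvme0n1 bdev name and the 10 s time-based runtime are assumptions rather than values shown verbatim in the trace.

# standalone JSON config roughly equivalent to what gen_nvmf_target_json emits
cat > /tmp/nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# same 4k randread, iodepth=4 workload through the external bdev ioengine
# (~10 s time-based run, matching the 10002 msec runtime reported below)
LD_PRELOAD=./spdk/build/fio/spdk_bdev /usr/src/fio/fio --name=filename0 \
    --ioengine=spdk_bdev --spdk_json_conf=/tmp/nvme0.json --thread=1 \
    --filename=Nvme0n1 --rw=randread --bs=4k --iodepth=4 \
    --time_based=1 --runtime=10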
00:41:06.399 14:56:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@581 -- # IFS=, 00:41:06.399 14:56:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:41:06.399 "params": { 00:41:06.399 "name": "Nvme0", 00:41:06.399 "trtype": "tcp", 00:41:06.399 "traddr": "10.0.0.2", 00:41:06.399 "adrfam": "ipv4", 00:41:06.399 "trsvcid": "4420", 00:41:06.399 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:06.400 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:06.400 "hdgst": false, 00:41:06.400 "ddgst": false 00:41:06.400 }, 00:41:06.400 "method": "bdev_nvme_attach_controller" 00:41:06.400 }' 00:41:06.400 14:56:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:06.400 14:56:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:06.400 14:56:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:06.400 14:56:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:06.400 14:56:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:41:06.400 14:56:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:06.400 14:56:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:06.400 14:56:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:06.400 14:56:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:06.400 14:56:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:06.657 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:41:06.657 fio-3.35 00:41:06.657 Starting 1 thread 00:41:18.855 00:41:18.855 filename0: (groupid=0, jobs=1): err= 0: pid=1596773: Sat Nov 2 14:57:09 2024 00:41:18.855 read: IOPS=189, BW=758KiB/s (776kB/s)(7584KiB/10002msec) 00:41:18.855 slat (nsec): min=4041, max=50700, avg=9858.59, stdev=4476.12 00:41:18.855 clat (usec): min=729, max=47259, avg=21068.53, stdev=20193.76 00:41:18.855 lat (usec): min=737, max=47277, avg=21078.38, stdev=20194.19 00:41:18.855 clat percentiles (usec): 00:41:18.855 | 1.00th=[ 750], 5.00th=[ 758], 10.00th=[ 775], 20.00th=[ 791], 00:41:18.855 | 30.00th=[ 799], 40.00th=[ 816], 50.00th=[41157], 60.00th=[41157], 00:41:18.855 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:41:18.855 | 99.00th=[41157], 99.50th=[41157], 99.90th=[47449], 99.95th=[47449], 00:41:18.855 | 99.99th=[47449] 00:41:18.855 bw ( KiB/s): min= 672, max= 768, per=100.00%, avg=759.58, stdev=25.78, samples=19 00:41:18.855 iops : min= 168, max= 192, avg=189.89, stdev= 6.45, samples=19 00:41:18.855 lat (usec) : 750=1.64%, 1000=47.94% 00:41:18.855 lat (msec) : 2=0.21%, 50=50.21% 00:41:18.855 cpu : usr=90.76%, sys=8.93%, ctx=21, majf=0, minf=219 00:41:18.855 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:18.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:18.855 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:18.855 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:18.855 latency : target=0, window=0, percentile=100.00%, depth=4 
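Once the job completes, the harness tears down what it created (destroy_subsystems 0, visible right after the run status below). Done by hand, the same two RPCs apply; a sketch assuming the default RPC socket:

# teardown mirroring destroy_subsystems 0
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
./scripts/rpc.py bdev_null_delete bdev_null0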
00:41:18.855 00:41:18.855 Run status group 0 (all jobs): 00:41:18.855 READ: bw=758KiB/s (776kB/s), 758KiB/s-758KiB/s (776kB/s-776kB/s), io=7584KiB (7766kB), run=10002-10002msec 00:41:18.855 14:57:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:41:18.855 14:57:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:41:18.855 14:57:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:41:18.855 14:57:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:18.855 14:57:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:41:18.855 14:57:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:18.855 14:57:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:18.855 14:57:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:18.855 14:57:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:18.855 14:57:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:18.855 14:57:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:18.855 14:57:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:18.855 14:57:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:18.855 00:41:18.855 real 0m11.283s 00:41:18.855 user 0m10.446s 00:41:18.855 sys 0m1.199s 00:41:18.855 14:57:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:18.855 14:57:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:18.855 ************************************ 00:41:18.855 END TEST fio_dif_1_default 00:41:18.855 ************************************ 00:41:18.855 14:57:09 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:41:18.855 14:57:09 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:41:18.855 14:57:09 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:18.855 14:57:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:18.855 ************************************ 00:41:18.855 START TEST fio_dif_1_multi_subsystems 00:41:18.855 ************************************ 00:41:18.855 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:41:18.855 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:41:18.855 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:41:18.855 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:41:18.855 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:41:18.855 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:41:18.855 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:41:18.855 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:41:18.855 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:18.855 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:18.855 bdev_null0 00:41:18.855 14:57:09 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:18.855 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:18.855 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:18.855 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:18.855 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:18.855 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:18.855 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:18.855 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:18.855 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:18.855 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:18.855 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:18.855 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:18.855 [2024-11-02 14:57:09.609473] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:18.855 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:18.855 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:41:18.855 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:41:18.855 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:41:18.855 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:41:18.855 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:18.855 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:18.855 bdev_null1 00:41:18.855 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:18.855 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:41:18.855 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:18.855 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:18.855 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:18.855 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:41:18.855 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:18.855 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:18.855 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:18.855 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:18.855 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:18.855 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:18.855 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:18.855 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:41:18.855 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:41:18.855 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:41:18.855 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # config=() 00:41:18.855 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # local subsystem config 00:41:18.855 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:41:18.855 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:18.855 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:41:18.855 { 00:41:18.855 "params": { 00:41:18.855 "name": "Nvme$subsystem", 00:41:18.855 "trtype": "$TEST_TRANSPORT", 00:41:18.856 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:18.856 "adrfam": "ipv4", 00:41:18.856 "trsvcid": "$NVMF_PORT", 00:41:18.856 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:18.856 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:18.856 "hdgst": ${hdgst:-false}, 00:41:18.856 "ddgst": ${ddgst:-false} 00:41:18.856 }, 00:41:18.856 "method": "bdev_nvme_attach_controller" 00:41:18.856 } 00:41:18.856 EOF 00:41:18.856 )") 00:41:18.856 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:41:18.856 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:18.856 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:41:18.856 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:41:18.856 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:41:18.856 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:18.856 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:41:18.856 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:18.856 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:41:18.856 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:41:18.856 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:18.856 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # cat 00:41:18.856 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:41:18.856 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:18.856 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:41:18.856 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:41:18.856 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:41:18.856 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:18.856 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:41:18.856 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:41:18.856 { 00:41:18.856 "params": { 00:41:18.856 "name": "Nvme$subsystem", 00:41:18.856 "trtype": "$TEST_TRANSPORT", 00:41:18.856 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:18.856 "adrfam": "ipv4", 00:41:18.856 "trsvcid": "$NVMF_PORT", 00:41:18.856 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:18.856 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:18.856 "hdgst": ${hdgst:-false}, 00:41:18.856 "ddgst": ${ddgst:-false} 00:41:18.856 }, 00:41:18.856 "method": "bdev_nvme_attach_controller" 00:41:18.856 } 00:41:18.856 EOF 00:41:18.856 )") 00:41:18.856 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:41:18.856 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:41:18.856 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # cat 00:41:18.856 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # jq . 00:41:18.856 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@581 -- # IFS=, 00:41:18.856 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:41:18.856 "params": { 00:41:18.856 "name": "Nvme0", 00:41:18.856 "trtype": "tcp", 00:41:18.856 "traddr": "10.0.0.2", 00:41:18.856 "adrfam": "ipv4", 00:41:18.856 "trsvcid": "4420", 00:41:18.856 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:18.856 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:18.856 "hdgst": false, 00:41:18.856 "ddgst": false 00:41:18.856 }, 00:41:18.856 "method": "bdev_nvme_attach_controller" 00:41:18.856 },{ 00:41:18.856 "params": { 00:41:18.856 "name": "Nvme1", 00:41:18.856 "trtype": "tcp", 00:41:18.856 "traddr": "10.0.0.2", 00:41:18.856 "adrfam": "ipv4", 00:41:18.856 "trsvcid": "4420", 00:41:18.856 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:18.856 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:18.856 "hdgst": false, 00:41:18.856 "ddgst": false 00:41:18.856 }, 00:41:18.856 "method": "bdev_nvme_attach_controller" 00:41:18.856 }' 00:41:18.856 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:18.856 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:18.856 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:18.856 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:18.856 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:41:18.856 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:18.856 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1345 -- # asan_lib= 00:41:18.856 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:18.856 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:18.856 14:57:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:18.856 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:41:18.856 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:41:18.856 fio-3.35 00:41:18.856 Starting 2 threads 00:41:28.872 00:41:28.872 filename0: (groupid=0, jobs=1): err= 0: pid=1598789: Sat Nov 2 14:57:20 2024 00:41:28.872 read: IOPS=95, BW=384KiB/s (393kB/s)(3840KiB/10001msec) 00:41:28.872 slat (nsec): min=7254, max=37251, avg=9815.73, stdev=3431.92 00:41:28.872 clat (usec): min=40906, max=44772, avg=41639.17, stdev=525.66 00:41:28.872 lat (usec): min=40913, max=44797, avg=41648.99, stdev=526.06 00:41:28.872 clat percentiles (usec): 00:41:28.872 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:41:28.872 | 30.00th=[41157], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:41:28.872 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:41:28.872 | 99.00th=[42730], 99.50th=[43254], 99.90th=[44827], 99.95th=[44827], 00:41:28.872 | 99.99th=[44827] 00:41:28.872 bw ( KiB/s): min= 352, max= 416, per=33.57%, avg=384.00, stdev=10.67, samples=19 00:41:28.872 iops : min= 88, max= 104, avg=96.00, stdev= 2.67, samples=19 00:41:28.872 lat (msec) : 50=100.00% 00:41:28.872 cpu : usr=95.07%, sys=4.61%, ctx=17, majf=0, minf=63 00:41:28.872 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:28.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:28.872 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:28.872 issued rwts: total=960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:28.872 latency : target=0, window=0, percentile=100.00%, depth=4 00:41:28.872 filename1: (groupid=0, jobs=1): err= 0: pid=1598790: Sat Nov 2 14:57:20 2024 00:41:28.872 read: IOPS=189, BW=760KiB/s (778kB/s)(7600KiB/10001msec) 00:41:28.872 slat (nsec): min=7254, max=35258, avg=9412.07, stdev=3011.54 00:41:28.872 clat (usec): min=804, max=43802, avg=21024.92, stdev=20125.78 00:41:28.872 lat (usec): min=811, max=43828, avg=21034.33, stdev=20125.59 00:41:28.872 clat percentiles (usec): 00:41:28.872 | 1.00th=[ 824], 5.00th=[ 840], 10.00th=[ 848], 20.00th=[ 857], 00:41:28.872 | 30.00th=[ 865], 40.00th=[ 873], 50.00th=[41157], 60.00th=[41157], 00:41:28.872 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:41:28.872 | 99.00th=[41157], 99.50th=[41681], 99.90th=[43779], 99.95th=[43779], 00:41:28.872 | 99.99th=[43779] 00:41:28.872 bw ( KiB/s): min= 672, max= 768, per=66.35%, avg=759.58, stdev=25.78, samples=19 00:41:28.872 iops : min= 168, max= 192, avg=189.89, stdev= 6.45, samples=19 00:41:28.872 lat (usec) : 1000=49.26% 00:41:28.872 lat (msec) : 2=0.63%, 50=50.11% 00:41:28.872 cpu : usr=95.14%, sys=4.55%, ctx=11, majf=0, minf=173 00:41:28.872 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:28.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:41:28.872 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:28.872 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:28.872 latency : target=0, window=0, percentile=100.00%, depth=4 00:41:28.872 00:41:28.872 Run status group 0 (all jobs): 00:41:28.872 READ: bw=1144KiB/s (1171kB/s), 384KiB/s-760KiB/s (393kB/s-778kB/s), io=11.2MiB (11.7MB), run=10001-10001msec 00:41:29.181 14:57:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:41:29.181 14:57:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:41:29.181 14:57:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:41:29.181 14:57:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:29.181 14:57:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:41:29.181 14:57:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:29.181 14:57:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:29.181 14:57:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:29.181 14:57:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:29.181 14:57:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:29.181 14:57:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:29.181 14:57:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:29.181 14:57:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:29.181 14:57:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:41:29.181 14:57:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:41:29.181 14:57:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:41:29.181 14:57:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:29.181 14:57:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:29.181 14:57:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:29.181 14:57:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:29.181 14:57:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:41:29.181 14:57:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:29.181 14:57:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:29.181 14:57:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:29.181 00:41:29.181 real 0m11.523s 00:41:29.181 user 0m20.502s 00:41:29.181 sys 0m1.263s 00:41:29.181 14:57:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:29.181 14:57:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:29.181 ************************************ 00:41:29.181 END TEST fio_dif_1_multi_subsystems 00:41:29.181 ************************************ 00:41:29.181 14:57:21 nvmf_dif -- target/dif.sh@143 -- # run_test 
fio_dif_rand_params fio_dif_rand_params 00:41:29.181 14:57:21 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:41:29.181 14:57:21 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:29.181 14:57:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:29.181 ************************************ 00:41:29.181 START TEST fio_dif_rand_params 00:41:29.181 ************************************ 00:41:29.181 14:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:41:29.181 14:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:41:29.181 14:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:41:29.181 14:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:41:29.181 14:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:41:29.181 14:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:41:29.181 14:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:41:29.181 14:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:41:29.181 14:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:41:29.181 14:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:41:29.181 14:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:29.181 14:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:41:29.181 14:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:41:29.181 14:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:41:29.181 14:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:29.181 14:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:29.181 bdev_null0 00:41:29.181 14:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:29.181 14:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:29.181 14:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:29.181 14:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:29.181 14:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:29.181 14:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:29.181 14:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:29.181 14:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:29.181 14:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:29.182 14:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:29.182 14:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:29.182 14:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:29.182 [2024-11-02 14:57:21.184678] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:29.182 
14:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:29.182 14:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:41:29.182 14:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:41:29.182 14:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:41:29.182 14:57:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:41:29.182 14:57:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # local subsystem config 00:41:29.182 14:57:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:41:29.182 14:57:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:41:29.182 { 00:41:29.182 "params": { 00:41:29.182 "name": "Nvme$subsystem", 00:41:29.182 "trtype": "$TEST_TRANSPORT", 00:41:29.182 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:29.182 "adrfam": "ipv4", 00:41:29.182 "trsvcid": "$NVMF_PORT", 00:41:29.182 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:29.182 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:29.182 "hdgst": ${hdgst:-false}, 00:41:29.182 "ddgst": ${ddgst:-false} 00:41:29.182 }, 00:41:29.182 "method": "bdev_nvme_attach_controller" 00:41:29.182 } 00:41:29.182 EOF 00:41:29.182 )") 00:41:29.182 14:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:29.182 14:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:29.182 14:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:41:29.182 14:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:41:29.182 14:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:29.182 14:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:41:29.182 14:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:41:29.182 14:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:41:29.182 14:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:29.182 14:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:41:29.182 14:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:41:29.182 14:57:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:41:29.182 14:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:29.182 14:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:29.182 14:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:41:29.182 14:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:29.182 14:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:41:29.182 14:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:29.182 14:57:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # jq . 
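For fio_dif_rand_params the backing bdev is re-created with DIF type 3 (bdev_null_create ... --dif-type 3 above) while the transport keeps --dif-insert-or-strip, so the protection information is inserted and checked on the target side rather than supplied by the host. A sketch of the equivalent manual bdev setup plus an inspection call, again assuming ./scripts/rpc.py and the default socket; bdev_get_bdevs is a standard RPC, though its output is not shown anywhere in this trace:

# DIF type 3 null bdev, as created by create_subsystem 0 in this test
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
# inspect the resulting bdev's block size, metadata size and DIF settings
./scripts/rpc.py bdev_get_bdevs -b bdev_null0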
00:41:29.182 14:57:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=, 00:41:29.182 14:57:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:41:29.182 "params": { 00:41:29.182 "name": "Nvme0", 00:41:29.182 "trtype": "tcp", 00:41:29.182 "traddr": "10.0.0.2", 00:41:29.182 "adrfam": "ipv4", 00:41:29.182 "trsvcid": "4420", 00:41:29.182 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:29.182 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:29.182 "hdgst": false, 00:41:29.182 "ddgst": false 00:41:29.182 }, 00:41:29.182 "method": "bdev_nvme_attach_controller" 00:41:29.182 }' 00:41:29.182 14:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:29.182 14:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:29.182 14:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:29.182 14:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:29.182 14:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:41:29.182 14:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:29.440 14:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:29.440 14:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:29.440 14:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:29.440 14:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:29.440 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:41:29.440 ... 
00:41:29.440 fio-3.35 00:41:29.440 Starting 3 threads 00:41:36.006 00:41:36.006 filename0: (groupid=0, jobs=1): err= 0: pid=1600190: Sat Nov 2 14:57:27 2024 00:41:36.006 read: IOPS=207, BW=25.9MiB/s (27.2MB/s)(130MiB/5006msec) 00:41:36.006 slat (nsec): min=5777, max=42515, avg=12842.21, stdev=2500.22 00:41:36.006 clat (usec): min=5122, max=91755, avg=14446.85, stdev=12704.04 00:41:36.006 lat (usec): min=5135, max=91767, avg=14459.69, stdev=12703.98 00:41:36.006 clat percentiles (usec): 00:41:36.006 | 1.00th=[ 5604], 5.00th=[ 6063], 10.00th=[ 6980], 20.00th=[ 8225], 00:41:36.006 | 30.00th=[ 9110], 40.00th=[ 9896], 50.00th=[10814], 60.00th=[11600], 00:41:36.006 | 70.00th=[12387], 80.00th=[13698], 90.00th=[17695], 95.00th=[50594], 00:41:36.006 | 99.00th=[53740], 99.50th=[54264], 99.90th=[89654], 99.95th=[91751], 00:41:36.006 | 99.99th=[91751] 00:41:36.006 bw ( KiB/s): min=19456, max=36096, per=34.04%, avg=26496.00, stdev=6191.52, samples=10 00:41:36.006 iops : min= 152, max= 282, avg=207.00, stdev=48.37, samples=10 00:41:36.006 lat (msec) : 10=40.85%, 20=49.61%, 50=3.47%, 100=6.07% 00:41:36.006 cpu : usr=93.45%, sys=6.09%, ctx=10, majf=0, minf=99 00:41:36.006 IO depths : 1=0.9%, 2=99.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:36.006 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.006 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.006 issued rwts: total=1038,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:36.006 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:36.006 filename0: (groupid=0, jobs=1): err= 0: pid=1600191: Sat Nov 2 14:57:27 2024 00:41:36.006 read: IOPS=204, BW=25.5MiB/s (26.8MB/s)(129MiB/5044msec) 00:41:36.006 slat (usec): min=4, max=101, avg=13.38, stdev= 4.55 00:41:36.006 clat (usec): min=5281, max=56797, avg=14633.31, stdev=12597.06 00:41:36.006 lat (usec): min=5294, max=56810, avg=14646.69, stdev=12596.85 00:41:36.006 clat percentiles (usec): 00:41:36.006 | 1.00th=[ 5604], 5.00th=[ 6063], 10.00th=[ 6456], 20.00th=[ 8586], 00:41:36.006 | 30.00th=[ 9241], 40.00th=[10028], 50.00th=[10945], 60.00th=[11994], 00:41:36.006 | 70.00th=[12911], 80.00th=[13960], 90.00th=[44827], 95.00th=[51643], 00:41:36.006 | 99.00th=[54264], 99.50th=[54789], 99.90th=[55313], 99.95th=[56886], 00:41:36.006 | 99.99th=[56886] 00:41:36.006 bw ( KiB/s): min=18944, max=33536, per=33.78%, avg=26291.20, stdev=5081.53, samples=10 00:41:36.006 iops : min= 148, max= 262, avg=205.40, stdev=39.70, samples=10 00:41:36.006 lat (msec) : 10=40.19%, 20=49.71%, 50=3.30%, 100=6.80% 00:41:36.006 cpu : usr=92.58%, sys=6.96%, ctx=20, majf=0, minf=169 00:41:36.006 IO depths : 1=1.2%, 2=98.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:36.006 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.006 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.006 issued rwts: total=1030,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:36.006 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:36.006 filename0: (groupid=0, jobs=1): err= 0: pid=1600192: Sat Nov 2 14:57:27 2024 00:41:36.006 read: IOPS=198, BW=24.8MiB/s (26.0MB/s)(125MiB/5044msec) 00:41:36.006 slat (nsec): min=4790, max=34059, avg=12288.99, stdev=2191.71 00:41:36.006 clat (usec): min=4924, max=95356, avg=15086.07, stdev=13787.98 00:41:36.006 lat (usec): min=4936, max=95368, avg=15098.36, stdev=13788.11 00:41:36.006 clat percentiles (usec): 00:41:36.006 | 1.00th=[ 5473], 5.00th=[ 5735], 10.00th=[ 5997], 20.00th=[ 
7701], 00:41:36.006 | 30.00th=[ 8848], 40.00th=[ 9634], 50.00th=[11207], 60.00th=[12256], 00:41:36.006 | 70.00th=[13435], 80.00th=[14877], 90.00th=[47973], 95.00th=[51643], 00:41:36.006 | 99.00th=[55837], 99.50th=[58459], 99.90th=[94897], 99.95th=[94897], 00:41:36.006 | 99.99th=[94897] 00:41:36.006 bw ( KiB/s): min=17408, max=39424, per=32.79%, avg=25523.20, stdev=6518.99, samples=10 00:41:36.006 iops : min= 136, max= 308, avg=199.40, stdev=50.93, samples=10 00:41:36.006 lat (msec) : 10=43.54%, 20=45.55%, 50=4.00%, 100=6.91% 00:41:36.006 cpu : usr=92.74%, sys=6.80%, ctx=16, majf=0, minf=105 00:41:36.006 IO depths : 1=1.1%, 2=98.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:36.006 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.006 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.006 issued rwts: total=999,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:36.006 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:36.006 00:41:36.006 Run status group 0 (all jobs): 00:41:36.006 READ: bw=76.0MiB/s (79.7MB/s), 24.8MiB/s-25.9MiB/s (26.0MB/s-27.2MB/s), io=383MiB (402MB), run=5006-5044msec 00:41:36.006 14:57:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:41:36.006 14:57:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:41:36.006 14:57:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:36.006 14:57:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:36.006 14:57:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:41:36.006 14:57:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:36.006 14:57:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:36.006 14:57:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:36.006 14:57:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:36.006 14:57:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:36.006 14:57:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:36.006 14:57:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:36.006 14:57:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:36.006 14:57:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:41:36.006 14:57:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:41:36.006 14:57:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:41:36.006 14:57:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:41:36.006 14:57:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:41:36.006 14:57:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:41:36.006 14:57:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:36.007 bdev_null0 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:36.007 [2024-11-02 14:57:27.317694] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:36.007 bdev_null1 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:36.007 bdev_null2 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # local subsystem config 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:41:36.007 { 00:41:36.007 "params": { 00:41:36.007 "name": 
"Nvme$subsystem", 00:41:36.007 "trtype": "$TEST_TRANSPORT", 00:41:36.007 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:36.007 "adrfam": "ipv4", 00:41:36.007 "trsvcid": "$NVMF_PORT", 00:41:36.007 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:36.007 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:36.007 "hdgst": ${hdgst:-false}, 00:41:36.007 "ddgst": ${ddgst:-false} 00:41:36.007 }, 00:41:36.007 "method": "bdev_nvme_attach_controller" 00:41:36.007 } 00:41:36.007 EOF 00:41:36.007 )") 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:41:36.007 { 00:41:36.007 "params": { 00:41:36.007 "name": "Nvme$subsystem", 00:41:36.007 "trtype": "$TEST_TRANSPORT", 00:41:36.007 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:36.007 "adrfam": "ipv4", 00:41:36.007 "trsvcid": "$NVMF_PORT", 00:41:36.007 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:36.007 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:36.007 "hdgst": ${hdgst:-false}, 00:41:36.007 "ddgst": ${ddgst:-false} 00:41:36.007 }, 00:41:36.007 "method": "bdev_nvme_attach_controller" 00:41:36.007 } 00:41:36.007 EOF 00:41:36.007 )") 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file <= files )) 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:41:36.007 14:57:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:41:36.007 { 00:41:36.007 "params": { 00:41:36.007 "name": "Nvme$subsystem", 00:41:36.007 "trtype": "$TEST_TRANSPORT", 00:41:36.007 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:36.007 "adrfam": "ipv4", 00:41:36.007 "trsvcid": "$NVMF_PORT", 00:41:36.008 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:36.008 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:36.008 "hdgst": ${hdgst:-false}, 00:41:36.008 "ddgst": ${ddgst:-false} 00:41:36.008 }, 00:41:36.008 "method": "bdev_nvme_attach_controller" 00:41:36.008 } 00:41:36.008 EOF 00:41:36.008 )") 00:41:36.008 14:57:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:41:36.008 14:57:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:36.008 14:57:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:41:36.008 14:57:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # jq . 00:41:36.008 14:57:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=, 00:41:36.008 14:57:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:41:36.008 "params": { 00:41:36.008 "name": "Nvme0", 00:41:36.008 "trtype": "tcp", 00:41:36.008 "traddr": "10.0.0.2", 00:41:36.008 "adrfam": "ipv4", 00:41:36.008 "trsvcid": "4420", 00:41:36.008 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:36.008 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:36.008 "hdgst": false, 00:41:36.008 "ddgst": false 00:41:36.008 }, 00:41:36.008 "method": "bdev_nvme_attach_controller" 00:41:36.008 },{ 00:41:36.008 "params": { 00:41:36.008 "name": "Nvme1", 00:41:36.008 "trtype": "tcp", 00:41:36.008 "traddr": "10.0.0.2", 00:41:36.008 "adrfam": "ipv4", 00:41:36.008 "trsvcid": "4420", 00:41:36.008 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:36.008 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:36.008 "hdgst": false, 00:41:36.008 "ddgst": false 00:41:36.008 }, 00:41:36.008 "method": "bdev_nvme_attach_controller" 00:41:36.008 },{ 00:41:36.008 "params": { 00:41:36.008 "name": "Nvme2", 00:41:36.008 "trtype": "tcp", 00:41:36.008 "traddr": "10.0.0.2", 00:41:36.008 "adrfam": "ipv4", 00:41:36.008 "trsvcid": "4420", 00:41:36.008 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:41:36.008 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:41:36.008 "hdgst": false, 00:41:36.008 "ddgst": false 00:41:36.008 }, 00:41:36.008 "method": "bdev_nvme_attach_controller" 00:41:36.008 }' 00:41:36.008 14:57:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:36.008 14:57:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:36.008 14:57:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:36.008 14:57:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:36.008 14:57:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:41:36.008 14:57:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:36.008 14:57:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:36.008 14:57:27 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:36.008 14:57:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:36.008 14:57:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:36.008 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:41:36.008 ... 00:41:36.008 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:41:36.008 ... 00:41:36.008 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:41:36.008 ... 00:41:36.008 fio-3.35 00:41:36.008 Starting 24 threads 00:41:48.218 00:41:48.218 filename0: (groupid=0, jobs=1): err= 0: pid=1601051: Sat Nov 2 14:57:38 2024 00:41:48.218 read: IOPS=69, BW=278KiB/s (284kB/s)(2808KiB/10111msec) 00:41:48.218 slat (nsec): min=8104, max=98013, avg=22910.88, stdev=23167.39 00:41:48.218 clat (msec): min=150, max=391, avg=229.99, stdev=32.00 00:41:48.218 lat (msec): min=150, max=391, avg=230.02, stdev=32.01 00:41:48.218 clat percentiles (msec): 00:41:48.218 | 1.00th=[ 178], 5.00th=[ 190], 10.00th=[ 205], 20.00th=[ 211], 00:41:48.218 | 30.00th=[ 215], 40.00th=[ 218], 50.00th=[ 220], 60.00th=[ 226], 00:41:48.218 | 70.00th=[ 230], 80.00th=[ 255], 90.00th=[ 271], 95.00th=[ 288], 00:41:48.218 | 99.00th=[ 338], 99.50th=[ 347], 99.90th=[ 393], 99.95th=[ 393], 00:41:48.218 | 99.99th=[ 393] 00:41:48.218 bw ( KiB/s): min= 240, max= 368, per=4.75%, avg=274.40, stdev=40.97, samples=20 00:41:48.218 iops : min= 60, max= 92, avg=68.60, stdev=10.24, samples=20 00:41:48.218 lat (msec) : 250=77.21%, 500=22.79% 00:41:48.218 cpu : usr=98.44%, sys=1.15%, ctx=12, majf=0, minf=20 00:41:48.218 IO depths : 1=0.9%, 2=7.1%, 4=25.1%, 8=55.4%, 16=11.5%, 32=0.0%, >=64=0.0% 00:41:48.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.218 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.218 issued rwts: total=702,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:48.218 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:48.218 filename0: (groupid=0, jobs=1): err= 0: pid=1601052: Sat Nov 2 14:57:38 2024 00:41:48.218 read: IOPS=74, BW=297KiB/s (305kB/s)(3008KiB/10111msec) 00:41:48.218 slat (usec): min=7, max=142, avg=19.41, stdev=19.82 00:41:48.218 clat (msec): min=84, max=274, avg=213.92, stdev=29.92 00:41:48.218 lat (msec): min=84, max=274, avg=213.94, stdev=29.92 00:41:48.218 clat percentiles (msec): 00:41:48.218 | 1.00th=[ 86], 5.00th=[ 167], 10.00th=[ 180], 20.00th=[ 199], 00:41:48.218 | 30.00th=[ 209], 40.00th=[ 213], 50.00th=[ 218], 60.00th=[ 220], 00:41:48.218 | 70.00th=[ 228], 80.00th=[ 230], 90.00th=[ 243], 95.00th=[ 264], 00:41:48.218 | 99.00th=[ 271], 99.50th=[ 271], 99.90th=[ 275], 99.95th=[ 275], 00:41:48.218 | 99.99th=[ 275] 00:41:48.218 bw ( KiB/s): min= 256, max= 384, per=5.09%, avg=294.40, stdev=55.28, samples=20 00:41:48.218 iops : min= 64, max= 96, avg=73.60, stdev=13.82, samples=20 00:41:48.218 lat (msec) : 100=2.13%, 250=90.16%, 500=7.71% 00:41:48.218 cpu : usr=98.21%, sys=1.24%, ctx=29, majf=0, minf=23 00:41:48.218 IO depths : 1=1.1%, 2=7.3%, 4=25.0%, 8=55.2%, 16=11.4%, 32=0.0%, >=64=0.0% 00:41:48.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:41:48.218 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.218 issued rwts: total=752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:48.218 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:48.218 filename0: (groupid=0, jobs=1): err= 0: pid=1601053: Sat Nov 2 14:57:38 2024 00:41:48.218 read: IOPS=61, BW=247KiB/s (253kB/s)(2496KiB/10099msec) 00:41:48.218 slat (usec): min=8, max=115, avg=41.21, stdev=31.59 00:41:48.218 clat (msec): min=202, max=337, avg=257.39, stdev=41.38 00:41:48.218 lat (msec): min=202, max=337, avg=257.43, stdev=41.40 00:41:48.218 clat percentiles (msec): 00:41:48.218 | 1.00th=[ 203], 5.00th=[ 211], 10.00th=[ 218], 20.00th=[ 220], 00:41:48.218 | 30.00th=[ 226], 40.00th=[ 232], 50.00th=[ 243], 60.00th=[ 264], 00:41:48.218 | 70.00th=[ 279], 80.00th=[ 313], 90.00th=[ 326], 95.00th=[ 330], 00:41:48.218 | 99.00th=[ 338], 99.50th=[ 338], 99.90th=[ 338], 99.95th=[ 338], 00:41:48.218 | 99.99th=[ 338] 00:41:48.218 bw ( KiB/s): min= 144, max= 368, per=4.21%, avg=243.20, stdev=50.22, samples=20 00:41:48.218 iops : min= 36, max= 92, avg=60.80, stdev=12.56, samples=20 00:41:48.218 lat (msec) : 250=54.81%, 500=45.19% 00:41:48.218 cpu : usr=98.11%, sys=1.41%, ctx=22, majf=0, minf=18 00:41:48.218 IO depths : 1=0.3%, 2=6.6%, 4=25.0%, 8=55.9%, 16=12.2%, 32=0.0%, >=64=0.0% 00:41:48.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.218 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.218 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:48.218 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:48.218 filename0: (groupid=0, jobs=1): err= 0: pid=1601054: Sat Nov 2 14:57:38 2024 00:41:48.218 read: IOPS=69, BW=280KiB/s (287kB/s)(2832KiB/10115msec) 00:41:48.218 slat (usec): min=8, max=265, avg=25.12, stdev=33.22 00:41:48.218 clat (msec): min=84, max=383, avg=227.52, stdev=41.78 00:41:48.218 lat (msec): min=84, max=383, avg=227.54, stdev=41.78 00:41:48.218 clat percentiles (msec): 00:41:48.218 | 1.00th=[ 86], 5.00th=[ 169], 10.00th=[ 199], 20.00th=[ 207], 00:41:48.218 | 30.00th=[ 211], 40.00th=[ 218], 50.00th=[ 220], 60.00th=[ 228], 00:41:48.218 | 70.00th=[ 243], 80.00th=[ 249], 90.00th=[ 279], 95.00th=[ 292], 00:41:48.218 | 99.00th=[ 372], 99.50th=[ 376], 99.90th=[ 384], 99.95th=[ 384], 00:41:48.218 | 99.99th=[ 384] 00:41:48.218 bw ( KiB/s): min= 224, max= 384, per=4.78%, avg=276.80, stdev=41.24, samples=20 00:41:48.218 iops : min= 56, max= 96, avg=69.20, stdev=10.31, samples=20 00:41:48.218 lat (msec) : 100=2.26%, 250=81.07%, 500=16.67% 00:41:48.218 cpu : usr=98.17%, sys=1.27%, ctx=29, majf=0, minf=22 00:41:48.218 IO depths : 1=1.3%, 2=3.5%, 4=12.7%, 8=71.0%, 16=11.4%, 32=0.0%, >=64=0.0% 00:41:48.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.218 complete : 0=0.0%, 4=90.5%, 8=4.2%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.218 issued rwts: total=708,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:48.218 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:48.218 filename0: (groupid=0, jobs=1): err= 0: pid=1601055: Sat Nov 2 14:57:38 2024 00:41:48.218 read: IOPS=52, BW=209KiB/s (214kB/s)(2112KiB/10088msec) 00:41:48.218 slat (usec): min=10, max=200, avg=33.26, stdev=22.28 00:41:48.218 clat (msec): min=144, max=421, avg=305.40, stdev=54.12 00:41:48.218 lat (msec): min=144, max=421, avg=305.43, stdev=54.12 00:41:48.218 clat percentiles (msec): 00:41:48.218 | 1.00th=[ 209], 5.00th=[ 213], 10.00th=[ 
218], 20.00th=[ 264], 00:41:48.218 | 30.00th=[ 279], 40.00th=[ 313], 50.00th=[ 326], 60.00th=[ 330], 00:41:48.218 | 70.00th=[ 334], 80.00th=[ 342], 90.00th=[ 368], 95.00th=[ 384], 00:41:48.218 | 99.00th=[ 409], 99.50th=[ 414], 99.90th=[ 422], 99.95th=[ 422], 00:41:48.218 | 99.99th=[ 422] 00:41:48.218 bw ( KiB/s): min= 128, max= 256, per=3.53%, avg=204.80, stdev=64.34, samples=20 00:41:48.218 iops : min= 32, max= 64, avg=51.20, stdev=16.08, samples=20 00:41:48.218 lat (msec) : 250=19.32%, 500=80.68% 00:41:48.218 cpu : usr=97.38%, sys=1.64%, ctx=129, majf=0, minf=25 00:41:48.218 IO depths : 1=4.7%, 2=11.0%, 4=25.0%, 8=51.5%, 16=7.8%, 32=0.0%, >=64=0.0% 00:41:48.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.218 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.218 issued rwts: total=528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:48.218 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:48.218 filename0: (groupid=0, jobs=1): err= 0: pid=1601056: Sat Nov 2 14:57:38 2024 00:41:48.218 read: IOPS=53, BW=215KiB/s (220kB/s)(2176KiB/10115msec) 00:41:48.218 slat (usec): min=11, max=147, avg=68.25, stdev=24.69 00:41:48.218 clat (msec): min=85, max=455, avg=296.87, stdev=70.80 00:41:48.218 lat (msec): min=85, max=455, avg=296.93, stdev=70.81 00:41:48.218 clat percentiles (msec): 00:41:48.218 | 1.00th=[ 86], 5.00th=[ 169], 10.00th=[ 209], 20.00th=[ 220], 00:41:48.218 | 30.00th=[ 279], 40.00th=[ 292], 50.00th=[ 321], 60.00th=[ 326], 00:41:48.218 | 70.00th=[ 330], 80.00th=[ 342], 90.00th=[ 368], 95.00th=[ 388], 00:41:48.218 | 99.00th=[ 447], 99.50th=[ 451], 99.90th=[ 456], 99.95th=[ 456], 00:41:48.218 | 99.99th=[ 456] 00:41:48.218 bw ( KiB/s): min= 128, max= 384, per=3.65%, avg=211.20, stdev=72.60, samples=20 00:41:48.218 iops : min= 32, max= 96, avg=52.80, stdev=18.15, samples=20 00:41:48.218 lat (msec) : 100=2.94%, 250=19.49%, 500=77.57% 00:41:48.218 cpu : usr=97.42%, sys=1.60%, ctx=86, majf=0, minf=17 00:41:48.218 IO depths : 1=3.7%, 2=9.9%, 4=25.0%, 8=52.6%, 16=8.8%, 32=0.0%, >=64=0.0% 00:41:48.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.218 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.218 issued rwts: total=544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:48.218 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:48.218 filename0: (groupid=0, jobs=1): err= 0: pid=1601057: Sat Nov 2 14:57:38 2024 00:41:48.218 read: IOPS=52, BW=209KiB/s (214kB/s)(2112KiB/10085msec) 00:41:48.218 slat (usec): min=8, max=108, avg=31.08, stdev=22.13 00:41:48.218 clat (msec): min=131, max=431, avg=305.36, stdev=52.47 00:41:48.218 lat (msec): min=131, max=431, avg=305.39, stdev=52.46 00:41:48.218 clat percentiles (msec): 00:41:48.218 | 1.00th=[ 167], 5.00th=[ 213], 10.00th=[ 213], 20.00th=[ 259], 00:41:48.218 | 30.00th=[ 284], 40.00th=[ 313], 50.00th=[ 326], 60.00th=[ 330], 00:41:48.218 | 70.00th=[ 330], 80.00th=[ 342], 90.00th=[ 368], 95.00th=[ 368], 00:41:48.218 | 99.00th=[ 401], 99.50th=[ 418], 99.90th=[ 430], 99.95th=[ 430], 00:41:48.218 | 99.99th=[ 430] 00:41:48.218 bw ( KiB/s): min= 128, max= 384, per=3.53%, avg=204.80, stdev=76.75, samples=20 00:41:48.218 iops : min= 32, max= 96, avg=51.20, stdev=19.19, samples=20 00:41:48.218 lat (msec) : 250=15.53%, 500=84.47% 00:41:48.218 cpu : usr=98.35%, sys=1.21%, ctx=25, majf=0, minf=35 00:41:48.218 IO depths : 1=4.9%, 2=11.2%, 4=25.0%, 8=51.3%, 16=7.6%, 32=0.0%, >=64=0.0% 00:41:48.218 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.218 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.218 issued rwts: total=528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:48.218 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:48.218 filename0: (groupid=0, jobs=1): err= 0: pid=1601058: Sat Nov 2 14:57:38 2024 00:41:48.218 read: IOPS=52, BW=209KiB/s (214kB/s)(2112KiB/10090msec) 00:41:48.218 slat (nsec): min=8393, max=96886, avg=20940.12, stdev=10087.16 00:41:48.218 clat (msec): min=180, max=376, avg=305.55, stdev=52.99 00:41:48.218 lat (msec): min=180, max=376, avg=305.57, stdev=52.99 00:41:48.218 clat percentiles (msec): 00:41:48.218 | 1.00th=[ 180], 5.00th=[ 211], 10.00th=[ 215], 20.00th=[ 264], 00:41:48.218 | 30.00th=[ 279], 40.00th=[ 313], 50.00th=[ 326], 60.00th=[ 330], 00:41:48.218 | 70.00th=[ 338], 80.00th=[ 347], 90.00th=[ 372], 95.00th=[ 372], 00:41:48.218 | 99.00th=[ 376], 99.50th=[ 376], 99.90th=[ 376], 99.95th=[ 376], 00:41:48.218 | 99.99th=[ 376] 00:41:48.218 bw ( KiB/s): min= 128, max= 256, per=3.53%, avg=204.80, stdev=64.34, samples=20 00:41:48.218 iops : min= 32, max= 64, avg=51.20, stdev=16.08, samples=20 00:41:48.218 lat (msec) : 250=18.18%, 500=81.82% 00:41:48.218 cpu : usr=97.10%, sys=1.74%, ctx=60, majf=0, minf=20 00:41:48.218 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:48.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.218 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.218 issued rwts: total=528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:48.218 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:48.218 filename1: (groupid=0, jobs=1): err= 0: pid=1601059: Sat Nov 2 14:57:38 2024 00:41:48.218 read: IOPS=69, BW=279KiB/s (285kB/s)(2816KiB/10111msec) 00:41:48.218 slat (nsec): min=8012, max=99578, avg=22278.35, stdev=22134.04 00:41:48.218 clat (msec): min=127, max=400, avg=228.50, stdev=31.77 00:41:48.218 lat (msec): min=127, max=400, avg=228.52, stdev=31.78 00:41:48.218 clat percentiles (msec): 00:41:48.218 | 1.00th=[ 128], 5.00th=[ 197], 10.00th=[ 203], 20.00th=[ 211], 00:41:48.218 | 30.00th=[ 215], 40.00th=[ 220], 50.00th=[ 222], 60.00th=[ 228], 00:41:48.218 | 70.00th=[ 232], 80.00th=[ 253], 90.00th=[ 266], 95.00th=[ 288], 00:41:48.218 | 99.00th=[ 313], 99.50th=[ 376], 99.90th=[ 401], 99.95th=[ 401], 00:41:48.218 | 99.99th=[ 401] 00:41:48.218 bw ( KiB/s): min= 128, max= 384, per=4.76%, avg=275.20, stdev=57.95, samples=20 00:41:48.218 iops : min= 32, max= 96, avg=68.80, stdev=14.49, samples=20 00:41:48.218 lat (msec) : 250=78.69%, 500=21.31% 00:41:48.218 cpu : usr=97.87%, sys=1.38%, ctx=36, majf=0, minf=28 00:41:48.218 IO depths : 1=1.1%, 2=7.4%, 4=25.0%, 8=55.1%, 16=11.4%, 32=0.0%, >=64=0.0% 00:41:48.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.218 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.218 issued rwts: total=704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:48.218 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:48.218 filename1: (groupid=0, jobs=1): err= 0: pid=1601060: Sat Nov 2 14:57:38 2024 00:41:48.218 read: IOPS=78, BW=316KiB/s (323kB/s)(3200KiB/10130msec) 00:41:48.219 slat (nsec): min=4215, max=87430, avg=16274.27, stdev=16101.51 00:41:48.219 clat (usec): min=1792, max=282746, avg=202052.59, stdev=59270.80 00:41:48.219 lat (usec): min=1817, max=282755, avg=202068.87, stdev=59267.12 
00:41:48.219 clat percentiles (usec): 00:41:48.219 | 1.00th=[ 1958], 5.00th=[ 39584], 10.00th=[107480], 20.00th=[196084], 00:41:48.219 | 30.00th=[206570], 40.00th=[212861], 50.00th=[217056], 60.00th=[219153], 00:41:48.219 | 70.00th=[227541], 80.00th=[231736], 90.00th=[250610], 95.00th=[263193], 00:41:48.219 | 99.00th=[278922], 99.50th=[283116], 99.90th=[283116], 99.95th=[283116], 00:41:48.219 | 99.99th=[283116] 00:41:48.219 bw ( KiB/s): min= 256, max= 766, per=5.42%, avg=313.50, stdev=118.93, samples=20 00:41:48.219 iops : min= 64, max= 191, avg=78.35, stdev=29.63, samples=20 00:41:48.219 lat (msec) : 2=1.12%, 4=0.88%, 10=2.00%, 50=3.75%, 100=0.50% 00:41:48.219 lat (msec) : 250=81.50%, 500=10.25% 00:41:48.219 cpu : usr=98.37%, sys=1.18%, ctx=29, majf=0, minf=28 00:41:48.219 IO depths : 1=2.5%, 2=8.5%, 4=24.0%, 8=55.0%, 16=10.0%, 32=0.0%, >=64=0.0% 00:41:48.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.219 complete : 0=0.0%, 4=94.0%, 8=0.3%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.219 issued rwts: total=800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:48.219 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:48.219 filename1: (groupid=0, jobs=1): err= 0: pid=1601061: Sat Nov 2 14:57:38 2024 00:41:48.219 read: IOPS=52, BW=209KiB/s (214kB/s)(2112KiB/10106msec) 00:41:48.219 slat (nsec): min=6235, max=98535, avg=36274.41, stdev=17473.54 00:41:48.219 clat (msec): min=143, max=388, avg=305.91, stdev=50.76 00:41:48.219 lat (msec): min=143, max=388, avg=305.95, stdev=50.75 00:41:48.219 clat percentiles (msec): 00:41:48.219 | 1.00th=[ 211], 5.00th=[ 213], 10.00th=[ 218], 20.00th=[ 264], 00:41:48.219 | 30.00th=[ 288], 40.00th=[ 313], 50.00th=[ 326], 60.00th=[ 330], 00:41:48.219 | 70.00th=[ 330], 80.00th=[ 342], 90.00th=[ 368], 95.00th=[ 384], 00:41:48.219 | 99.00th=[ 388], 99.50th=[ 388], 99.90th=[ 388], 99.95th=[ 388], 00:41:48.219 | 99.99th=[ 388] 00:41:48.219 bw ( KiB/s): min= 128, max= 384, per=3.53%, avg=204.80, stdev=76.58, samples=20 00:41:48.219 iops : min= 32, max= 96, avg=51.20, stdev=19.14, samples=20 00:41:48.219 lat (msec) : 250=17.80%, 500=82.20% 00:41:48.219 cpu : usr=98.39%, sys=1.11%, ctx=29, majf=0, minf=20 00:41:48.219 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:41:48.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.219 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.219 issued rwts: total=528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:48.219 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:48.219 filename1: (groupid=0, jobs=1): err= 0: pid=1601062: Sat Nov 2 14:57:38 2024 00:41:48.219 read: IOPS=62, BW=249KiB/s (255kB/s)(2520KiB/10111msec) 00:41:48.219 slat (usec): min=8, max=102, avg=40.23, stdev=31.67 00:41:48.219 clat (msec): min=123, max=451, avg=255.04, stdev=49.92 00:41:48.219 lat (msec): min=123, max=451, avg=255.08, stdev=49.94 00:41:48.219 clat percentiles (msec): 00:41:48.219 | 1.00th=[ 169], 5.00th=[ 192], 10.00th=[ 209], 20.00th=[ 215], 00:41:48.219 | 30.00th=[ 224], 40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 249], 00:41:48.219 | 70.00th=[ 284], 80.00th=[ 313], 90.00th=[ 326], 95.00th=[ 326], 00:41:48.219 | 99.00th=[ 426], 99.50th=[ 430], 99.90th=[ 451], 99.95th=[ 451], 00:41:48.219 | 99.99th=[ 451] 00:41:48.219 bw ( KiB/s): min= 128, max= 368, per=4.24%, avg=245.60, stdev=55.49, samples=20 00:41:48.219 iops : min= 32, max= 92, avg=61.40, stdev=13.87, samples=20 00:41:48.219 lat (msec) : 
250=62.54%, 500=37.46% 00:41:48.219 cpu : usr=98.06%, sys=1.27%, ctx=27, majf=0, minf=28 00:41:48.219 IO depths : 1=1.7%, 2=5.9%, 4=18.6%, 8=63.0%, 16=10.8%, 32=0.0%, >=64=0.0% 00:41:48.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.219 complete : 0=0.0%, 4=92.3%, 8=2.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.219 issued rwts: total=630,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:48.219 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:48.219 filename1: (groupid=0, jobs=1): err= 0: pid=1601063: Sat Nov 2 14:57:38 2024 00:41:48.219 read: IOPS=52, BW=209KiB/s (214kB/s)(2112KiB/10090msec) 00:41:48.219 slat (usec): min=8, max=300, avg=50.63, stdev=30.46 00:41:48.219 clat (msec): min=180, max=496, avg=305.28, stdev=59.12 00:41:48.219 lat (msec): min=180, max=496, avg=305.33, stdev=59.11 00:41:48.219 clat percentiles (msec): 00:41:48.219 | 1.00th=[ 180], 5.00th=[ 211], 10.00th=[ 213], 20.00th=[ 239], 00:41:48.219 | 30.00th=[ 275], 40.00th=[ 309], 50.00th=[ 321], 60.00th=[ 330], 00:41:48.219 | 70.00th=[ 338], 80.00th=[ 342], 90.00th=[ 372], 95.00th=[ 376], 00:41:48.219 | 99.00th=[ 468], 99.50th=[ 485], 99.90th=[ 498], 99.95th=[ 498], 00:41:48.219 | 99.99th=[ 498] 00:41:48.219 bw ( KiB/s): min= 128, max= 256, per=3.53%, avg=204.75, stdev=64.29, samples=20 00:41:48.219 iops : min= 32, max= 64, avg=51.15, stdev=16.04, samples=20 00:41:48.219 lat (msec) : 250=20.45%, 500=79.55% 00:41:48.219 cpu : usr=98.31%, sys=1.17%, ctx=36, majf=0, minf=17 00:41:48.219 IO depths : 1=5.1%, 2=11.4%, 4=25.0%, 8=51.1%, 16=7.4%, 32=0.0%, >=64=0.0% 00:41:48.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.219 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.219 issued rwts: total=528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:48.219 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:48.219 filename1: (groupid=0, jobs=1): err= 0: pid=1601064: Sat Nov 2 14:57:38 2024 00:41:48.219 read: IOPS=73, BW=293KiB/s (300kB/s)(2944KiB/10055msec) 00:41:48.219 slat (nsec): min=8252, max=98531, avg=17732.02, stdev=17944.14 00:41:48.219 clat (msec): min=163, max=270, avg=218.41, stdev=22.15 00:41:48.219 lat (msec): min=163, max=270, avg=218.43, stdev=22.15 00:41:48.219 clat percentiles (msec): 00:41:48.219 | 1.00th=[ 163], 5.00th=[ 182], 10.00th=[ 194], 20.00th=[ 203], 00:41:48.219 | 30.00th=[ 209], 40.00th=[ 215], 50.00th=[ 218], 60.00th=[ 222], 00:41:48.219 | 70.00th=[ 228], 80.00th=[ 232], 90.00th=[ 251], 95.00th=[ 264], 00:41:48.219 | 99.00th=[ 271], 99.50th=[ 271], 99.90th=[ 271], 99.95th=[ 271], 00:41:48.219 | 99.99th=[ 271] 00:41:48.219 bw ( KiB/s): min= 256, max= 384, per=4.99%, avg=288.00, stdev=56.87, samples=20 00:41:48.219 iops : min= 64, max= 96, avg=72.00, stdev=14.22, samples=20 00:41:48.219 lat (msec) : 250=89.13%, 500=10.87% 00:41:48.219 cpu : usr=97.38%, sys=1.74%, ctx=64, majf=0, minf=25 00:41:48.219 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:48.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.219 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.219 issued rwts: total=736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:48.219 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:48.219 filename1: (groupid=0, jobs=1): err= 0: pid=1601065: Sat Nov 2 14:57:38 2024 00:41:48.219 read: IOPS=52, BW=209KiB/s (214kB/s)(2112KiB/10090msec) 00:41:48.219 slat (nsec): min=6494, 
max=90605, avg=27099.21, stdev=11242.17 00:41:48.219 clat (msec): min=211, max=454, avg=305.44, stdev=54.05 00:41:48.219 lat (msec): min=211, max=454, avg=305.47, stdev=54.05 00:41:48.219 clat percentiles (msec): 00:41:48.219 | 1.00th=[ 211], 5.00th=[ 213], 10.00th=[ 218], 20.00th=[ 264], 00:41:48.219 | 30.00th=[ 279], 40.00th=[ 305], 50.00th=[ 321], 60.00th=[ 330], 00:41:48.219 | 70.00th=[ 330], 80.00th=[ 342], 90.00th=[ 368], 95.00th=[ 384], 00:41:48.219 | 99.00th=[ 447], 99.50th=[ 447], 99.90th=[ 456], 99.95th=[ 456], 00:41:48.219 | 99.99th=[ 456] 00:41:48.219 bw ( KiB/s): min= 128, max= 256, per=3.53%, avg=204.80, stdev=56.53, samples=20 00:41:48.219 iops : min= 32, max= 64, avg=51.20, stdev=14.13, samples=20 00:41:48.219 lat (msec) : 250=18.56%, 500=81.44% 00:41:48.219 cpu : usr=98.36%, sys=1.16%, ctx=15, majf=0, minf=26 00:41:48.219 IO depths : 1=4.0%, 2=10.2%, 4=25.0%, 8=52.3%, 16=8.5%, 32=0.0%, >=64=0.0% 00:41:48.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.219 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.219 issued rwts: total=528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:48.219 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:48.219 filename1: (groupid=0, jobs=1): err= 0: pid=1601066: Sat Nov 2 14:57:38 2024 00:41:48.219 read: IOPS=52, BW=209KiB/s (214kB/s)(2112KiB/10101msec) 00:41:48.219 slat (usec): min=19, max=150, avg=75.74, stdev=14.88 00:41:48.219 clat (msec): min=178, max=459, avg=305.43, stdev=54.67 00:41:48.219 lat (msec): min=178, max=459, avg=305.51, stdev=54.67 00:41:48.219 clat percentiles (msec): 00:41:48.219 | 1.00th=[ 180], 5.00th=[ 211], 10.00th=[ 213], 20.00th=[ 264], 00:41:48.219 | 30.00th=[ 284], 40.00th=[ 309], 50.00th=[ 326], 60.00th=[ 330], 00:41:48.219 | 70.00th=[ 338], 80.00th=[ 347], 90.00th=[ 368], 95.00th=[ 372], 00:41:48.219 | 99.00th=[ 376], 99.50th=[ 426], 99.90th=[ 460], 99.95th=[ 460], 00:41:48.219 | 99.99th=[ 460] 00:41:48.219 bw ( KiB/s): min= 128, max= 256, per=3.53%, avg=204.80, stdev=62.85, samples=20 00:41:48.219 iops : min= 32, max= 64, avg=51.20, stdev=15.71, samples=20 00:41:48.219 lat (msec) : 250=18.94%, 500=81.06% 00:41:48.219 cpu : usr=96.74%, sys=1.92%, ctx=218, majf=0, minf=28 00:41:48.219 IO depths : 1=5.3%, 2=11.6%, 4=25.0%, 8=50.9%, 16=7.2%, 32=0.0%, >=64=0.0% 00:41:48.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.219 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.219 issued rwts: total=528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:48.219 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:48.219 filename2: (groupid=0, jobs=1): err= 0: pid=1601067: Sat Nov 2 14:57:38 2024 00:41:48.219 read: IOPS=69, BW=279KiB/s (286kB/s)(2824KiB/10110msec) 00:41:48.219 slat (usec): min=8, max=123, avg=17.33, stdev=16.26 00:41:48.219 clat (msec): min=168, max=376, avg=228.12, stdev=30.10 00:41:48.219 lat (msec): min=168, max=376, avg=228.13, stdev=30.10 00:41:48.219 clat percentiles (msec): 00:41:48.219 | 1.00th=[ 182], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 209], 00:41:48.219 | 30.00th=[ 213], 40.00th=[ 218], 50.00th=[ 222], 60.00th=[ 226], 00:41:48.219 | 70.00th=[ 243], 80.00th=[ 247], 90.00th=[ 268], 95.00th=[ 288], 00:41:48.219 | 99.00th=[ 372], 99.50th=[ 376], 99.90th=[ 376], 99.95th=[ 376], 00:41:48.219 | 99.99th=[ 376] 00:41:48.219 bw ( KiB/s): min= 144, max= 384, per=4.78%, avg=276.00, stdev=57.31, samples=20 00:41:48.219 iops : min= 36, max= 96, avg=69.00, 
stdev=14.33, samples=20 00:41:48.219 lat (msec) : 250=84.84%, 500=15.16% 00:41:48.219 cpu : usr=97.62%, sys=1.46%, ctx=29, majf=0, minf=31 00:41:48.220 IO depths : 1=1.1%, 2=5.5%, 4=19.3%, 8=62.6%, 16=11.5%, 32=0.0%, >=64=0.0% 00:41:48.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.220 complete : 0=0.0%, 4=92.5%, 8=2.0%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.220 issued rwts: total=706,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:48.220 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:48.220 filename2: (groupid=0, jobs=1): err= 0: pid=1601068: Sat Nov 2 14:57:38 2024 00:41:48.220 read: IOPS=64, BW=258KiB/s (264kB/s)(2608KiB/10111msec) 00:41:48.220 slat (usec): min=8, max=244, avg=40.82, stdev=33.84 00:41:48.220 clat (msec): min=115, max=446, avg=246.99, stdev=61.15 00:41:48.220 lat (msec): min=115, max=446, avg=247.03, stdev=61.16 00:41:48.220 clat percentiles (msec): 00:41:48.220 | 1.00th=[ 116], 5.00th=[ 174], 10.00th=[ 184], 20.00th=[ 201], 00:41:48.220 | 30.00th=[ 213], 40.00th=[ 222], 50.00th=[ 230], 60.00th=[ 241], 00:41:48.220 | 70.00th=[ 271], 80.00th=[ 326], 90.00th=[ 330], 95.00th=[ 338], 00:41:48.220 | 99.00th=[ 405], 99.50th=[ 439], 99.90th=[ 447], 99.95th=[ 447], 00:41:48.220 | 99.99th=[ 447] 00:41:48.220 bw ( KiB/s): min= 128, max= 384, per=4.40%, avg=254.40, stdev=65.23, samples=20 00:41:48.220 iops : min= 32, max= 96, avg=63.60, stdev=16.31, samples=20 00:41:48.220 lat (msec) : 250=66.56%, 500=33.44% 00:41:48.220 cpu : usr=97.58%, sys=1.51%, ctx=62, majf=0, minf=27 00:41:48.220 IO depths : 1=2.9%, 2=7.2%, 4=19.5%, 8=60.6%, 16=9.8%, 32=0.0%, >=64=0.0% 00:41:48.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.220 complete : 0=0.0%, 4=92.6%, 8=2.0%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.220 issued rwts: total=652,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:48.220 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:48.220 filename2: (groupid=0, jobs=1): err= 0: pid=1601069: Sat Nov 2 14:57:38 2024 00:41:48.220 read: IOPS=57, BW=230KiB/s (236kB/s)(2328KiB/10111msec) 00:41:48.220 slat (usec): min=9, max=108, avg=59.40, stdev=24.44 00:41:48.220 clat (msec): min=85, max=471, avg=276.05, stdev=69.74 00:41:48.220 lat (msec): min=85, max=471, avg=276.11, stdev=69.75 00:41:48.220 clat percentiles (msec): 00:41:48.220 | 1.00th=[ 86], 5.00th=[ 169], 10.00th=[ 203], 20.00th=[ 211], 00:41:48.220 | 30.00th=[ 222], 40.00th=[ 262], 50.00th=[ 288], 60.00th=[ 317], 00:41:48.220 | 70.00th=[ 326], 80.00th=[ 330], 90.00th=[ 368], 95.00th=[ 372], 00:41:48.220 | 99.00th=[ 388], 99.50th=[ 430], 99.90th=[ 472], 99.95th=[ 472], 00:41:48.220 | 99.99th=[ 472] 00:41:48.220 bw ( KiB/s): min= 128, max= 384, per=3.91%, avg=226.40, stdev=65.72, samples=20 00:41:48.220 iops : min= 32, max= 96, avg=56.60, stdev=16.43, samples=20 00:41:48.220 lat (msec) : 100=2.75%, 250=35.74%, 500=61.51% 00:41:48.220 cpu : usr=98.46%, sys=1.11%, ctx=8, majf=0, minf=33 00:41:48.220 IO depths : 1=4.0%, 2=10.0%, 4=24.2%, 8=53.3%, 16=8.6%, 32=0.0%, >=64=0.0% 00:41:48.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.220 complete : 0=0.0%, 4=93.8%, 8=0.5%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.220 issued rwts: total=582,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:48.220 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:48.220 filename2: (groupid=0, jobs=1): err= 0: pid=1601070: Sat Nov 2 14:57:38 2024 00:41:48.220 read: IOPS=52, BW=209KiB/s 
(214kB/s)(2112KiB/10087msec) 00:41:48.220 slat (nsec): min=8920, max=52119, avg=22147.04, stdev=9370.52 00:41:48.220 clat (msec): min=177, max=440, avg=305.46, stdev=55.62 00:41:48.220 lat (msec): min=177, max=440, avg=305.48, stdev=55.61 00:41:48.220 clat percentiles (msec): 00:41:48.220 | 1.00th=[ 178], 5.00th=[ 211], 10.00th=[ 215], 20.00th=[ 264], 00:41:48.220 | 30.00th=[ 279], 40.00th=[ 309], 50.00th=[ 326], 60.00th=[ 334], 00:41:48.220 | 70.00th=[ 338], 80.00th=[ 347], 90.00th=[ 372], 95.00th=[ 372], 00:41:48.220 | 99.00th=[ 409], 99.50th=[ 426], 99.90th=[ 443], 99.95th=[ 443], 00:41:48.220 | 99.99th=[ 443] 00:41:48.220 bw ( KiB/s): min= 128, max= 256, per=3.53%, avg=204.80, stdev=64.34, samples=20 00:41:48.220 iops : min= 32, max= 64, avg=51.20, stdev=16.08, samples=20 00:41:48.220 lat (msec) : 250=19.32%, 500=80.68% 00:41:48.220 cpu : usr=98.29%, sys=1.26%, ctx=24, majf=0, minf=19 00:41:48.220 IO depths : 1=5.7%, 2=11.9%, 4=25.0%, 8=50.6%, 16=6.8%, 32=0.0%, >=64=0.0% 00:41:48.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.220 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.220 issued rwts: total=528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:48.220 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:48.220 filename2: (groupid=0, jobs=1): err= 0: pid=1601071: Sat Nov 2 14:57:38 2024 00:41:48.220 read: IOPS=52, BW=209KiB/s (214kB/s)(2112KiB/10111msec) 00:41:48.220 slat (nsec): min=4332, max=57389, avg=28898.63, stdev=8248.53 00:41:48.220 clat (msec): min=202, max=445, avg=306.03, stdev=52.20 00:41:48.220 lat (msec): min=202, max=445, avg=306.06, stdev=52.20 00:41:48.220 clat percentiles (msec): 00:41:48.220 | 1.00th=[ 203], 5.00th=[ 209], 10.00th=[ 213], 20.00th=[ 264], 00:41:48.220 | 30.00th=[ 284], 40.00th=[ 296], 50.00th=[ 326], 60.00th=[ 330], 00:41:48.220 | 70.00th=[ 330], 80.00th=[ 342], 90.00th=[ 368], 95.00th=[ 384], 00:41:48.220 | 99.00th=[ 388], 99.50th=[ 443], 99.90th=[ 447], 99.95th=[ 447], 00:41:48.220 | 99.99th=[ 447] 00:41:48.220 bw ( KiB/s): min= 128, max= 256, per=3.53%, avg=204.80, stdev=62.85, samples=20 00:41:48.220 iops : min= 32, max= 64, avg=51.20, stdev=15.71, samples=20 00:41:48.220 lat (msec) : 250=15.15%, 500=84.85% 00:41:48.220 cpu : usr=97.83%, sys=1.50%, ctx=43, majf=0, minf=22 00:41:48.220 IO depths : 1=5.9%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:41:48.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.220 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.220 issued rwts: total=528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:48.220 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:48.220 filename2: (groupid=0, jobs=1): err= 0: pid=1601072: Sat Nov 2 14:57:38 2024 00:41:48.220 read: IOPS=68, BW=272KiB/s (279kB/s)(2752KiB/10111msec) 00:41:48.220 slat (nsec): min=8138, max=60495, avg=15500.39, stdev=9232.93 00:41:48.220 clat (msec): min=150, max=372, avg=233.48, stdev=36.84 00:41:48.220 lat (msec): min=150, max=372, avg=233.49, stdev=36.84 00:41:48.220 clat percentiles (msec): 00:41:48.220 | 1.00th=[ 159], 5.00th=[ 184], 10.00th=[ 197], 20.00th=[ 207], 00:41:48.220 | 30.00th=[ 213], 40.00th=[ 220], 50.00th=[ 226], 60.00th=[ 234], 00:41:48.220 | 70.00th=[ 247], 80.00th=[ 264], 90.00th=[ 279], 95.00th=[ 288], 00:41:48.220 | 99.00th=[ 368], 99.50th=[ 372], 99.90th=[ 372], 99.95th=[ 372], 00:41:48.220 | 99.99th=[ 372] 00:41:48.220 bw ( KiB/s): min= 224, max= 336, per=4.71%, 
avg=272.80, stdev=28.66, samples=20 00:41:48.220 iops : min= 56, max= 84, avg=68.20, stdev= 7.16, samples=20 00:41:48.220 lat (msec) : 250=76.74%, 500=23.26% 00:41:48.220 cpu : usr=98.44%, sys=1.19%, ctx=14, majf=0, minf=20 00:41:48.220 IO depths : 1=1.0%, 2=3.1%, 4=11.9%, 8=72.2%, 16=11.8%, 32=0.0%, >=64=0.0% 00:41:48.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.220 complete : 0=0.0%, 4=90.3%, 8=4.5%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.220 issued rwts: total=688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:48.220 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:48.220 filename2: (groupid=0, jobs=1): err= 0: pid=1601073: Sat Nov 2 14:57:38 2024 00:41:48.220 read: IOPS=52, BW=209KiB/s (214kB/s)(2112KiB/10090msec) 00:41:48.220 slat (usec): min=16, max=111, avg=75.87, stdev=14.78 00:41:48.220 clat (msec): min=178, max=484, avg=305.11, stdev=58.29 00:41:48.220 lat (msec): min=178, max=484, avg=305.18, stdev=58.30 00:41:48.220 clat percentiles (msec): 00:41:48.220 | 1.00th=[ 180], 5.00th=[ 211], 10.00th=[ 213], 20.00th=[ 239], 00:41:48.220 | 30.00th=[ 275], 40.00th=[ 309], 50.00th=[ 326], 60.00th=[ 330], 00:41:48.220 | 70.00th=[ 338], 80.00th=[ 347], 90.00th=[ 372], 95.00th=[ 372], 00:41:48.220 | 99.00th=[ 456], 99.50th=[ 485], 99.90th=[ 485], 99.95th=[ 485], 00:41:48.220 | 99.99th=[ 485] 00:41:48.220 bw ( KiB/s): min= 128, max= 272, per=3.53%, avg=204.80, stdev=63.07, samples=20 00:41:48.220 iops : min= 32, max= 68, avg=51.20, stdev=15.77, samples=20 00:41:48.220 lat (msec) : 250=20.45%, 500=79.55% 00:41:48.220 cpu : usr=97.72%, sys=1.40%, ctx=71, majf=0, minf=25 00:41:48.220 IO depths : 1=4.0%, 2=10.2%, 4=25.0%, 8=52.3%, 16=8.5%, 32=0.0%, >=64=0.0% 00:41:48.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.220 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.220 issued rwts: total=528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:48.220 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:48.220 filename2: (groupid=0, jobs=1): err= 0: pid=1601074: Sat Nov 2 14:57:38 2024 00:41:48.220 read: IOPS=50, BW=203KiB/s (208kB/s)(2048KiB/10087msec) 00:41:48.220 slat (nsec): min=8293, max=84917, avg=26916.86, stdev=22895.75 00:41:48.220 clat (msec): min=132, max=395, avg=313.40, stdev=49.25 00:41:48.220 lat (msec): min=132, max=395, avg=313.42, stdev=49.23 00:41:48.220 clat percentiles (msec): 00:41:48.220 | 1.00th=[ 207], 5.00th=[ 213], 10.00th=[ 213], 20.00th=[ 284], 00:41:48.220 | 30.00th=[ 313], 40.00th=[ 321], 50.00th=[ 326], 60.00th=[ 330], 00:41:48.220 | 70.00th=[ 338], 80.00th=[ 351], 90.00th=[ 368], 95.00th=[ 376], 00:41:48.220 | 99.00th=[ 388], 99.50th=[ 388], 99.90th=[ 397], 99.95th=[ 397], 00:41:48.220 | 99.99th=[ 397] 00:41:48.220 bw ( KiB/s): min= 128, max= 384, per=3.43%, avg=198.40, stdev=77.42, samples=20 00:41:48.220 iops : min= 32, max= 96, avg=49.60, stdev=19.35, samples=20 00:41:48.220 lat (msec) : 250=12.11%, 500=87.89% 00:41:48.220 cpu : usr=98.42%, sys=1.13%, ctx=17, majf=0, minf=22 00:41:48.220 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:41:48.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.220 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.220 issued rwts: total=512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:48.220 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:48.220 00:41:48.220 Run status group 0 (all jobs): 
00:41:48.220 READ: bw=5773KiB/s (5912kB/s), 203KiB/s-316KiB/s (208kB/s-323kB/s), io=57.1MiB (59.9MB), run=10055-10130msec 00:41:48.220 14:57:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:41:48.220 14:57:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:41:48.220 14:57:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:48.220 14:57:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:48.221 bdev_null0 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:48.221 [2024-11-02 14:57:39.201723] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 
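The trace above shows dif.sh's create_subsystem() building each target out of four RPCs: a DIF-enabled null bdev (bdev_null_create ... --md-size 16 --dif-type 1), an NVMe-oF subsystem, a namespace mapping, and a TCP listener on 10.0.0.2:4420. As a rough standalone equivalent (a sketch only, not the harness code, and assuming nvmf_tgt is already running with a TCP transport that the test set up earlier), the same sequence could be issued by hand with scripts/rpc.py:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # 64 MB null bdev, 512-byte blocks, 16-byte metadata, DIF type 1
  ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  # NVMe-oF subsystem that any host may connect to
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
      --serial-number 53313233-0 --allow-any-host
  # expose the null bdev as a namespace and listen on TCP 10.0.0.2:4420
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.2 -s 4420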
00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:48.221 bdev_null1 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # local subsystem config 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:41:48.221 { 00:41:48.221 "params": { 00:41:48.221 "name": "Nvme$subsystem", 00:41:48.221 "trtype": "$TEST_TRANSPORT", 00:41:48.221 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:48.221 "adrfam": "ipv4", 00:41:48.221 "trsvcid": "$NVMF_PORT", 00:41:48.221 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:48.221 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:48.221 "hdgst": ${hdgst:-false}, 00:41:48.221 "ddgst": ${ddgst:-false} 00:41:48.221 }, 00:41:48.221 "method": "bdev_nvme_attach_controller" 00:41:48.221 } 00:41:48.221 EOF 00:41:48.221 )") 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:41:48.221 14:57:39 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:41:48.221 14:57:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:41:48.221 { 00:41:48.221 "params": { 00:41:48.221 "name": "Nvme$subsystem", 00:41:48.221 "trtype": "$TEST_TRANSPORT", 00:41:48.221 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:48.221 "adrfam": "ipv4", 00:41:48.221 "trsvcid": "$NVMF_PORT", 00:41:48.221 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:48.221 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:48.221 "hdgst": ${hdgst:-false}, 00:41:48.221 "ddgst": ${ddgst:-false} 00:41:48.222 }, 00:41:48.222 "method": "bdev_nvme_attach_controller" 00:41:48.222 } 00:41:48.222 EOF 00:41:48.222 )") 00:41:48.222 14:57:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:41:48.222 14:57:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:41:48.222 14:57:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:48.222 14:57:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # jq . 
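Here gen_nvmf_target_json collects one bdev_nvme_attach_controller fragment per subsystem into the config array (the heredocs traced above), joins them with IFS=, and runs the result through jq before handing it to fio_bdev on /dev/fd/62. A minimal sketch of that pattern follows; the outer SPDK JSON-config wrapper is assumed from SPDK's usual config layout rather than copied from the harness:

  config=()
  for sub in 0 1; do
    # one attach-controller entry per NVMe-oF subsystem, mirroring the fragments above
    config+=("{\"params\":{\"name\":\"Nvme$sub\",\"trtype\":\"tcp\",\"traddr\":\"10.0.0.2\",\"adrfam\":\"ipv4\",\"trsvcid\":\"4420\",\"subnqn\":\"nqn.2016-06.io.spdk:cnode$sub\",\"hostnqn\":\"nqn.2016-06.io.spdk:host$sub\",\"hdgst\":false,\"ddgst\":false},\"method\":\"bdev_nvme_attach_controller\"}")
  done
  # join the fragments with commas and pretty-print/validate with jq,
  # as the IFS=, / printf / jq . steps in the trace do
  (IFS=,; printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}\n' "${config[*]}") | jq .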
00:41:48.222 14:57:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=, 00:41:48.222 14:57:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:41:48.222 "params": { 00:41:48.222 "name": "Nvme0", 00:41:48.222 "trtype": "tcp", 00:41:48.222 "traddr": "10.0.0.2", 00:41:48.222 "adrfam": "ipv4", 00:41:48.222 "trsvcid": "4420", 00:41:48.222 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:48.222 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:48.222 "hdgst": false, 00:41:48.222 "ddgst": false 00:41:48.222 }, 00:41:48.222 "method": "bdev_nvme_attach_controller" 00:41:48.222 },{ 00:41:48.222 "params": { 00:41:48.222 "name": "Nvme1", 00:41:48.222 "trtype": "tcp", 00:41:48.222 "traddr": "10.0.0.2", 00:41:48.222 "adrfam": "ipv4", 00:41:48.222 "trsvcid": "4420", 00:41:48.222 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:48.222 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:48.222 "hdgst": false, 00:41:48.222 "ddgst": false 00:41:48.222 }, 00:41:48.222 "method": "bdev_nvme_attach_controller" 00:41:48.222 }' 00:41:48.222 14:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:48.222 14:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:48.222 14:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:48.222 14:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:48.222 14:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:41:48.222 14:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:48.222 14:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:48.222 14:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:48.222 14:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:48.222 14:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:48.222 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:41:48.222 ... 00:41:48.222 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:41:48.222 ... 
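For reference: in the trace above, target/dif.sh hands fio two anonymous pipes. /dev/fd/62 carries the SPDK JSON configuration that was just printed (one bdev_nvme_attach_controller entry per subsystem), /dev/fd/61 carries the generated fio job file, and LD_PRELOAD pulls in the spdk_bdev ioengine plugin built under spdk/build/fio. A minimal standalone sketch of the same kind of invocation is shown below; the bdev name Nvme0n1, the file names and the job values are illustrative assumptions rather than values taken from this run.

# Sketch only: paths, bdev name and job parameters are assumptions, not from this log.
SPDK_PLUGIN=/path/to/spdk/build/fio/spdk_bdev      # fio external ioengine shipped with SPDK
cat > bdev.json <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [
  { "method": "bdev_nvme_attach_controller",
    "params": { "name": "Nvme0", "trtype": "tcp", "adrfam": "ipv4",
                "traddr": "10.0.0.2", "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hdgst": false, "ddgst": false } } ] } ] }
EOF
LD_PRELOAD=$SPDK_PLUGIN fio --name=dif_rand --thread=1 --ioengine=spdk_bdev \
    --spdk_json_conf=./bdev.json --filename=Nvme0n1 \
    --rw=randread --bs=8k --iodepth=8 --numjobs=1 --runtime=5 --time_based=1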
00:41:48.222 fio-3.35 00:41:48.222 Starting 4 threads 00:41:53.505 00:41:53.505 filename0: (groupid=0, jobs=1): err= 0: pid=1602573: Sat Nov 2 14:57:45 2024 00:41:53.505 read: IOPS=1872, BW=14.6MiB/s (15.3MB/s)(73.2MiB/5004msec) 00:41:53.505 slat (nsec): min=6574, max=66009, avg=13020.19, stdev=6863.87 00:41:53.505 clat (usec): min=817, max=7764, avg=4229.78, stdev=773.01 00:41:53.505 lat (usec): min=834, max=7817, avg=4242.80, stdev=773.10 00:41:53.505 clat percentiles (usec): 00:41:53.505 | 1.00th=[ 2606], 5.00th=[ 3130], 10.00th=[ 3425], 20.00th=[ 3752], 00:41:53.505 | 30.00th=[ 3949], 40.00th=[ 4080], 50.00th=[ 4178], 60.00th=[ 4228], 00:41:53.505 | 70.00th=[ 4359], 80.00th=[ 4490], 90.00th=[ 5080], 95.00th=[ 6063], 00:41:53.505 | 99.00th=[ 6652], 99.50th=[ 6849], 99.90th=[ 7439], 99.95th=[ 7504], 00:41:53.505 | 99.99th=[ 7767] 00:41:53.505 bw ( KiB/s): min=13776, max=16080, per=25.47%, avg=14985.60, stdev=882.64, samples=10 00:41:53.505 iops : min= 1722, max= 2010, avg=1873.20, stdev=110.33, samples=10 00:41:53.505 lat (usec) : 1000=0.01% 00:41:53.506 lat (msec) : 2=0.10%, 4=34.03%, 10=65.86% 00:41:53.506 cpu : usr=94.40%, sys=5.10%, ctx=11, majf=0, minf=9 00:41:53.506 IO depths : 1=0.1%, 2=5.2%, 4=67.2%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:53.506 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:53.506 complete : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:53.506 issued rwts: total=9371,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:53.506 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:53.506 filename0: (groupid=0, jobs=1): err= 0: pid=1602574: Sat Nov 2 14:57:45 2024 00:41:53.506 read: IOPS=1814, BW=14.2MiB/s (14.9MB/s)(70.9MiB/5002msec) 00:41:53.506 slat (nsec): min=5497, max=58528, avg=15872.19, stdev=7716.81 00:41:53.506 clat (usec): min=861, max=8173, avg=4358.35, stdev=746.67 00:41:53.506 lat (usec): min=892, max=8194, avg=4374.23, stdev=745.57 00:41:53.506 clat percentiles (usec): 00:41:53.506 | 1.00th=[ 2868], 5.00th=[ 3458], 10.00th=[ 3687], 20.00th=[ 3884], 00:41:53.506 | 30.00th=[ 4047], 40.00th=[ 4146], 50.00th=[ 4228], 60.00th=[ 4293], 00:41:53.506 | 70.00th=[ 4424], 80.00th=[ 4555], 90.00th=[ 5473], 95.00th=[ 6259], 00:41:53.506 | 99.00th=[ 6718], 99.50th=[ 6849], 99.90th=[ 7439], 99.95th=[ 7898], 00:41:53.506 | 99.99th=[ 8160] 00:41:53.506 bw ( KiB/s): min=13632, max=15200, per=24.70%, avg=14535.11, stdev=561.83, samples=9 00:41:53.506 iops : min= 1704, max= 1900, avg=1816.89, stdev=70.23, samples=9 00:41:53.506 lat (usec) : 1000=0.01% 00:41:53.506 lat (msec) : 2=0.03%, 4=26.68%, 10=73.28% 00:41:53.506 cpu : usr=95.72%, sys=3.76%, ctx=10, majf=0, minf=9 00:41:53.506 IO depths : 1=0.1%, 2=4.9%, 4=67.7%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:53.506 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:53.506 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:53.506 issued rwts: total=9075,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:53.506 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:53.506 filename1: (groupid=0, jobs=1): err= 0: pid=1602575: Sat Nov 2 14:57:45 2024 00:41:53.506 read: IOPS=1829, BW=14.3MiB/s (15.0MB/s)(71.5MiB/5001msec) 00:41:53.506 slat (nsec): min=5099, max=63959, avg=13780.63, stdev=7388.81 00:41:53.506 clat (usec): min=1637, max=9375, avg=4329.73, stdev=647.97 00:41:53.506 lat (usec): min=1657, max=9390, avg=4343.51, stdev=647.83 00:41:53.506 clat percentiles (usec): 00:41:53.506 | 1.00th=[ 2900], 5.00th=[ 
3458], 10.00th=[ 3720], 20.00th=[ 3916], 00:41:53.506 | 30.00th=[ 4080], 40.00th=[ 4178], 50.00th=[ 4228], 60.00th=[ 4359], 00:41:53.506 | 70.00th=[ 4424], 80.00th=[ 4621], 90.00th=[ 5080], 95.00th=[ 5604], 00:41:53.506 | 99.00th=[ 6587], 99.50th=[ 6980], 99.90th=[ 7701], 99.95th=[ 9372], 00:41:53.506 | 99.99th=[ 9372] 00:41:53.506 bw ( KiB/s): min=13723, max=15760, per=25.01%, avg=14715.89, stdev=643.50, samples=9 00:41:53.506 iops : min= 1715, max= 1970, avg=1839.44, stdev=80.51, samples=9 00:41:53.506 lat (msec) : 2=0.05%, 4=24.89%, 10=75.05% 00:41:53.506 cpu : usr=94.48%, sys=4.98%, ctx=9, majf=0, minf=9 00:41:53.506 IO depths : 1=0.2%, 2=4.8%, 4=67.2%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:53.506 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:53.506 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:53.506 issued rwts: total=9148,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:53.506 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:53.506 filename1: (groupid=0, jobs=1): err= 0: pid=1602576: Sat Nov 2 14:57:45 2024 00:41:53.506 read: IOPS=1840, BW=14.4MiB/s (15.1MB/s)(71.9MiB/5003msec) 00:41:53.506 slat (nsec): min=5738, max=68485, avg=12944.88, stdev=7058.03 00:41:53.506 clat (usec): min=878, max=8089, avg=4304.55, stdev=628.24 00:41:53.506 lat (usec): min=895, max=8104, avg=4317.49, stdev=628.20 00:41:53.506 clat percentiles (usec): 00:41:53.506 | 1.00th=[ 2999], 5.00th=[ 3458], 10.00th=[ 3687], 20.00th=[ 3916], 00:41:53.506 | 30.00th=[ 4047], 40.00th=[ 4178], 50.00th=[ 4228], 60.00th=[ 4293], 00:41:53.506 | 70.00th=[ 4424], 80.00th=[ 4555], 90.00th=[ 5014], 95.00th=[ 5669], 00:41:53.506 | 99.00th=[ 6521], 99.50th=[ 6849], 99.90th=[ 7832], 99.95th=[ 8029], 00:41:53.506 | 99.99th=[ 8094] 00:41:53.506 bw ( KiB/s): min=14160, max=15168, per=25.02%, avg=14724.40, stdev=338.35, samples=10 00:41:53.506 iops : min= 1770, max= 1896, avg=1840.50, stdev=42.36, samples=10 00:41:53.506 lat (usec) : 1000=0.01% 00:41:53.506 lat (msec) : 2=0.01%, 4=25.52%, 10=74.46% 00:41:53.506 cpu : usr=94.26%, sys=5.20%, ctx=9, majf=0, minf=10 00:41:53.506 IO depths : 1=0.2%, 2=4.1%, 4=68.7%, 8=27.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:53.506 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:53.506 complete : 0=0.0%, 4=92.0%, 8=8.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:53.506 issued rwts: total=9209,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:53.506 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:53.506 00:41:53.506 Run status group 0 (all jobs): 00:41:53.506 READ: bw=57.5MiB/s (60.2MB/s), 14.2MiB/s-14.6MiB/s (14.9MB/s-15.3MB/s), io=288MiB (301MB), run=5001-5004msec 00:41:53.765 14:57:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:41:53.765 14:57:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:41:53.765 14:57:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:53.765 14:57:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:53.765 14:57:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:41:53.765 14:57:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:53.765 14:57:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:53.765 14:57:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:53.765 14:57:45 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:53.765 14:57:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:53.765 14:57:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:53.765 14:57:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:53.765 14:57:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:53.765 14:57:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:53.765 14:57:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:41:53.765 14:57:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:41:53.765 14:57:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:53.765 14:57:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:53.765 14:57:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:53.765 14:57:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:53.765 14:57:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:41:53.765 14:57:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:53.765 14:57:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:53.765 14:57:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:53.765 00:41:53.765 real 0m24.510s 00:41:53.765 user 4m35.131s 00:41:53.765 sys 0m6.269s 00:41:53.765 14:57:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:53.765 14:57:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:53.765 ************************************ 00:41:53.765 END TEST fio_dif_rand_params 00:41:53.765 ************************************ 00:41:53.765 14:57:45 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:41:53.765 14:57:45 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:41:53.765 14:57:45 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:53.765 14:57:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:53.765 ************************************ 00:41:53.765 START TEST fio_dif_digest 00:41:53.765 ************************************ 00:41:53.765 14:57:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:41:53.765 14:57:45 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:41:53.765 14:57:45 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:41:53.765 14:57:45 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:41:53.765 14:57:45 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:41:53.765 14:57:45 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:41:53.765 14:57:45 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:41:53.765 14:57:45 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:41:53.765 14:57:45 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:41:53.765 14:57:45 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:41:53.765 14:57:45 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:41:53.765 14:57:45 
nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:41:53.765 14:57:45 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:41:53.765 14:57:45 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:41:53.765 14:57:45 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:41:53.765 14:57:45 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:41:53.765 14:57:45 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:41:53.765 14:57:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:53.765 14:57:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:53.765 bdev_null0 00:41:53.765 14:57:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:53.765 14:57:45 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:53.765 14:57:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:53.765 14:57:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:53.765 14:57:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:53.765 14:57:45 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:53.765 14:57:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:53.765 14:57:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:53.765 14:57:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:53.765 14:57:45 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:53.765 14:57:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:53.765 14:57:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:53.765 [2024-11-02 14:57:45.738932] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:53.765 14:57:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:53.765 14:57:45 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:41:53.765 14:57:45 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:41:53.765 14:57:45 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:41:53.765 14:57:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # config=() 00:41:53.765 14:57:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # local subsystem config 00:41:53.765 14:57:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:41:53.765 14:57:45 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:53.765 14:57:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:41:53.765 { 00:41:53.765 "params": { 00:41:53.765 "name": "Nvme$subsystem", 00:41:53.765 "trtype": "$TEST_TRANSPORT", 00:41:53.765 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:53.765 "adrfam": "ipv4", 00:41:53.765 "trsvcid": "$NVMF_PORT", 00:41:53.765 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:53.765 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:53.765 "hdgst": ${hdgst:-false}, 00:41:53.765 "ddgst": 
${ddgst:-false} 00:41:53.765 }, 00:41:53.765 "method": "bdev_nvme_attach_controller" 00:41:53.765 } 00:41:53.765 EOF 00:41:53.765 )") 00:41:53.765 14:57:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:53.765 14:57:45 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:41:53.765 14:57:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:41:53.765 14:57:45 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:41:53.765 14:57:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:53.765 14:57:45 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:41:53.765 14:57:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:41:53.765 14:57:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:53.765 14:57:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:41:53.765 14:57:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:41:53.765 14:57:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@578 -- # cat 00:41:53.765 14:57:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:53.765 14:57:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:53.765 14:57:45 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:41:53.765 14:57:45 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:41:53.765 14:57:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:41:53.765 14:57:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:53.765 14:57:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # jq . 
00:41:53.765 14:57:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@581 -- # IFS=, 00:41:53.765 14:57:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:41:53.766 "params": { 00:41:53.766 "name": "Nvme0", 00:41:53.766 "trtype": "tcp", 00:41:53.766 "traddr": "10.0.0.2", 00:41:53.766 "adrfam": "ipv4", 00:41:53.766 "trsvcid": "4420", 00:41:53.766 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:53.766 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:53.766 "hdgst": true, 00:41:53.766 "ddgst": true 00:41:53.766 }, 00:41:53.766 "method": "bdev_nvme_attach_controller" 00:41:53.766 }' 00:41:53.766 14:57:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:53.766 14:57:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:53.766 14:57:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:53.766 14:57:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:53.766 14:57:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:41:53.766 14:57:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:53.766 14:57:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:53.766 14:57:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:53.766 14:57:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:53.766 14:57:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:54.024 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:41:54.024 ... 
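The digest variant above differs from the random-params run in two ways that are both visible in the trace: the backing null bdev is created with 16 bytes of metadata and DIF type 3 (bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3), and the initiator enables NVMe/TCP header and data digests ("hdgst": true, "ddgst": true in the attach-controller params). A condensed sketch of the target-side setup, replayed as plain rpc.py calls, is given below; the rpc.py path is an assumption, and the transport line is normally issued once at application start rather than per test.

# Approximate replay of the target-side RPC sequence seen in the trace above.
RPC=/path/to/spdk/scripts/rpc.py     # assumed location of the SPDK RPC client
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# On the fio side, "hdgst": true and "ddgst": true in bdev_nvme_attach_controller
# make every TCP PDU carry header and data digests, which is what this test exercises.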
00:41:54.024 fio-3.35 00:41:54.024 Starting 3 threads 00:42:06.227 00:42:06.227 filename0: (groupid=0, jobs=1): err= 0: pid=1603326: Sat Nov 2 14:57:56 2024 00:42:06.227 read: IOPS=212, BW=26.5MiB/s (27.8MB/s)(266MiB/10047msec) 00:42:06.227 slat (usec): min=7, max=223, avg=18.95, stdev= 6.28 00:42:06.227 clat (usec): min=10993, max=53433, avg=14106.61, stdev=1541.82 00:42:06.227 lat (usec): min=11012, max=53453, avg=14125.56, stdev=1541.84 00:42:06.227 clat percentiles (usec): 00:42:06.227 | 1.00th=[11600], 5.00th=[12387], 10.00th=[12780], 20.00th=[13173], 00:42:06.227 | 30.00th=[13566], 40.00th=[13829], 50.00th=[14091], 60.00th=[14222], 00:42:06.227 | 70.00th=[14615], 80.00th=[14877], 90.00th=[15401], 95.00th=[15795], 00:42:06.227 | 99.00th=[16712], 99.50th=[17433], 99.90th=[19268], 99.95th=[49021], 00:42:06.227 | 99.99th=[53216] 00:42:06.227 bw ( KiB/s): min=26112, max=28416, per=33.83%, avg=27235.65, stdev=601.85, samples=20 00:42:06.227 iops : min= 204, max= 222, avg=212.75, stdev= 4.71, samples=20 00:42:06.227 lat (msec) : 20=99.91%, 50=0.05%, 100=0.05% 00:42:06.227 cpu : usr=94.06%, sys=5.48%, ctx=20, majf=0, minf=159 00:42:06.227 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:06.227 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:06.227 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:06.227 issued rwts: total=2130,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:06.227 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:06.227 filename0: (groupid=0, jobs=1): err= 0: pid=1603327: Sat Nov 2 14:57:56 2024 00:42:06.227 read: IOPS=207, BW=25.9MiB/s (27.2MB/s)(260MiB/10045msec) 00:42:06.228 slat (nsec): min=8347, max=80104, avg=20334.33, stdev=5968.64 00:42:06.228 clat (usec): min=10311, max=56445, avg=14420.58, stdev=1546.36 00:42:06.228 lat (usec): min=10327, max=56462, avg=14440.91, stdev=1546.26 00:42:06.228 clat percentiles (usec): 00:42:06.228 | 1.00th=[11994], 5.00th=[12780], 10.00th=[13173], 20.00th=[13566], 00:42:06.228 | 30.00th=[13829], 40.00th=[14091], 50.00th=[14353], 60.00th=[14615], 00:42:06.228 | 70.00th=[14877], 80.00th=[15139], 90.00th=[15664], 95.00th=[16057], 00:42:06.228 | 99.00th=[17171], 99.50th=[17695], 99.90th=[20317], 99.95th=[46400], 00:42:06.228 | 99.99th=[56361] 00:42:06.228 bw ( KiB/s): min=25344, max=27392, per=33.09%, avg=26636.80, stdev=458.51, samples=20 00:42:06.228 iops : min= 198, max= 214, avg=208.10, stdev= 3.58, samples=20 00:42:06.228 lat (msec) : 20=99.86%, 50=0.10%, 100=0.05% 00:42:06.228 cpu : usr=90.16%, sys=6.89%, ctx=690, majf=0, minf=125 00:42:06.228 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:06.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:06.228 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:06.228 issued rwts: total=2083,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:06.228 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:06.228 filename0: (groupid=0, jobs=1): err= 0: pid=1603328: Sat Nov 2 14:57:56 2024 00:42:06.228 read: IOPS=209, BW=26.2MiB/s (27.5MB/s)(263MiB/10045msec) 00:42:06.228 slat (nsec): min=8119, max=70959, avg=18306.60, stdev=4175.92 00:42:06.228 clat (usec): min=11306, max=51667, avg=14265.17, stdev=1496.23 00:42:06.228 lat (usec): min=11324, max=51687, avg=14283.48, stdev=1496.22 00:42:06.228 clat percentiles (usec): 00:42:06.228 | 1.00th=[11994], 5.00th=[12649], 10.00th=[12911], 20.00th=[13304], 00:42:06.228 | 
30.00th=[13698], 40.00th=[13960], 50.00th=[14222], 60.00th=[14484], 00:42:06.228 | 70.00th=[14746], 80.00th=[15008], 90.00th=[15533], 95.00th=[15926], 00:42:06.228 | 99.00th=[16909], 99.50th=[17171], 99.90th=[19792], 99.95th=[47449], 00:42:06.228 | 99.99th=[51643] 00:42:06.228 bw ( KiB/s): min=26112, max=27904, per=33.45%, avg=26931.20, stdev=474.23, samples=20 00:42:06.228 iops : min= 204, max= 218, avg=210.40, stdev= 3.70, samples=20 00:42:06.228 lat (msec) : 20=99.91%, 50=0.05%, 100=0.05% 00:42:06.228 cpu : usr=95.05%, sys=4.38%, ctx=36, majf=0, minf=163 00:42:06.228 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:06.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:06.228 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:06.228 issued rwts: total=2106,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:06.228 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:06.228 00:42:06.228 Run status group 0 (all jobs): 00:42:06.228 READ: bw=78.6MiB/s (82.4MB/s), 25.9MiB/s-26.5MiB/s (27.2MB/s-27.8MB/s), io=790MiB (828MB), run=10045-10047msec 00:42:06.228 14:57:56 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:42:06.228 14:57:56 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:42:06.228 14:57:56 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:42:06.228 14:57:56 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:06.228 14:57:56 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:42:06.228 14:57:56 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:06.228 14:57:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:06.228 14:57:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:06.228 14:57:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:06.228 14:57:56 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:06.228 14:57:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:06.228 14:57:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:06.228 14:57:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:06.228 00:42:06.228 real 0m11.039s 00:42:06.228 user 0m29.130s 00:42:06.228 sys 0m1.955s 00:42:06.228 14:57:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:06.228 14:57:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:06.228 ************************************ 00:42:06.228 END TEST fio_dif_digest 00:42:06.228 ************************************ 00:42:06.228 14:57:56 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:42:06.228 14:57:56 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:42:06.228 14:57:56 nvmf_dif -- nvmf/common.sh@512 -- # nvmfcleanup 00:42:06.228 14:57:56 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:42:06.228 14:57:56 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:06.228 14:57:56 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:42:06.228 14:57:56 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:06.228 14:57:56 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:06.228 rmmod nvme_tcp 00:42:06.228 rmmod nvme_fabrics 00:42:06.228 rmmod nvme_keyring 00:42:06.228 14:57:56 nvmf_dif -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:42:06.228 14:57:56 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:42:06.228 14:57:56 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:42:06.228 14:57:56 nvmf_dif -- nvmf/common.sh@513 -- # '[' -n 1596542 ']' 00:42:06.228 14:57:56 nvmf_dif -- nvmf/common.sh@514 -- # killprocess 1596542 00:42:06.228 14:57:56 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 1596542 ']' 00:42:06.228 14:57:56 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 1596542 00:42:06.228 14:57:56 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:42:06.228 14:57:56 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:06.228 14:57:56 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1596542 00:42:06.228 14:57:56 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:42:06.228 14:57:56 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:42:06.228 14:57:56 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1596542' 00:42:06.228 killing process with pid 1596542 00:42:06.228 14:57:56 nvmf_dif -- common/autotest_common.sh@969 -- # kill 1596542 00:42:06.228 14:57:56 nvmf_dif -- common/autotest_common.sh@974 -- # wait 1596542 00:42:06.228 14:57:57 nvmf_dif -- nvmf/common.sh@516 -- # '[' iso == iso ']' 00:42:06.228 14:57:57 nvmf_dif -- nvmf/common.sh@517 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:42:06.228 Waiting for block devices as requested 00:42:06.228 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:42:06.487 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:06.487 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:06.746 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:06.746 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:06.746 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:06.746 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:42:07.005 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:07.005 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:42:07.005 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:07.005 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:07.005 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:07.262 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:07.262 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:07.262 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:42:07.262 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:07.521 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:42:07.521 14:57:59 nvmf_dif -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:42:07.521 14:57:59 nvmf_dif -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:42:07.521 14:57:59 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:42:07.521 14:57:59 nvmf_dif -- nvmf/common.sh@787 -- # iptables-save 00:42:07.521 14:57:59 nvmf_dif -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:42:07.521 14:57:59 nvmf_dif -- nvmf/common.sh@787 -- # iptables-restore 00:42:07.521 14:57:59 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:07.521 14:57:59 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:07.521 14:57:59 nvmf_dif -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:07.521 14:57:59 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:07.521 14:57:59 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:10.053 14:58:01 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:10.053 00:42:10.053 real 1m7.251s 
00:42:10.053 user 6m33.003s 00:42:10.053 sys 0m17.263s 00:42:10.053 14:58:01 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:10.053 14:58:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:10.053 ************************************ 00:42:10.053 END TEST nvmf_dif 00:42:10.053 ************************************ 00:42:10.053 14:58:01 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:42:10.053 14:58:01 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:42:10.053 14:58:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:10.053 14:58:01 -- common/autotest_common.sh@10 -- # set +x 00:42:10.053 ************************************ 00:42:10.053 START TEST nvmf_abort_qd_sizes 00:42:10.053 ************************************ 00:42:10.053 14:58:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:42:10.053 * Looking for test storage... 00:42:10.053 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:10.053 14:58:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:42:10.053 14:58:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lcov --version 00:42:10.053 14:58:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:42:10.053 14:58:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:42:10.053 14:58:01 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:10.053 14:58:01 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:10.053 14:58:01 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:10.053 14:58:01 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:42:10.053 14:58:01 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:42:10.053 14:58:01 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:42:10.053 14:58:01 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:42:10.053 14:58:01 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:42:10.053 14:58:01 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:42:10.053 14:58:01 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:42:10.053 14:58:01 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:10.053 14:58:01 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:42:10.053 14:58:01 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:42:10.053 14:58:01 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:10.053 14:58:01 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:10.053 14:58:01 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:42:10.053 14:58:01 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:42:10.053 14:58:01 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:10.053 14:58:01 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:42:10.053 14:58:01 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:42:10.053 14:58:01 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:42:10.053 14:58:01 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:42:10.053 14:58:01 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:10.053 14:58:01 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:42:10.053 14:58:01 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:42:10.053 14:58:01 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:10.053 14:58:01 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:10.053 14:58:01 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:42:10.053 14:58:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:10.053 14:58:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:42:10.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:10.053 --rc genhtml_branch_coverage=1 00:42:10.053 --rc genhtml_function_coverage=1 00:42:10.053 --rc genhtml_legend=1 00:42:10.053 --rc geninfo_all_blocks=1 00:42:10.053 --rc geninfo_unexecuted_blocks=1 00:42:10.053 00:42:10.053 ' 00:42:10.053 14:58:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:42:10.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:10.053 --rc genhtml_branch_coverage=1 00:42:10.053 --rc genhtml_function_coverage=1 00:42:10.053 --rc genhtml_legend=1 00:42:10.053 --rc geninfo_all_blocks=1 00:42:10.053 --rc geninfo_unexecuted_blocks=1 00:42:10.053 00:42:10.053 ' 00:42:10.053 14:58:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:42:10.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:10.053 --rc genhtml_branch_coverage=1 00:42:10.053 --rc genhtml_function_coverage=1 00:42:10.053 --rc genhtml_legend=1 00:42:10.053 --rc geninfo_all_blocks=1 00:42:10.053 --rc geninfo_unexecuted_blocks=1 00:42:10.053 00:42:10.053 ' 00:42:10.053 14:58:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:42:10.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:10.053 --rc genhtml_branch_coverage=1 00:42:10.053 --rc genhtml_function_coverage=1 00:42:10.053 --rc genhtml_legend=1 00:42:10.053 --rc geninfo_all_blocks=1 00:42:10.053 --rc geninfo_unexecuted_blocks=1 00:42:10.053 00:42:10.053 ' 00:42:10.053 14:58:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:10.053 14:58:01 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:42:10.053 14:58:01 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:10.053 14:58:01 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:10.053 14:58:01 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:10.053 14:58:01 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:10.053 14:58:01 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:42:10.053 14:58:01 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:10.053 14:58:01 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:10.053 14:58:01 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:10.053 14:58:01 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:10.053 14:58:01 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:10.053 14:58:01 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:10.053 14:58:01 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:42:10.053 14:58:01 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:10.053 14:58:01 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:10.053 14:58:01 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:10.053 14:58:01 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:10.053 14:58:01 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:10.053 14:58:01 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:42:10.053 14:58:01 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:10.053 14:58:01 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:10.053 14:58:01 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:10.053 14:58:01 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:10.053 14:58:01 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:10.053 14:58:01 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:10.053 14:58:01 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:42:10.054 14:58:01 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:10.054 14:58:01 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:42:10.054 14:58:01 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:10.054 14:58:01 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:10.054 14:58:01 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:10.054 14:58:01 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:10.054 14:58:01 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:10.054 14:58:01 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:10.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:10.054 14:58:01 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:10.054 14:58:01 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:10.054 14:58:01 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:10.054 14:58:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:42:10.054 14:58:01 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:42:10.054 14:58:01 nvmf_abort_qd_sizes -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:10.054 14:58:01 nvmf_abort_qd_sizes -- nvmf/common.sh@472 -- # prepare_net_devs 00:42:10.054 14:58:01 nvmf_abort_qd_sizes -- nvmf/common.sh@434 -- # local -g is_hw=no 00:42:10.054 14:58:01 nvmf_abort_qd_sizes -- nvmf/common.sh@436 -- # remove_spdk_ns 00:42:10.054 14:58:01 nvmf_abort_qd_sizes -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:10.054 14:58:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:10.054 14:58:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:10.054 14:58:01 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:42:10.054 14:58:01 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:42:10.054 14:58:01 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:42:10.054 14:58:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:11.956 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:11.956 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:42:11.956 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:11.956 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:42:11.957 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:42:11.957 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- 
# [[ tcp == rdma ]] 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ up == up ]] 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:42:11.957 Found net devices under 0000:0a:00.0: cvl_0_0 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ up == up ]] 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:42:11.957 Found net devices under 0000:0a:00.1: cvl_0_1 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # is_hw=yes 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- 
nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:11.957 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:11.957 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:42:11.957 00:42:11.957 --- 10.0.0.2 ping statistics --- 00:42:11.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:11.957 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:11.957 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:42:11.957 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:42:11.957 00:42:11.957 --- 10.0.0.1 ping statistics --- 00:42:11.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:11.957 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # return 0 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # '[' iso == iso ']' 00:42:11.957 14:58:03 nvmf_abort_qd_sizes -- nvmf/common.sh@475 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:42:12.891 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:42:12.891 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:42:12.891 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:42:13.149 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:42:13.149 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:42:13.149 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:42:13.149 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:42:13.149 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:42:13.149 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:42:13.149 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:42:13.149 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:42:13.149 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:42:13.149 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:42:13.149 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:42:13.149 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:42:13.149 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:42:14.082 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:42:14.082 14:58:06 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:14.082 14:58:06 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:42:14.082 14:58:06 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:42:14.082 14:58:06 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:14.082 14:58:06 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:42:14.082 14:58:06 nvmf_abort_qd_sizes -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:42:14.082 14:58:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:42:14.082 14:58:06 nvmf_abort_qd_sizes -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:42:14.082 14:58:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:14.082 14:58:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:14.082 14:58:06 nvmf_abort_qd_sizes -- nvmf/common.sh@505 -- # nvmfpid=1608116 00:42:14.082 14:58:06 nvmf_abort_qd_sizes -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:42:14.082 14:58:06 nvmf_abort_qd_sizes -- nvmf/common.sh@506 -- # waitforlisten 1608116 00:42:14.082 14:58:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 1608116 ']' 00:42:14.082 14:58:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:14.082 14:58:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:14.082 14:58:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:42:14.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:14.083 14:58:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:14.083 14:58:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:14.341 [2024-11-02 14:58:06.177395] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:42:14.341 [2024-11-02 14:58:06.177473] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:14.341 [2024-11-02 14:58:06.244403] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:14.341 [2024-11-02 14:58:06.333015] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:14.341 [2024-11-02 14:58:06.333083] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:14.341 [2024-11-02 14:58:06.333096] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:14.341 [2024-11-02 14:58:06.333122] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:14.341 [2024-11-02 14:58:06.333132] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:14.341 [2024-11-02 14:58:06.333183] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:42:14.341 [2024-11-02 14:58:06.333281] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:42:14.341 [2024-11-02 14:58:06.333310] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:42:14.341 [2024-11-02 14:58:06.333314] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:42:14.599 14:58:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:14.599 14:58:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:42:14.599 14:58:06 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:42:14.599 14:58:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:14.599 14:58:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:14.599 14:58:06 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:14.599 14:58:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:42:14.599 14:58:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:42:14.599 14:58:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:42:14.599 14:58:06 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:42:14.599 14:58:06 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:42:14.599 14:58:06 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:88:00.0 ]] 00:42:14.599 14:58:06 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:42:14.599 14:58:06 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:42:14.599 14:58:06 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:42:14.599 14:58:06 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:42:14.599 
14:58:06 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:42:14.599 14:58:06 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:42:14.599 14:58:06 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:42:14.599 14:58:06 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:88:00.0 00:42:14.599 14:58:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:42:14.599 14:58:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:42:14.599 14:58:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:42:14.599 14:58:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:42:14.599 14:58:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:14.599 14:58:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:14.599 ************************************ 00:42:14.599 START TEST spdk_target_abort 00:42:14.599 ************************************ 00:42:14.599 14:58:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:42:14.599 14:58:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:42:14.599 14:58:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:42:14.599 14:58:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:14.599 14:58:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:17.878 spdk_targetn1 00:42:17.878 14:58:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:17.878 14:58:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:17.878 14:58:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:17.878 14:58:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:17.878 [2024-11-02 14:58:09.367052] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:17.878 14:58:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:17.878 14:58:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:42:17.878 14:58:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:17.878 14:58:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:17.878 14:58:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:17.878 14:58:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:42:17.878 14:58:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:17.878 14:58:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:17.878 14:58:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:17.878 14:58:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:42:17.878 14:58:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:17.878 14:58:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:17.878 [2024-11-02 14:58:09.399370] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:17.878 14:58:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:17.878 14:58:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:42:17.878 14:58:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:42:17.878 14:58:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:42:17.878 14:58:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:42:17.878 14:58:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:42:17.878 14:58:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:42:17.878 14:58:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:42:17.878 14:58:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:42:17.878 14:58:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:42:17.878 14:58:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:17.878 14:58:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:42:17.878 14:58:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:17.878 14:58:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:42:17.878 14:58:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:17.878 14:58:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:42:17.878 14:58:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:17.878 14:58:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:42:17.878 14:58:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:17.878 14:58:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:17.878 14:58:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:17.878 14:58:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:21.154 Initializing NVMe Controllers 00:42:21.154 Attached to NVMe over Fabrics controller at 
10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:42:21.154 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:21.154 Initialization complete. Launching workers. 00:42:21.154 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10565, failed: 0 00:42:21.154 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1235, failed to submit 9330 00:42:21.154 success 758, unsuccessful 477, failed 0 00:42:21.154 14:58:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:21.154 14:58:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:24.431 Initializing NVMe Controllers 00:42:24.431 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:42:24.431 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:24.431 Initialization complete. Launching workers. 00:42:24.431 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8558, failed: 0 00:42:24.431 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1269, failed to submit 7289 00:42:24.431 success 319, unsuccessful 950, failed 0 00:42:24.431 14:58:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:24.431 14:58:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:27.709 Initializing NVMe Controllers 00:42:27.709 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:42:27.709 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:27.709 Initialization complete. Launching workers. 
00:42:27.709 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31193, failed: 0 00:42:27.709 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2798, failed to submit 28395 00:42:27.709 success 535, unsuccessful 2263, failed 0 00:42:27.709 14:58:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:42:27.709 14:58:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:27.709 14:58:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:27.709 14:58:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:27.709 14:58:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:42:27.709 14:58:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:27.709 14:58:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:28.640 14:58:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:28.640 14:58:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1608116 00:42:28.640 14:58:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 1608116 ']' 00:42:28.640 14:58:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 1608116 00:42:28.640 14:58:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:42:28.640 14:58:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:28.640 14:58:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1608116 00:42:28.640 14:58:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:42:28.640 14:58:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:42:28.640 14:58:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1608116' 00:42:28.640 killing process with pid 1608116 00:42:28.640 14:58:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 1608116 00:42:28.640 14:58:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 1608116 00:42:28.899 00:42:28.899 real 0m14.265s 00:42:28.899 user 0m53.034s 00:42:28.899 sys 0m2.890s 00:42:28.899 14:58:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:28.899 14:58:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:28.899 ************************************ 00:42:28.899 END TEST spdk_target_abort 00:42:28.899 ************************************ 00:42:28.899 14:58:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:42:28.899 14:58:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:42:28.899 14:58:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:28.899 14:58:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:28.899 ************************************ 00:42:28.899 START TEST kernel_target_abort 00:42:28.899 
************************************ 00:42:28.899 14:58:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:42:28.899 14:58:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:42:28.899 14:58:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@765 -- # local ip 00:42:28.899 14:58:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@766 -- # ip_candidates=() 00:42:28.899 14:58:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@766 -- # local -A ip_candidates 00:42:28.899 14:58:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:28.899 14:58:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:28.899 14:58:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:42:28.899 14:58:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:28.899 14:58:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:42:28.899 14:58:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:42:28.899 14:58:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:42:28.899 14:58:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:42:28.899 14:58:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:42:28.899 14:58:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:42:28.899 14:58:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:28.899 14:58:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:42:28.899 14:58:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:42:28.899 14:58:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # local block nvme 00:42:28.899 14:58:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # [[ ! 
-e /sys/module/nvmet ]] 00:42:28.899 14:58:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@666 -- # modprobe nvmet 00:42:28.899 14:58:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:42:28.899 14:58:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:42:29.833 Waiting for block devices as requested 00:42:29.834 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:42:30.092 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:30.092 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:30.351 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:30.351 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:30.351 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:30.351 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:42:30.609 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:30.609 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:42:30.609 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:30.609 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:30.867 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:30.867 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:30.867 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:30.868 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:42:31.126 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:31.126 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:42:31.126 14:58:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:42:31.126 14:58:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:42:31.126 14:58:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:42:31.126 14:58:23 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:42:31.126 14:58:23 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:42:31.385 14:58:23 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:42:31.386 14:58:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:42:31.386 14:58:23 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:42:31.386 14:58:23 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:42:31.386 No valid GPT data, bailing 00:42:31.386 14:58:23 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:42:31.386 14:58:23 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:42:31.386 14:58:23 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:42:31.386 14:58:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:42:31.386 14:58:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # [[ -b /dev/nvme0n1 ]] 00:42:31.386 14:58:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:31.386 14:58:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@683 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:42:31.386 14:58:23 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:42:31.386 14:58:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:42:31.386 14:58:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # echo 1 00:42:31.386 14:58:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@692 -- # echo /dev/nvme0n1 00:42:31.386 14:58:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo 1 00:42:31.386 14:58:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 10.0.0.1 00:42:31.386 14:58:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo tcp 00:42:31.386 14:58:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 4420 00:42:31.386 14:58:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # echo ipv4 00:42:31.386 14:58:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:42:31.386 14:58:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:42:31.386 00:42:31.386 Discovery Log Number of Records 2, Generation counter 2 00:42:31.386 =====Discovery Log Entry 0====== 00:42:31.386 trtype: tcp 00:42:31.386 adrfam: ipv4 00:42:31.386 subtype: current discovery subsystem 00:42:31.386 treq: not specified, sq flow control disable supported 00:42:31.386 portid: 1 00:42:31.386 trsvcid: 4420 00:42:31.386 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:42:31.386 traddr: 10.0.0.1 00:42:31.386 eflags: none 00:42:31.386 sectype: none 00:42:31.386 =====Discovery Log Entry 1====== 00:42:31.386 trtype: tcp 00:42:31.386 adrfam: ipv4 00:42:31.386 subtype: nvme subsystem 00:42:31.386 treq: not specified, sq flow control disable supported 00:42:31.386 portid: 1 00:42:31.386 trsvcid: 4420 00:42:31.386 subnqn: nqn.2016-06.io.spdk:testnqn 00:42:31.386 traddr: 10.0.0.1 00:42:31.386 eflags: none 00:42:31.386 sectype: none 00:42:31.386 14:58:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:42:31.386 14:58:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:42:31.386 14:58:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:42:31.386 14:58:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:42:31.386 14:58:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:42:31.386 14:58:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:42:31.386 14:58:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:42:31.386 14:58:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:42:31.386 14:58:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:42:31.386 14:58:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:31.386 14:58:23 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:42:31.386 14:58:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:31.386 14:58:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:42:31.386 14:58:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:31.386 14:58:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:42:31.386 14:58:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:31.386 14:58:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:42:31.386 14:58:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:31.386 14:58:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:31.386 14:58:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:31.386 14:58:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:34.665 Initializing NVMe Controllers 00:42:34.665 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:34.665 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:34.665 Initialization complete. Launching workers. 00:42:34.665 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 33214, failed: 0 00:42:34.665 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 33214, failed to submit 0 00:42:34.665 success 0, unsuccessful 33214, failed 0 00:42:34.665 14:58:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:34.665 14:58:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:37.942 Initializing NVMe Controllers 00:42:37.942 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:37.942 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:37.942 Initialization complete. Launching workers. 
00:42:37.942 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 64357, failed: 0 00:42:37.942 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 16226, failed to submit 48131 00:42:37.942 success 0, unsuccessful 16226, failed 0 00:42:37.942 14:58:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:37.942 14:58:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:41.222 Initializing NVMe Controllers 00:42:41.223 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:41.223 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:41.223 Initialization complete. Launching workers. 00:42:41.223 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 62508, failed: 0 00:42:41.223 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 15626, failed to submit 46882 00:42:41.223 success 0, unsuccessful 15626, failed 0 00:42:41.223 14:58:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:42:41.223 14:58:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:42:41.223 14:58:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@710 -- # echo 0 00:42:41.223 14:58:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:41.223 14:58:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:42:41.223 14:58:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:42:41.223 14:58:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:41.223 14:58:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:42:41.223 14:58:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet 00:42:41.223 14:58:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@722 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:42:41.790 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:42:41.790 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:42:42.050 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:42:42.050 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:42:42.050 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:42:42.050 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:42:42.050 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:42:42.050 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:42:42.050 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:42:42.050 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:42:42.050 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:42:42.050 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:42:42.050 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:42:42.050 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:42:42.050 0000:80:04.1 (8086 0e21): ioatdma -> 
vfio-pci 00:42:42.050 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:42:42.986 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:42:42.986 00:42:42.986 real 0m14.187s 00:42:42.986 user 0m5.266s 00:42:42.986 sys 0m3.350s 00:42:42.986 14:58:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:42.986 14:58:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:42.986 ************************************ 00:42:42.986 END TEST kernel_target_abort 00:42:42.986 ************************************ 00:42:43.244 14:58:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:42:43.244 14:58:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:42:43.244 14:58:35 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # nvmfcleanup 00:42:43.244 14:58:35 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:42:43.244 14:58:35 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:43.244 14:58:35 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:42:43.244 14:58:35 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:43.244 14:58:35 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:43.244 rmmod nvme_tcp 00:42:43.244 rmmod nvme_fabrics 00:42:43.244 rmmod nvme_keyring 00:42:43.244 14:58:35 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:43.244 14:58:35 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:42:43.244 14:58:35 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:42:43.244 14:58:35 nvmf_abort_qd_sizes -- nvmf/common.sh@513 -- # '[' -n 1608116 ']' 00:42:43.244 14:58:35 nvmf_abort_qd_sizes -- nvmf/common.sh@514 -- # killprocess 1608116 00:42:43.244 14:58:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 1608116 ']' 00:42:43.244 14:58:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 1608116 00:42:43.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1608116) - No such process 00:42:43.244 14:58:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 1608116 is not found' 00:42:43.244 Process with pid 1608116 is not found 00:42:43.244 14:58:35 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # '[' iso == iso ']' 00:42:43.244 14:58:35 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:42:44.180 Waiting for block devices as requested 00:42:44.180 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:42:44.438 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:44.438 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:44.700 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:44.700 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:44.700 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:44.700 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:42:44.970 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:44.970 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:42:44.970 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:44.970 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:45.250 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:45.250 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:45.250 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:45.250 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:42:45.524 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:45.524 0000:80:04.0 
(8086 0e20): vfio-pci -> ioatdma 00:42:45.524 14:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:42:45.524 14:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:42:45.524 14:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:42:45.524 14:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # iptables-save 00:42:45.524 14:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:42:45.524 14:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # iptables-restore 00:42:45.524 14:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:45.524 14:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:45.524 14:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:45.524 14:58:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:45.524 14:58:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:48.066 14:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:48.066 00:42:48.066 real 0m37.983s 00:42:48.066 user 1m0.493s 00:42:48.066 sys 0m9.697s 00:42:48.066 14:58:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:48.066 14:58:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:48.066 ************************************ 00:42:48.066 END TEST nvmf_abort_qd_sizes 00:42:48.066 ************************************ 00:42:48.066 14:58:39 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:42:48.066 14:58:39 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:42:48.066 14:58:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:48.066 14:58:39 -- common/autotest_common.sh@10 -- # set +x 00:42:48.066 ************************************ 00:42:48.066 START TEST keyring_file 00:42:48.066 ************************************ 00:42:48.066 14:58:39 keyring_file -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:42:48.066 * Looking for test storage... 
00:42:48.066 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:42:48.066 14:58:39 keyring_file -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:42:48.066 14:58:39 keyring_file -- common/autotest_common.sh@1681 -- # lcov --version 00:42:48.066 14:58:39 keyring_file -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:42:48.066 14:58:39 keyring_file -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:42:48.066 14:58:39 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:48.066 14:58:39 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:48.066 14:58:39 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:48.066 14:58:39 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:42:48.066 14:58:39 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:42:48.066 14:58:39 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:42:48.066 14:58:39 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:42:48.066 14:58:39 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:42:48.066 14:58:39 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:42:48.066 14:58:39 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:42:48.066 14:58:39 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:48.066 14:58:39 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:42:48.066 14:58:39 keyring_file -- scripts/common.sh@345 -- # : 1 00:42:48.066 14:58:39 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:48.066 14:58:39 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:48.066 14:58:39 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:42:48.066 14:58:39 keyring_file -- scripts/common.sh@353 -- # local d=1 00:42:48.066 14:58:39 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:48.066 14:58:39 keyring_file -- scripts/common.sh@355 -- # echo 1 00:42:48.066 14:58:39 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:42:48.066 14:58:39 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:42:48.066 14:58:39 keyring_file -- scripts/common.sh@353 -- # local d=2 00:42:48.066 14:58:39 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:48.066 14:58:39 keyring_file -- scripts/common.sh@355 -- # echo 2 00:42:48.066 14:58:39 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:42:48.066 14:58:39 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:48.066 14:58:39 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:48.066 14:58:39 keyring_file -- scripts/common.sh@368 -- # return 0 00:42:48.066 14:58:39 keyring_file -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:48.066 14:58:39 keyring_file -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:42:48.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:48.066 --rc genhtml_branch_coverage=1 00:42:48.066 --rc genhtml_function_coverage=1 00:42:48.066 --rc genhtml_legend=1 00:42:48.066 --rc geninfo_all_blocks=1 00:42:48.066 --rc geninfo_unexecuted_blocks=1 00:42:48.066 00:42:48.066 ' 00:42:48.066 14:58:39 keyring_file -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:42:48.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:48.066 --rc genhtml_branch_coverage=1 00:42:48.066 --rc genhtml_function_coverage=1 00:42:48.066 --rc genhtml_legend=1 00:42:48.066 --rc geninfo_all_blocks=1 
00:42:48.066 --rc geninfo_unexecuted_blocks=1 00:42:48.066 00:42:48.066 ' 00:42:48.066 14:58:39 keyring_file -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:42:48.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:48.066 --rc genhtml_branch_coverage=1 00:42:48.066 --rc genhtml_function_coverage=1 00:42:48.066 --rc genhtml_legend=1 00:42:48.066 --rc geninfo_all_blocks=1 00:42:48.066 --rc geninfo_unexecuted_blocks=1 00:42:48.066 00:42:48.066 ' 00:42:48.066 14:58:39 keyring_file -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:42:48.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:48.066 --rc genhtml_branch_coverage=1 00:42:48.066 --rc genhtml_function_coverage=1 00:42:48.066 --rc genhtml_legend=1 00:42:48.066 --rc geninfo_all_blocks=1 00:42:48.066 --rc geninfo_unexecuted_blocks=1 00:42:48.066 00:42:48.066 ' 00:42:48.066 14:58:39 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:42:48.066 14:58:39 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:48.066 14:58:39 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:42:48.066 14:58:39 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:48.066 14:58:39 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:48.066 14:58:39 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:48.066 14:58:39 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:48.066 14:58:39 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:48.066 14:58:39 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:48.066 14:58:39 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:48.066 14:58:39 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:48.066 14:58:39 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:48.066 14:58:39 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:48.066 14:58:39 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:48.066 14:58:39 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:42:48.066 14:58:39 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:48.066 14:58:39 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:48.066 14:58:39 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:48.066 14:58:39 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:48.066 14:58:39 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:48.066 14:58:39 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:42:48.066 14:58:39 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:48.066 14:58:39 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:48.066 14:58:39 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:48.066 14:58:39 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:48.066 14:58:39 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:48.066 14:58:39 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:48.066 14:58:39 keyring_file -- paths/export.sh@5 -- # export PATH 00:42:48.066 14:58:39 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:48.066 14:58:39 keyring_file -- nvmf/common.sh@51 -- # : 0 00:42:48.066 14:58:39 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:48.066 14:58:39 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:48.066 14:58:39 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:48.066 14:58:39 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:48.066 14:58:39 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:48.066 14:58:39 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:48.066 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:48.066 14:58:39 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:48.066 14:58:39 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:48.066 14:58:39 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:48.066 14:58:39 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:42:48.066 14:58:39 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:42:48.066 14:58:39 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:42:48.066 14:58:39 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:42:48.066 14:58:39 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:42:48.066 14:58:39 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:42:48.066 14:58:39 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:42:48.066 14:58:39 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
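The prep_key helper being traced here takes a key name, a raw hex secret, and a digest selector, converts the secret into an NVMeTLSkey-1 interchange string via a python helper in nvmf/common.sh, writes it to a mktemp file, and locks the file to mode 0600 before echoing the path back to file.sh. A condensed sketch of that flow, with psk_interchange as a hypothetical stand-in for the format_interchange_psk/format_key conversion step:

  # Sketch of the prep_key flow traced below; psk_interchange is a
  # hypothetical placeholder for nvmf/common.sh's format_interchange_psk,
  # which emits an NVMeTLSkey-1-prefixed interchange string.
  name=key0
  key=00112233445566778899aabbccddeeff
  digest=0
  path=$(mktemp)                              # e.g. /tmp/tmp.XXXXXXXXXX
  psk_interchange "$key" "$digest" > "$path"  # hypothetical conversion helper
  chmod 0600 "$path"                          # matches the chmod 0600 in the trace
  echo "$path"                                # consumed as key0path / key1path by file.sh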
00:42:48.067 14:58:39 keyring_file -- keyring/common.sh@17 -- # name=key0 00:42:48.067 14:58:39 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:42:48.067 14:58:39 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:48.067 14:58:39 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:48.067 14:58:39 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.pqrkTy5bVt 00:42:48.067 14:58:39 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:42:48.067 14:58:39 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:42:48.067 14:58:39 keyring_file -- nvmf/common.sh@726 -- # local prefix key digest 00:42:48.067 14:58:39 keyring_file -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:42:48.067 14:58:39 keyring_file -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:42:48.067 14:58:39 keyring_file -- nvmf/common.sh@728 -- # digest=0 00:42:48.067 14:58:39 keyring_file -- nvmf/common.sh@729 -- # python - 00:42:48.067 14:58:39 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.pqrkTy5bVt 00:42:48.067 14:58:39 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.pqrkTy5bVt 00:42:48.067 14:58:39 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.pqrkTy5bVt 00:42:48.067 14:58:39 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:42:48.067 14:58:39 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:42:48.067 14:58:39 keyring_file -- keyring/common.sh@17 -- # name=key1 00:42:48.067 14:58:39 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:42:48.067 14:58:39 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:48.067 14:58:39 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:48.067 14:58:39 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Yk2Y8JnJTe 00:42:48.067 14:58:39 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:42:48.067 14:58:39 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:42:48.067 14:58:39 keyring_file -- nvmf/common.sh@726 -- # local prefix key digest 00:42:48.067 14:58:39 keyring_file -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:42:48.067 14:58:39 keyring_file -- nvmf/common.sh@728 -- # key=112233445566778899aabbccddeeff00 00:42:48.067 14:58:39 keyring_file -- nvmf/common.sh@728 -- # digest=0 00:42:48.067 14:58:39 keyring_file -- nvmf/common.sh@729 -- # python - 00:42:48.067 14:58:39 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Yk2Y8JnJTe 00:42:48.067 14:58:39 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Yk2Y8JnJTe 00:42:48.067 14:58:39 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.Yk2Y8JnJTe 00:42:48.067 14:58:39 keyring_file -- keyring/file.sh@30 -- # tgtpid=1613886 00:42:48.067 14:58:39 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:42:48.067 14:58:39 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1613886 00:42:48.067 14:58:39 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1613886 ']' 00:42:48.067 14:58:39 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:48.067 14:58:39 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:48.067 14:58:39 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:48.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:48.067 14:58:39 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:48.067 14:58:39 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:48.067 [2024-11-02 14:58:39.870947] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:42:48.067 [2024-11-02 14:58:39.871043] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1613886 ] 00:42:48.067 [2024-11-02 14:58:39.927656] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:48.067 [2024-11-02 14:58:40.012959] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:42:48.326 14:58:40 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:48.326 14:58:40 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:42:48.326 14:58:40 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:42:48.326 14:58:40 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:48.326 14:58:40 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:48.326 [2024-11-02 14:58:40.280896] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:48.326 null0 00:42:48.326 [2024-11-02 14:58:40.312946] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:42:48.326 [2024-11-02 14:58:40.313443] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:42:48.326 14:58:40 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:48.326 14:58:40 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:48.326 14:58:40 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:42:48.326 14:58:40 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:48.326 14:58:40 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:42:48.326 14:58:40 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:48.326 14:58:40 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:42:48.326 14:58:40 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:48.326 14:58:40 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:48.326 14:58:40 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:48.326 14:58:40 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:48.326 [2024-11-02 14:58:40.336981] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:42:48.326 request: 00:42:48.326 { 00:42:48.326 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:42:48.326 "secure_channel": false, 00:42:48.326 "listen_address": { 00:42:48.326 "trtype": "tcp", 00:42:48.326 "traddr": "127.0.0.1", 00:42:48.326 "trsvcid": "4420" 00:42:48.326 }, 00:42:48.326 "method": "nvmf_subsystem_add_listener", 00:42:48.326 "req_id": 1 00:42:48.326 } 00:42:48.326 Got JSON-RPC error response 00:42:48.326 response: 00:42:48.326 { 00:42:48.326 
"code": -32602, 00:42:48.326 "message": "Invalid parameters" 00:42:48.326 } 00:42:48.326 14:58:40 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:42:48.326 14:58:40 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:42:48.326 14:58:40 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:42:48.326 14:58:40 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:42:48.326 14:58:40 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:42:48.326 14:58:40 keyring_file -- keyring/file.sh@47 -- # bperfpid=1613896 00:42:48.326 14:58:40 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:42:48.326 14:58:40 keyring_file -- keyring/file.sh@49 -- # waitforlisten 1613896 /var/tmp/bperf.sock 00:42:48.326 14:58:40 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1613896 ']' 00:42:48.326 14:58:40 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:48.326 14:58:40 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:48.326 14:58:40 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:48.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:48.326 14:58:40 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:48.326 14:58:40 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:48.584 [2024-11-02 14:58:40.389153] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:42:48.584 [2024-11-02 14:58:40.389231] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1613896 ] 00:42:48.584 [2024-11-02 14:58:40.451428] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:48.584 [2024-11-02 14:58:40.543447] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:42:48.842 14:58:40 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:48.842 14:58:40 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:42:48.842 14:58:40 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.pqrkTy5bVt 00:42:48.842 14:58:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.pqrkTy5bVt 00:42:49.099 14:58:40 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Yk2Y8JnJTe 00:42:49.099 14:58:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Yk2Y8JnJTe 00:42:49.357 14:58:41 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:42:49.357 14:58:41 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:42:49.357 14:58:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:49.357 14:58:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:49.357 14:58:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:42:49.615 14:58:41 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.pqrkTy5bVt == \/\t\m\p\/\t\m\p\.\p\q\r\k\T\y\5\b\V\t ]] 00:42:49.615 14:58:41 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:42:49.615 14:58:41 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:42:49.615 14:58:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:49.615 14:58:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:49.615 14:58:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:49.873 14:58:41 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.Yk2Y8JnJTe == \/\t\m\p\/\t\m\p\.\Y\k\2\Y\8\J\n\J\T\e ]] 00:42:49.873 14:58:41 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:42:49.873 14:58:41 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:49.873 14:58:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:49.873 14:58:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:49.873 14:58:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:49.873 14:58:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:50.131 14:58:42 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:42:50.131 14:58:42 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:42:50.131 14:58:42 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:50.131 14:58:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:50.131 14:58:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:50.131 14:58:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:50.131 14:58:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:50.389 14:58:42 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:42:50.389 14:58:42 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:50.389 14:58:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:50.647 [2024-11-02 14:58:42.603981] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:42:50.647 nvme0n1 00:42:50.647 14:58:42 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:42:50.647 14:58:42 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:50.647 14:58:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:50.647 14:58:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:50.647 14:58:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:50.647 14:58:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:51.213 14:58:42 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:42:51.213 14:58:42 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:42:51.213 14:58:42 keyring_file 
-- keyring/common.sh@12 -- # get_key key1 00:42:51.213 14:58:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:51.213 14:58:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:51.213 14:58:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:51.213 14:58:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:51.213 14:58:43 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:42:51.213 14:58:43 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:42:51.471 Running I/O for 1 seconds... 00:42:52.405 5005.00 IOPS, 19.55 MiB/s 00:42:52.405 Latency(us) 00:42:52.405 [2024-11-02T13:58:44.460Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:52.405 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:42:52.405 nvme0n1 : 1.02 5031.82 19.66 0.00 0.00 25191.45 8980.86 36117.62 00:42:52.405 [2024-11-02T13:58:44.460Z] =================================================================================================================== 00:42:52.405 [2024-11-02T13:58:44.460Z] Total : 5031.82 19.66 0.00 0.00 25191.45 8980.86 36117.62 00:42:52.405 { 00:42:52.405 "results": [ 00:42:52.405 { 00:42:52.405 "job": "nvme0n1", 00:42:52.405 "core_mask": "0x2", 00:42:52.405 "workload": "randrw", 00:42:52.405 "percentage": 50, 00:42:52.405 "status": "finished", 00:42:52.405 "queue_depth": 128, 00:42:52.405 "io_size": 4096, 00:42:52.405 "runtime": 1.020506, 00:42:52.405 "iops": 5031.81754933337, 00:42:52.405 "mibps": 19.655537302083477, 00:42:52.405 "io_failed": 0, 00:42:52.405 "io_timeout": 0, 00:42:52.405 "avg_latency_us": 25191.448483825596, 00:42:52.405 "min_latency_us": 8980.85925925926, 00:42:52.405 "max_latency_us": 36117.61777777778 00:42:52.405 } 00:42:52.405 ], 00:42:52.405 "core_count": 1 00:42:52.405 } 00:42:52.405 14:58:44 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:42:52.405 14:58:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:42:52.663 14:58:44 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:42:52.663 14:58:44 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:52.663 14:58:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:52.663 14:58:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:52.663 14:58:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:52.663 14:58:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:52.922 14:58:44 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:42:52.922 14:58:44 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:42:52.922 14:58:44 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:52.922 14:58:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:52.922 14:58:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:52.922 14:58:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:52.922 14:58:44 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:53.180 14:58:45 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:42:53.180 14:58:45 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:53.180 14:58:45 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:42:53.180 14:58:45 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:53.180 14:58:45 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:42:53.180 14:58:45 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:53.180 14:58:45 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:42:53.180 14:58:45 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:53.180 14:58:45 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:53.180 14:58:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:53.438 [2024-11-02 14:58:45.480308] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:42:53.438 [2024-11-02 14:58:45.480307] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165f110 (107): Transport endpoint is not connected 00:42:53.438 [2024-11-02 14:58:45.481279] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165f110 (9): Bad file descriptor 00:42:53.438 [2024-11-02 14:58:45.482278] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:42:53.438 [2024-11-02 14:58:45.482316] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:42:53.438 [2024-11-02 14:58:45.482331] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:42:53.438 [2024-11-02 14:58:45.482346] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:42:53.438 request: 00:42:53.438 { 00:42:53.438 "name": "nvme0", 00:42:53.438 "trtype": "tcp", 00:42:53.438 "traddr": "127.0.0.1", 00:42:53.438 "adrfam": "ipv4", 00:42:53.438 "trsvcid": "4420", 00:42:53.438 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:53.438 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:53.438 "prchk_reftag": false, 00:42:53.438 "prchk_guard": false, 00:42:53.438 "hdgst": false, 00:42:53.438 "ddgst": false, 00:42:53.438 "psk": "key1", 00:42:53.438 "allow_unrecognized_csi": false, 00:42:53.438 "method": "bdev_nvme_attach_controller", 00:42:53.438 "req_id": 1 00:42:53.438 } 00:42:53.438 Got JSON-RPC error response 00:42:53.438 response: 00:42:53.438 { 00:42:53.438 "code": -5, 00:42:53.438 "message": "Input/output error" 00:42:53.438 } 00:42:53.696 14:58:45 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:42:53.697 14:58:45 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:42:53.697 14:58:45 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:42:53.697 14:58:45 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:42:53.697 14:58:45 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:42:53.697 14:58:45 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:53.697 14:58:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:53.697 14:58:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:53.697 14:58:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:53.697 14:58:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:53.954 14:58:45 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:42:53.954 14:58:45 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:42:53.954 14:58:45 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:53.954 14:58:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:53.954 14:58:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:53.954 14:58:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:53.954 14:58:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:54.212 14:58:46 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:42:54.212 14:58:46 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:42:54.212 14:58:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:42:54.470 14:58:46 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:42:54.470 14:58:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:42:54.728 14:58:46 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:42:54.728 14:58:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:54.728 14:58:46 keyring_file -- keyring/file.sh@78 -- # jq length 00:42:54.986 14:58:46 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:42:54.986 14:58:46 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.pqrkTy5bVt 00:42:54.986 14:58:46 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.pqrkTy5bVt 00:42:54.986 14:58:46 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:42:54.986 14:58:46 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.pqrkTy5bVt 00:42:54.986 14:58:46 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:42:54.986 14:58:46 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:54.986 14:58:46 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:42:54.986 14:58:46 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:54.986 14:58:46 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.pqrkTy5bVt 00:42:54.986 14:58:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.pqrkTy5bVt 00:42:55.244 [2024-11-02 14:58:47.124205] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.pqrkTy5bVt': 0100660 00:42:55.244 [2024-11-02 14:58:47.124264] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:42:55.244 request: 00:42:55.244 { 00:42:55.244 "name": "key0", 00:42:55.244 "path": "/tmp/tmp.pqrkTy5bVt", 00:42:55.244 "method": "keyring_file_add_key", 00:42:55.244 "req_id": 1 00:42:55.244 } 00:42:55.244 Got JSON-RPC error response 00:42:55.244 response: 00:42:55.244 { 00:42:55.244 "code": -1, 00:42:55.244 "message": "Operation not permitted" 00:42:55.244 } 00:42:55.244 14:58:47 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:42:55.244 14:58:47 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:42:55.244 14:58:47 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:42:55.244 14:58:47 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:42:55.244 14:58:47 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.pqrkTy5bVt 00:42:55.244 14:58:47 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.pqrkTy5bVt 00:42:55.244 14:58:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.pqrkTy5bVt 00:42:55.502 14:58:47 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.pqrkTy5bVt 00:42:55.502 14:58:47 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:42:55.502 14:58:47 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:55.502 14:58:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:55.502 14:58:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:55.502 14:58:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:55.502 14:58:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:55.760 14:58:47 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:42:55.760 14:58:47 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:55.760 14:58:47 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:42:55.760 14:58:47 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:55.760 14:58:47 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:42:55.760 14:58:47 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:55.760 14:58:47 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:42:55.760 14:58:47 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:55.760 14:58:47 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:55.760 14:58:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:56.018 [2024-11-02 14:58:47.986538] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.pqrkTy5bVt': No such file or directory 00:42:56.018 [2024-11-02 14:58:47.986596] nvme_tcp.c:2609:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:42:56.018 [2024-11-02 14:58:47.986619] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:42:56.018 [2024-11-02 14:58:47.986650] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:42:56.018 [2024-11-02 14:58:47.986672] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:42:56.018 [2024-11-02 14:58:47.986685] bdev_nvme.c:6447:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:42:56.018 request: 00:42:56.018 { 00:42:56.018 "name": "nvme0", 00:42:56.018 "trtype": "tcp", 00:42:56.018 "traddr": "127.0.0.1", 00:42:56.018 "adrfam": "ipv4", 00:42:56.018 "trsvcid": "4420", 00:42:56.018 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:56.018 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:56.018 "prchk_reftag": false, 00:42:56.018 "prchk_guard": false, 00:42:56.018 "hdgst": false, 00:42:56.018 "ddgst": false, 00:42:56.018 "psk": "key0", 00:42:56.018 "allow_unrecognized_csi": false, 00:42:56.018 "method": "bdev_nvme_attach_controller", 00:42:56.018 "req_id": 1 00:42:56.018 } 00:42:56.018 Got JSON-RPC error response 00:42:56.018 response: 00:42:56.018 { 00:42:56.018 "code": -19, 00:42:56.018 "message": "No such device" 00:42:56.018 } 00:42:56.018 14:58:48 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:42:56.018 14:58:48 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:42:56.018 14:58:48 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:42:56.018 14:58:48 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:42:56.018 14:58:48 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:42:56.019 14:58:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:42:56.277 14:58:48 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:42:56.277 14:58:48 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:42:56.277 14:58:48 keyring_file -- keyring/common.sh@17 -- # name=key0 00:42:56.277 14:58:48 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:42:56.277 14:58:48 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:56.277 14:58:48 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:56.277 14:58:48 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.bAcmDKvRTB 00:42:56.277 14:58:48 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:42:56.277 14:58:48 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:42:56.277 14:58:48 keyring_file -- nvmf/common.sh@726 -- # local prefix key digest 00:42:56.277 14:58:48 keyring_file -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:42:56.277 14:58:48 keyring_file -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:42:56.277 14:58:48 keyring_file -- nvmf/common.sh@728 -- # digest=0 00:42:56.277 14:58:48 keyring_file -- nvmf/common.sh@729 -- # python - 00:42:56.277 14:58:48 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.bAcmDKvRTB 00:42:56.277 14:58:48 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.bAcmDKvRTB 00:42:56.277 14:58:48 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.bAcmDKvRTB 00:42:56.277 14:58:48 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.bAcmDKvRTB 00:42:56.277 14:58:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.bAcmDKvRTB 00:42:56.842 14:58:48 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:56.842 14:58:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:57.100 nvme0n1 00:42:57.100 14:58:48 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:42:57.100 14:58:48 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:57.100 14:58:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:57.100 14:58:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:57.100 14:58:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:57.100 14:58:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:57.358 14:58:49 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:42:57.358 14:58:49 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:42:57.358 14:58:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:42:57.616 14:58:49 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:42:57.616 14:58:49 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:42:57.616 14:58:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:57.616 14:58:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:42:57.616 14:58:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:57.873 14:58:49 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:42:57.873 14:58:49 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:42:57.873 14:58:49 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:57.873 14:58:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:57.873 14:58:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:57.873 14:58:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:57.873 14:58:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:58.131 14:58:50 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:42:58.131 14:58:50 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:42:58.131 14:58:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:42:58.388 14:58:50 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:42:58.388 14:58:50 keyring_file -- keyring/file.sh@105 -- # jq length 00:42:58.388 14:58:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:58.645 14:58:50 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:42:58.645 14:58:50 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.bAcmDKvRTB 00:42:58.645 14:58:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.bAcmDKvRTB 00:42:58.903 14:58:50 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Yk2Y8JnJTe 00:42:58.903 14:58:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Yk2Y8JnJTe 00:42:59.161 14:58:51 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:59.161 14:58:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:59.727 nvme0n1 00:42:59.727 14:58:51 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:42:59.727 14:58:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:42:59.985 14:58:51 keyring_file -- keyring/file.sh@113 -- # config='{ 00:42:59.985 "subsystems": [ 00:42:59.985 { 00:42:59.985 "subsystem": "keyring", 00:42:59.985 "config": [ 00:42:59.985 { 00:42:59.985 "method": "keyring_file_add_key", 00:42:59.985 "params": { 00:42:59.985 "name": "key0", 00:42:59.985 "path": "/tmp/tmp.bAcmDKvRTB" 00:42:59.985 } 00:42:59.985 }, 00:42:59.985 { 00:42:59.985 "method": "keyring_file_add_key", 00:42:59.985 "params": { 00:42:59.985 "name": "key1", 00:42:59.985 "path": "/tmp/tmp.Yk2Y8JnJTe" 00:42:59.985 } 00:42:59.985 } 00:42:59.985 ] 
00:42:59.985 }, 00:42:59.985 { 00:42:59.985 "subsystem": "iobuf", 00:42:59.986 "config": [ 00:42:59.986 { 00:42:59.986 "method": "iobuf_set_options", 00:42:59.986 "params": { 00:42:59.986 "small_pool_count": 8192, 00:42:59.986 "large_pool_count": 1024, 00:42:59.986 "small_bufsize": 8192, 00:42:59.986 "large_bufsize": 135168 00:42:59.986 } 00:42:59.986 } 00:42:59.986 ] 00:42:59.986 }, 00:42:59.986 { 00:42:59.986 "subsystem": "sock", 00:42:59.986 "config": [ 00:42:59.986 { 00:42:59.986 "method": "sock_set_default_impl", 00:42:59.986 "params": { 00:42:59.986 "impl_name": "posix" 00:42:59.986 } 00:42:59.986 }, 00:42:59.986 { 00:42:59.986 "method": "sock_impl_set_options", 00:42:59.986 "params": { 00:42:59.986 "impl_name": "ssl", 00:42:59.986 "recv_buf_size": 4096, 00:42:59.986 "send_buf_size": 4096, 00:42:59.986 "enable_recv_pipe": true, 00:42:59.986 "enable_quickack": false, 00:42:59.986 "enable_placement_id": 0, 00:42:59.986 "enable_zerocopy_send_server": true, 00:42:59.986 "enable_zerocopy_send_client": false, 00:42:59.986 "zerocopy_threshold": 0, 00:42:59.986 "tls_version": 0, 00:42:59.986 "enable_ktls": false 00:42:59.986 } 00:42:59.986 }, 00:42:59.986 { 00:42:59.986 "method": "sock_impl_set_options", 00:42:59.986 "params": { 00:42:59.986 "impl_name": "posix", 00:42:59.986 "recv_buf_size": 2097152, 00:42:59.986 "send_buf_size": 2097152, 00:42:59.986 "enable_recv_pipe": true, 00:42:59.986 "enable_quickack": false, 00:42:59.986 "enable_placement_id": 0, 00:42:59.986 "enable_zerocopy_send_server": true, 00:42:59.986 "enable_zerocopy_send_client": false, 00:42:59.986 "zerocopy_threshold": 0, 00:42:59.986 "tls_version": 0, 00:42:59.986 "enable_ktls": false 00:42:59.986 } 00:42:59.986 } 00:42:59.986 ] 00:42:59.986 }, 00:42:59.986 { 00:42:59.986 "subsystem": "vmd", 00:42:59.986 "config": [] 00:42:59.986 }, 00:42:59.986 { 00:42:59.986 "subsystem": "accel", 00:42:59.986 "config": [ 00:42:59.986 { 00:42:59.986 "method": "accel_set_options", 00:42:59.986 "params": { 00:42:59.986 "small_cache_size": 128, 00:42:59.986 "large_cache_size": 16, 00:42:59.986 "task_count": 2048, 00:42:59.986 "sequence_count": 2048, 00:42:59.986 "buf_count": 2048 00:42:59.986 } 00:42:59.986 } 00:42:59.986 ] 00:42:59.986 }, 00:42:59.986 { 00:42:59.986 "subsystem": "bdev", 00:42:59.986 "config": [ 00:42:59.986 { 00:42:59.986 "method": "bdev_set_options", 00:42:59.986 "params": { 00:42:59.986 "bdev_io_pool_size": 65535, 00:42:59.986 "bdev_io_cache_size": 256, 00:42:59.986 "bdev_auto_examine": true, 00:42:59.986 "iobuf_small_cache_size": 128, 00:42:59.986 "iobuf_large_cache_size": 16 00:42:59.986 } 00:42:59.986 }, 00:42:59.986 { 00:42:59.986 "method": "bdev_raid_set_options", 00:42:59.986 "params": { 00:42:59.986 "process_window_size_kb": 1024, 00:42:59.986 "process_max_bandwidth_mb_sec": 0 00:42:59.986 } 00:42:59.986 }, 00:42:59.986 { 00:42:59.986 "method": "bdev_iscsi_set_options", 00:42:59.986 "params": { 00:42:59.986 "timeout_sec": 30 00:42:59.986 } 00:42:59.986 }, 00:42:59.986 { 00:42:59.986 "method": "bdev_nvme_set_options", 00:42:59.986 "params": { 00:42:59.986 "action_on_timeout": "none", 00:42:59.986 "timeout_us": 0, 00:42:59.986 "timeout_admin_us": 0, 00:42:59.986 "keep_alive_timeout_ms": 10000, 00:42:59.986 "arbitration_burst": 0, 00:42:59.986 "low_priority_weight": 0, 00:42:59.986 "medium_priority_weight": 0, 00:42:59.986 "high_priority_weight": 0, 00:42:59.986 "nvme_adminq_poll_period_us": 10000, 00:42:59.986 "nvme_ioq_poll_period_us": 0, 00:42:59.986 "io_queue_requests": 512, 00:42:59.986 "delay_cmd_submit": true, 
00:42:59.986 "transport_retry_count": 4, 00:42:59.986 "bdev_retry_count": 3, 00:42:59.986 "transport_ack_timeout": 0, 00:42:59.986 "ctrlr_loss_timeout_sec": 0, 00:42:59.986 "reconnect_delay_sec": 0, 00:42:59.986 "fast_io_fail_timeout_sec": 0, 00:42:59.986 "disable_auto_failback": false, 00:42:59.986 "generate_uuids": false, 00:42:59.986 "transport_tos": 0, 00:42:59.986 "nvme_error_stat": false, 00:42:59.986 "rdma_srq_size": 0, 00:42:59.986 "io_path_stat": false, 00:42:59.986 "allow_accel_sequence": false, 00:42:59.986 "rdma_max_cq_size": 0, 00:42:59.986 "rdma_cm_event_timeout_ms": 0, 00:42:59.986 "dhchap_digests": [ 00:42:59.986 "sha256", 00:42:59.986 "sha384", 00:42:59.986 "sha512" 00:42:59.986 ], 00:42:59.986 "dhchap_dhgroups": [ 00:42:59.986 "null", 00:42:59.986 "ffdhe2048", 00:42:59.986 "ffdhe3072", 00:42:59.986 "ffdhe4096", 00:42:59.986 "ffdhe6144", 00:42:59.986 "ffdhe8192" 00:42:59.986 ] 00:42:59.986 } 00:42:59.986 }, 00:42:59.986 { 00:42:59.986 "method": "bdev_nvme_attach_controller", 00:42:59.986 "params": { 00:42:59.986 "name": "nvme0", 00:42:59.986 "trtype": "TCP", 00:42:59.986 "adrfam": "IPv4", 00:42:59.986 "traddr": "127.0.0.1", 00:42:59.986 "trsvcid": "4420", 00:42:59.986 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:59.986 "prchk_reftag": false, 00:42:59.986 "prchk_guard": false, 00:42:59.986 "ctrlr_loss_timeout_sec": 0, 00:42:59.986 "reconnect_delay_sec": 0, 00:42:59.986 "fast_io_fail_timeout_sec": 0, 00:42:59.986 "psk": "key0", 00:42:59.986 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:59.986 "hdgst": false, 00:42:59.986 "ddgst": false 00:42:59.986 } 00:42:59.986 }, 00:42:59.986 { 00:42:59.986 "method": "bdev_nvme_set_hotplug", 00:42:59.986 "params": { 00:42:59.986 "period_us": 100000, 00:42:59.986 "enable": false 00:42:59.986 } 00:42:59.986 }, 00:42:59.986 { 00:42:59.986 "method": "bdev_wait_for_examine" 00:42:59.986 } 00:42:59.986 ] 00:42:59.986 }, 00:42:59.986 { 00:42:59.986 "subsystem": "nbd", 00:42:59.986 "config": [] 00:42:59.986 } 00:42:59.986 ] 00:42:59.986 }' 00:42:59.986 14:58:51 keyring_file -- keyring/file.sh@115 -- # killprocess 1613896 00:42:59.986 14:58:51 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1613896 ']' 00:42:59.986 14:58:51 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1613896 00:42:59.986 14:58:51 keyring_file -- common/autotest_common.sh@955 -- # uname 00:42:59.986 14:58:51 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:59.986 14:58:51 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1613896 00:42:59.986 14:58:51 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:42:59.986 14:58:51 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:42:59.986 14:58:51 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1613896' 00:42:59.986 killing process with pid 1613896 00:42:59.986 14:58:51 keyring_file -- common/autotest_common.sh@969 -- # kill 1613896 00:42:59.986 Received shutdown signal, test time was about 1.000000 seconds 00:42:59.986 00:42:59.986 Latency(us) 00:42:59.986 [2024-11-02T13:58:52.041Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:59.986 [2024-11-02T13:58:52.041Z] =================================================================================================================== 00:42:59.986 [2024-11-02T13:58:52.041Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:59.986 14:58:51 keyring_file -- common/autotest_common.sh@974 -- # wait 1613896 
00:43:00.245 14:58:52 keyring_file -- keyring/file.sh@118 -- # bperfpid=1615387 00:43:00.245 14:58:52 keyring_file -- keyring/file.sh@120 -- # waitforlisten 1615387 /var/tmp/bperf.sock 00:43:00.245 14:58:52 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1615387 ']' 00:43:00.245 14:58:52 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:43:00.245 14:58:52 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:43:00.245 14:58:52 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:43:00.245 14:58:52 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:43:00.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:43:00.245 14:58:52 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:43:00.245 "subsystems": [ 00:43:00.245 { 00:43:00.245 "subsystem": "keyring", 00:43:00.245 "config": [ 00:43:00.245 { 00:43:00.245 "method": "keyring_file_add_key", 00:43:00.245 "params": { 00:43:00.245 "name": "key0", 00:43:00.245 "path": "/tmp/tmp.bAcmDKvRTB" 00:43:00.245 } 00:43:00.245 }, 00:43:00.245 { 00:43:00.245 "method": "keyring_file_add_key", 00:43:00.245 "params": { 00:43:00.245 "name": "key1", 00:43:00.245 "path": "/tmp/tmp.Yk2Y8JnJTe" 00:43:00.245 } 00:43:00.245 } 00:43:00.245 ] 00:43:00.245 }, 00:43:00.245 { 00:43:00.245 "subsystem": "iobuf", 00:43:00.245 "config": [ 00:43:00.245 { 00:43:00.245 "method": "iobuf_set_options", 00:43:00.245 "params": { 00:43:00.245 "small_pool_count": 8192, 00:43:00.245 "large_pool_count": 1024, 00:43:00.245 "small_bufsize": 8192, 00:43:00.245 "large_bufsize": 135168 00:43:00.245 } 00:43:00.245 } 00:43:00.245 ] 00:43:00.245 }, 00:43:00.245 { 00:43:00.245 "subsystem": "sock", 00:43:00.245 "config": [ 00:43:00.245 { 00:43:00.245 "method": "sock_set_default_impl", 00:43:00.245 "params": { 00:43:00.245 "impl_name": "posix" 00:43:00.245 } 00:43:00.245 }, 00:43:00.245 { 00:43:00.245 "method": "sock_impl_set_options", 00:43:00.245 "params": { 00:43:00.245 "impl_name": "ssl", 00:43:00.245 "recv_buf_size": 4096, 00:43:00.245 "send_buf_size": 4096, 00:43:00.245 "enable_recv_pipe": true, 00:43:00.245 "enable_quickack": false, 00:43:00.245 "enable_placement_id": 0, 00:43:00.245 "enable_zerocopy_send_server": true, 00:43:00.245 "enable_zerocopy_send_client": false, 00:43:00.245 "zerocopy_threshold": 0, 00:43:00.245 "tls_version": 0, 00:43:00.245 "enable_ktls": false 00:43:00.245 } 00:43:00.245 }, 00:43:00.245 { 00:43:00.245 "method": "sock_impl_set_options", 00:43:00.245 "params": { 00:43:00.245 "impl_name": "posix", 00:43:00.245 "recv_buf_size": 2097152, 00:43:00.245 "send_buf_size": 2097152, 00:43:00.245 "enable_recv_pipe": true, 00:43:00.245 "enable_quickack": false, 00:43:00.245 "enable_placement_id": 0, 00:43:00.245 "enable_zerocopy_send_server": true, 00:43:00.245 "enable_zerocopy_send_client": false, 00:43:00.245 "zerocopy_threshold": 0, 00:43:00.245 "tls_version": 0, 00:43:00.245 "enable_ktls": false 00:43:00.245 } 00:43:00.245 } 00:43:00.245 ] 00:43:00.245 }, 00:43:00.245 { 00:43:00.245 "subsystem": "vmd", 00:43:00.245 "config": [] 00:43:00.245 }, 00:43:00.245 { 00:43:00.245 "subsystem": "accel", 00:43:00.245 "config": [ 00:43:00.245 { 00:43:00.245 "method": "accel_set_options", 00:43:00.245 "params": { 00:43:00.245 
"small_cache_size": 128, 00:43:00.245 "large_cache_size": 16, 00:43:00.245 "task_count": 2048, 00:43:00.245 "sequence_count": 2048, 00:43:00.245 "buf_count": 2048 00:43:00.245 } 00:43:00.245 } 00:43:00.245 ] 00:43:00.245 }, 00:43:00.245 { 00:43:00.245 "subsystem": "bdev", 00:43:00.245 "config": [ 00:43:00.245 { 00:43:00.245 "method": "bdev_set_options", 00:43:00.245 "params": { 00:43:00.245 "bdev_io_pool_size": 65535, 00:43:00.245 "bdev_io_cache_size": 256, 00:43:00.245 "bdev_auto_examine": true, 00:43:00.245 "iobuf_small_cache_size": 128, 00:43:00.245 "iobuf_large_cache_size": 16 00:43:00.245 } 00:43:00.245 }, 00:43:00.246 { 00:43:00.246 "method": "bdev_raid_set_options", 00:43:00.246 "params": { 00:43:00.246 "process_window_size_kb": 1024, 00:43:00.246 "process_max_bandwidth_mb_sec": 0 00:43:00.246 } 00:43:00.246 }, 00:43:00.246 { 00:43:00.246 "method": "bdev_iscsi_set_options", 00:43:00.246 "params": { 00:43:00.246 "timeout_sec": 30 00:43:00.246 } 00:43:00.246 }, 00:43:00.246 { 00:43:00.246 "method": "bdev_nvme_set_options", 00:43:00.246 "params": { 00:43:00.246 "action_on_timeout": "none", 00:43:00.246 "timeout_us": 0, 00:43:00.246 "timeout_admin_us": 0, 00:43:00.246 "keep_alive_timeout_ms": 10000, 00:43:00.246 "arbitration_burst": 0, 00:43:00.246 "low_priority_weight": 0, 00:43:00.246 "medium_priority_weight": 0, 00:43:00.246 "high_priority_weight": 0, 00:43:00.246 "nvme_adminq_poll_period_us": 10000, 00:43:00.246 "nvme_ioq_poll_period_us": 0, 00:43:00.246 "io_queue_requests": 512, 00:43:00.246 "delay_cmd_submit": true, 00:43:00.246 "transport_retry_count": 4, 00:43:00.246 "bdev_retry_count": 3, 00:43:00.246 "transport_ack_timeout": 0, 00:43:00.246 "ctrlr_loss_timeout_sec": 0, 00:43:00.246 "reconnect_delay_sec": 0, 00:43:00.246 "fast_io_fail_timeout_sec": 0, 00:43:00.246 "disable_auto_failback": false, 00:43:00.246 "generate_uuids": false, 00:43:00.246 "transport_tos": 0, 00:43:00.246 "nvme_error_stat": false, 00:43:00.246 "rdma_srq_size": 0, 00:43:00.246 "io_path_stat": false, 00:43:00.246 "allow_accel_sequence": false, 00:43:00.246 "rdma_max_cq_size": 0, 00:43:00.246 "rdma_cm_event_timeout_ms": 0, 00:43:00.246 "dhchap_digests": [ 00:43:00.246 "sha256", 00:43:00.246 "sha384", 00:43:00.246 "sha512" 00:43:00.246 ], 00:43:00.246 "dhchap_dhgroups": [ 00:43:00.246 "null", 00:43:00.246 "ffdhe2048", 00:43:00.246 "ffdhe3072", 00:43:00.246 "ffdhe4096", 00:43:00.246 "ffdhe6144", 00:43:00.246 "ffdhe8192" 00:43:00.246 ] 00:43:00.246 } 00:43:00.246 }, 00:43:00.246 { 00:43:00.246 "method": "bdev_nvme_attach_controller", 00:43:00.246 "params": { 00:43:00.246 "name": "nvme0", 00:43:00.246 "trtype": "TCP", 00:43:00.246 "adrfam": "IPv4", 00:43:00.246 "traddr": "127.0.0.1", 00:43:00.246 "trsvcid": "4420", 00:43:00.246 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:00.246 "prchk_reftag": false, 00:43:00.246 "prchk_guard": false, 00:43:00.246 "ctrlr_loss_timeout_sec": 0, 00:43:00.246 "reconnect_delay_sec": 0, 00:43:00.246 "fast_io_fail_timeout_sec": 0, 00:43:00.246 "psk": "key0", 00:43:00.246 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:00.246 "hdgst": false, 00:43:00.246 "ddgst": false 00:43:00.246 } 00:43:00.246 }, 00:43:00.246 { 00:43:00.246 "method": "bdev_nvme_set_hotplug", 00:43:00.246 "params": { 00:43:00.246 "period_us": 100000, 00:43:00.246 "enable": false 00:43:00.246 } 00:43:00.246 }, 00:43:00.246 { 00:43:00.246 "method": "bdev_wait_for_examine" 00:43:00.246 } 00:43:00.246 ] 00:43:00.246 }, 00:43:00.246 { 00:43:00.246 "subsystem": "nbd", 00:43:00.246 "config": [] 00:43:00.246 } 00:43:00.246 ] 
00:43:00.246 }' 00:43:00.246 14:58:52 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:43:00.246 14:58:52 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:00.246 [2024-11-02 14:58:52.116998] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:43:00.246 [2024-11-02 14:58:52.117085] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1615387 ] 00:43:00.246 [2024-11-02 14:58:52.180802] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:00.246 [2024-11-02 14:58:52.273314] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:43:00.504 [2024-11-02 14:58:52.464547] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:43:01.070 14:58:53 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:43:01.070 14:58:53 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:43:01.070 14:58:53 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:43:01.070 14:58:53 keyring_file -- keyring/file.sh@121 -- # jq length 00:43:01.070 14:58:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:01.636 14:58:53 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:43:01.636 14:58:53 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:43:01.636 14:58:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:01.636 14:58:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:01.636 14:58:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:01.636 14:58:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:01.636 14:58:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:01.636 14:58:53 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:43:01.636 14:58:53 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:43:01.636 14:58:53 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:43:01.636 14:58:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:01.636 14:58:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:01.636 14:58:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:01.636 14:58:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:01.894 14:58:53 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:43:01.894 14:58:53 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:43:01.894 14:58:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:43:01.894 14:58:53 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:43:02.460 14:58:54 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:43:02.460 14:58:54 keyring_file -- keyring/file.sh@1 -- # cleanup 00:43:02.460 14:58:54 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.bAcmDKvRTB /tmp/tmp.Yk2Y8JnJTe 00:43:02.460 14:58:54 keyring_file -- keyring/file.sh@20 -- # 
killprocess 1615387 00:43:02.460 14:58:54 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1615387 ']' 00:43:02.460 14:58:54 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1615387 00:43:02.460 14:58:54 keyring_file -- common/autotest_common.sh@955 -- # uname 00:43:02.460 14:58:54 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:43:02.460 14:58:54 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1615387 00:43:02.460 14:58:54 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:43:02.460 14:58:54 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:43:02.460 14:58:54 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1615387' 00:43:02.460 killing process with pid 1615387 00:43:02.460 14:58:54 keyring_file -- common/autotest_common.sh@969 -- # kill 1615387 00:43:02.460 Received shutdown signal, test time was about 1.000000 seconds 00:43:02.460 00:43:02.460 Latency(us) 00:43:02.460 [2024-11-02T13:58:54.515Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:02.460 [2024-11-02T13:58:54.515Z] =================================================================================================================== 00:43:02.460 [2024-11-02T13:58:54.515Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:43:02.460 14:58:54 keyring_file -- common/autotest_common.sh@974 -- # wait 1615387 00:43:02.460 14:58:54 keyring_file -- keyring/file.sh@21 -- # killprocess 1613886 00:43:02.460 14:58:54 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1613886 ']' 00:43:02.460 14:58:54 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1613886 00:43:02.460 14:58:54 keyring_file -- common/autotest_common.sh@955 -- # uname 00:43:02.460 14:58:54 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:43:02.460 14:58:54 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1613886 00:43:02.460 14:58:54 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:43:02.460 14:58:54 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:43:02.460 14:58:54 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1613886' 00:43:02.460 killing process with pid 1613886 00:43:02.460 14:58:54 keyring_file -- common/autotest_common.sh@969 -- # kill 1613886 00:43:02.460 14:58:54 keyring_file -- common/autotest_common.sh@974 -- # wait 1613886 00:43:03.026 00:43:03.026 real 0m15.325s 00:43:03.026 user 0m38.385s 00:43:03.026 sys 0m3.339s 00:43:03.026 14:58:54 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:03.026 14:58:54 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:03.026 ************************************ 00:43:03.026 END TEST keyring_file 00:43:03.026 ************************************ 00:43:03.026 14:58:54 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:43:03.026 14:58:54 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:43:03.027 14:58:54 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:43:03.027 14:58:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:03.027 14:58:54 -- common/autotest_common.sh@10 -- # set +x 00:43:03.027 ************************************ 00:43:03.027 START TEST keyring_linux 
00:43:03.027 ************************************ 00:43:03.027 14:58:54 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:43:03.027 Joined session keyring: 803544487 00:43:03.027 * Looking for test storage... 00:43:03.027 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:43:03.027 14:58:55 keyring_linux -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:43:03.027 14:58:55 keyring_linux -- common/autotest_common.sh@1681 -- # lcov --version 00:43:03.027 14:58:55 keyring_linux -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:43:03.286 14:58:55 keyring_linux -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:43:03.286 14:58:55 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:03.286 14:58:55 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:03.286 14:58:55 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:03.286 14:58:55 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:43:03.286 14:58:55 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:43:03.286 14:58:55 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:43:03.286 14:58:55 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:43:03.286 14:58:55 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:43:03.286 14:58:55 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:43:03.286 14:58:55 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:43:03.286 14:58:55 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:03.286 14:58:55 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:43:03.286 14:58:55 keyring_linux -- scripts/common.sh@345 -- # : 1 00:43:03.286 14:58:55 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:03.286 14:58:55 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:03.286 14:58:55 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:43:03.286 14:58:55 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:43:03.286 14:58:55 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:03.286 14:58:55 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:43:03.286 14:58:55 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:43:03.286 14:58:55 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:43:03.286 14:58:55 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:43:03.286 14:58:55 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:03.286 14:58:55 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:43:03.286 14:58:55 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:43:03.286 14:58:55 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:03.286 14:58:55 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:03.286 14:58:55 keyring_linux -- scripts/common.sh@368 -- # return 0 00:43:03.286 14:58:55 keyring_linux -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:03.286 14:58:55 keyring_linux -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:43:03.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:03.286 --rc genhtml_branch_coverage=1 00:43:03.286 --rc genhtml_function_coverage=1 00:43:03.286 --rc genhtml_legend=1 00:43:03.286 --rc geninfo_all_blocks=1 00:43:03.286 --rc geninfo_unexecuted_blocks=1 00:43:03.286 00:43:03.286 ' 00:43:03.286 14:58:55 keyring_linux -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:43:03.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:03.286 --rc genhtml_branch_coverage=1 00:43:03.286 --rc genhtml_function_coverage=1 00:43:03.286 --rc genhtml_legend=1 00:43:03.286 --rc geninfo_all_blocks=1 00:43:03.286 --rc geninfo_unexecuted_blocks=1 00:43:03.286 00:43:03.286 ' 00:43:03.286 14:58:55 keyring_linux -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:43:03.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:03.286 --rc genhtml_branch_coverage=1 00:43:03.286 --rc genhtml_function_coverage=1 00:43:03.286 --rc genhtml_legend=1 00:43:03.286 --rc geninfo_all_blocks=1 00:43:03.286 --rc geninfo_unexecuted_blocks=1 00:43:03.286 00:43:03.286 ' 00:43:03.286 14:58:55 keyring_linux -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:43:03.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:03.286 --rc genhtml_branch_coverage=1 00:43:03.286 --rc genhtml_function_coverage=1 00:43:03.286 --rc genhtml_legend=1 00:43:03.286 --rc geninfo_all_blocks=1 00:43:03.286 --rc geninfo_unexecuted_blocks=1 00:43:03.286 00:43:03.286 ' 00:43:03.286 14:58:55 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:43:03.286 14:58:55 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:03.286 14:58:55 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:43:03.286 14:58:55 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:03.286 14:58:55 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:03.286 14:58:55 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:03.286 14:58:55 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:03.286 14:58:55 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:43:03.286 14:58:55 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:03.286 14:58:55 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:03.286 14:58:55 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:03.286 14:58:55 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:03.286 14:58:55 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:03.286 14:58:55 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:43:03.286 14:58:55 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:43:03.286 14:58:55 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:03.286 14:58:55 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:03.286 14:58:55 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:03.286 14:58:55 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:03.286 14:58:55 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:03.286 14:58:55 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:43:03.286 14:58:55 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:03.286 14:58:55 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:03.286 14:58:55 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:03.286 14:58:55 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:03.286 14:58:55 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:03.286 14:58:55 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:03.286 14:58:55 keyring_linux -- paths/export.sh@5 -- # export PATH 00:43:03.286 14:58:55 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:43:03.286 14:58:55 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:43:03.286 14:58:55 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:03.286 14:58:55 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:03.286 14:58:55 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:03.286 14:58:55 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:03.286 14:58:55 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:03.286 14:58:55 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:43:03.286 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:03.286 14:58:55 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:03.286 14:58:55 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:03.286 14:58:55 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:03.286 14:58:55 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:43:03.286 14:58:55 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:43:03.286 14:58:55 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:43:03.286 14:58:55 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:43:03.286 14:58:55 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:43:03.286 14:58:55 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:43:03.286 14:58:55 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:43:03.286 14:58:55 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:43:03.286 14:58:55 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:43:03.286 14:58:55 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:43:03.286 14:58:55 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:43:03.286 14:58:55 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:43:03.286 14:58:55 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:43:03.286 14:58:55 keyring_linux -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:43:03.286 14:58:55 keyring_linux -- nvmf/common.sh@726 -- # local prefix key digest 00:43:03.286 14:58:55 keyring_linux -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:43:03.286 14:58:55 keyring_linux -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:43:03.286 14:58:55 keyring_linux -- nvmf/common.sh@728 -- # digest=0 00:43:03.286 14:58:55 keyring_linux -- nvmf/common.sh@729 -- # python - 00:43:03.286 14:58:55 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:43:03.286 14:58:55 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:43:03.286 /tmp/:spdk-test:key0 00:43:03.286 14:58:55 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:43:03.286 14:58:55 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:43:03.286 14:58:55 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:43:03.286 14:58:55 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:43:03.286 14:58:55 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:43:03.286 14:58:55 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:43:03.286 
14:58:55 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:43:03.287 14:58:55 keyring_linux -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:43:03.287 14:58:55 keyring_linux -- nvmf/common.sh@726 -- # local prefix key digest 00:43:03.287 14:58:55 keyring_linux -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:43:03.287 14:58:55 keyring_linux -- nvmf/common.sh@728 -- # key=112233445566778899aabbccddeeff00 00:43:03.287 14:58:55 keyring_linux -- nvmf/common.sh@728 -- # digest=0 00:43:03.287 14:58:55 keyring_linux -- nvmf/common.sh@729 -- # python - 00:43:03.287 14:58:55 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:43:03.287 14:58:55 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:43:03.287 /tmp/:spdk-test:key1 00:43:03.287 14:58:55 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1615860 00:43:03.287 14:58:55 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:43:03.287 14:58:55 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1615860 00:43:03.287 14:58:55 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 1615860 ']' 00:43:03.287 14:58:55 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:03.287 14:58:55 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:43:03.287 14:58:55 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:03.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:03.287 14:58:55 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:43:03.287 14:58:55 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:43:03.287 [2024-11-02 14:58:55.281615] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
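[Annotation] The prep_key calls traced just above stage the two test PSKs on disk before spdk_tgt (pid 1615860) comes up. format_interchange_psk turns the raw hex key and the digest argument into the NVMeTLSkey-1:00:<base64>: interchange string via an inline Python helper whose body is not echoed in the trace. A minimal shell sketch of that step, assuming the helper's output is simply redirected into the target path (the immediate chmod 0600 suggests the file holds the key material):

    # Condensed, assumed equivalent of "prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0".
    # Input: raw hex PSK 00112233445566778899aabbccddeeff, digest 0.
    path=/tmp/:spdk-test:key0
    # Interchange string as it later appears verbatim in the keyctl add call below.
    psk='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'
    printf '%s' "$psk" > "$path"    # assumption: helper output is redirected here
    chmod 0600 "$path"              # keep the on-disk key readable by the owner only

key1 (112233445566778899aabbccddeeff00) gets the same treatment into /tmp/:spdk-test:key1.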
00:43:03.287 [2024-11-02 14:58:55.281718] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1615860 ] 00:43:03.287 [2024-11-02 14:58:55.340053] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:03.545 [2024-11-02 14:58:55.431333] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:43:03.804 14:58:55 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:43:03.804 14:58:55 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:43:03.804 14:58:55 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:43:03.804 14:58:55 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:03.804 14:58:55 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:43:03.804 [2024-11-02 14:58:55.704272] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:03.804 null0 00:43:03.804 [2024-11-02 14:58:55.736344] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:43:03.804 [2024-11-02 14:58:55.736885] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:43:03.804 14:58:55 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:03.804 14:58:55 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:43:03.804 347779613 00:43:03.804 14:58:55 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:43:03.804 624441885 00:43:03.804 14:58:55 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1615989 00:43:03.804 14:58:55 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1615989 /var/tmp/bperf.sock 00:43:03.804 14:58:55 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:43:03.804 14:58:55 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 1615989 ']' 00:43:03.804 14:58:55 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:43:03.804 14:58:55 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:43:03.804 14:58:55 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:43:03.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:43:03.804 14:58:55 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:43:03.804 14:58:55 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:43:03.804 [2024-11-02 14:58:55.806013] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:43:03.804 [2024-11-02 14:58:55.806089] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1615989 ] 00:43:04.062 [2024-11-02 14:58:55.865277] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:04.062 [2024-11-02 14:58:55.950155] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:43:04.062 14:58:56 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:43:04.062 14:58:56 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:43:04.062 14:58:56 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:43:04.062 14:58:56 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:43:04.320 14:58:56 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:43:04.320 14:58:56 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:43:04.886 14:58:56 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:43:04.886 14:58:56 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:43:04.886 [2024-11-02 14:58:56.899954] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:43:05.144 nvme0n1 00:43:05.144 14:58:56 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:43:05.144 14:58:56 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:43:05.144 14:58:56 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:43:05.144 14:58:56 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:43:05.144 14:58:56 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:43:05.144 14:58:56 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:05.402 14:58:57 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:43:05.402 14:58:57 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:43:05.402 14:58:57 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:43:05.402 14:58:57 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:43:05.402 14:58:57 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:05.402 14:58:57 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:05.402 14:58:57 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:43:05.660 14:58:57 keyring_linux -- keyring/linux.sh@25 -- # sn=347779613 00:43:05.660 14:58:57 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:43:05.660 14:58:57 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:43:05.660 14:58:57 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 347779613 == \3\4\7\7\7\9\6\1\3 ]] 00:43:05.660 14:58:57 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 347779613 00:43:05.660 14:58:57 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:43:05.660 14:58:57 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:43:05.660 Running I/O for 1 seconds... 00:43:06.850 4879.00 IOPS, 19.06 MiB/s 00:43:06.850 Latency(us) 00:43:06.850 [2024-11-02T13:58:58.905Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:06.850 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:43:06.850 nvme0n1 : 1.02 4909.06 19.18 0.00 0.00 25849.66 7573.05 33981.63 00:43:06.850 [2024-11-02T13:58:58.905Z] =================================================================================================================== 00:43:06.850 [2024-11-02T13:58:58.905Z] Total : 4909.06 19.18 0.00 0.00 25849.66 7573.05 33981.63 00:43:06.850 { 00:43:06.850 "results": [ 00:43:06.850 { 00:43:06.850 "job": "nvme0n1", 00:43:06.850 "core_mask": "0x2", 00:43:06.850 "workload": "randread", 00:43:06.850 "status": "finished", 00:43:06.850 "queue_depth": 128, 00:43:06.850 "io_size": 4096, 00:43:06.850 "runtime": 1.020155, 00:43:06.850 "iops": 4909.057937274238, 00:43:06.850 "mibps": 19.17600756747749, 00:43:06.850 "io_failed": 0, 00:43:06.850 "io_timeout": 0, 00:43:06.850 "avg_latency_us": 25849.656798012067, 00:43:06.850 "min_latency_us": 7573.0488888888885, 00:43:06.850 "max_latency_us": 33981.62962962963 00:43:06.850 } 00:43:06.850 ], 00:43:06.850 "core_count": 1 00:43:06.850 } 00:43:06.850 14:58:58 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:43:06.850 14:58:58 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:43:07.108 14:58:58 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:43:07.108 14:58:58 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:43:07.108 14:58:58 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:43:07.108 14:58:58 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:43:07.108 14:58:58 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:43:07.108 14:58:58 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:07.366 14:58:59 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:43:07.366 14:58:59 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:43:07.366 14:58:59 keyring_linux -- keyring/linux.sh@23 -- # return 00:43:07.366 14:58:59 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:43:07.366 14:58:59 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:43:07.366 14:58:59 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:43:07.366 14:58:59 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:43:07.366 14:58:59 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:43:07.366 14:58:59 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:43:07.366 14:58:59 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:43:07.366 14:58:59 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:43:07.366 14:58:59 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:43:07.624 [2024-11-02 14:58:59.502735] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:43:07.624 [2024-11-02 14:58:59.503324] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d35ea0 (107): Transport endpoint is not connected 00:43:07.624 [2024-11-02 14:58:59.504313] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d35ea0 (9): Bad file descriptor 00:43:07.624 [2024-11-02 14:58:59.505312] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:43:07.624 [2024-11-02 14:58:59.505332] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:43:07.624 [2024-11-02 14:58:59.505345] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:43:07.624 [2024-11-02 14:58:59.505361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:43:07.624 request: 00:43:07.624 { 00:43:07.624 "name": "nvme0", 00:43:07.624 "trtype": "tcp", 00:43:07.624 "traddr": "127.0.0.1", 00:43:07.624 "adrfam": "ipv4", 00:43:07.624 "trsvcid": "4420", 00:43:07.624 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:07.624 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:07.624 "prchk_reftag": false, 00:43:07.624 "prchk_guard": false, 00:43:07.624 "hdgst": false, 00:43:07.624 "ddgst": false, 00:43:07.624 "psk": ":spdk-test:key1", 00:43:07.624 "allow_unrecognized_csi": false, 00:43:07.624 "method": "bdev_nvme_attach_controller", 00:43:07.624 "req_id": 1 00:43:07.624 } 00:43:07.624 Got JSON-RPC error response 00:43:07.624 response: 00:43:07.624 { 00:43:07.624 "code": -5, 00:43:07.624 "message": "Input/output error" 00:43:07.624 } 00:43:07.624 14:58:59 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:43:07.624 14:58:59 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:43:07.624 14:58:59 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:43:07.624 14:58:59 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:43:07.624 14:58:59 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:43:07.624 14:58:59 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:43:07.624 14:58:59 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:43:07.624 14:58:59 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:43:07.624 14:58:59 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:43:07.624 14:58:59 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:43:07.624 14:58:59 keyring_linux -- keyring/linux.sh@33 -- # sn=347779613 00:43:07.624 14:58:59 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 347779613 00:43:07.624 1 links removed 00:43:07.624 14:58:59 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:43:07.624 14:58:59 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:43:07.624 14:58:59 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:43:07.624 14:58:59 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:43:07.624 14:58:59 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:43:07.624 14:58:59 keyring_linux -- keyring/linux.sh@33 -- # sn=624441885 00:43:07.624 14:58:59 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 624441885 00:43:07.624 1 links removed 00:43:07.624 14:58:59 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1615989 00:43:07.624 14:58:59 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 1615989 ']' 00:43:07.624 14:58:59 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 1615989 00:43:07.624 14:58:59 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:43:07.624 14:58:59 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:43:07.624 14:58:59 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1615989 00:43:07.624 14:58:59 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:43:07.624 14:58:59 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:43:07.624 14:58:59 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1615989' 00:43:07.624 killing process with pid 1615989 00:43:07.624 14:58:59 keyring_linux -- common/autotest_common.sh@969 -- # kill 1615989 00:43:07.624 Received shutdown signal, test time was about 1.000000 seconds 00:43:07.624 00:43:07.624 
Latency(us) 00:43:07.624 [2024-11-02T13:58:59.679Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:07.624 [2024-11-02T13:58:59.679Z] =================================================================================================================== 00:43:07.624 [2024-11-02T13:58:59.679Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:43:07.624 14:58:59 keyring_linux -- common/autotest_common.sh@974 -- # wait 1615989 00:43:07.883 14:58:59 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1615860 00:43:07.883 14:58:59 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 1615860 ']' 00:43:07.883 14:58:59 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 1615860 00:43:07.883 14:58:59 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:43:07.883 14:58:59 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:43:07.883 14:58:59 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1615860 00:43:07.883 14:58:59 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:43:07.883 14:58:59 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:43:07.883 14:58:59 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1615860' 00:43:07.883 killing process with pid 1615860 00:43:07.883 14:58:59 keyring_linux -- common/autotest_common.sh@969 -- # kill 1615860 00:43:07.883 14:58:59 keyring_linux -- common/autotest_common.sh@974 -- # wait 1615860 00:43:08.449 00:43:08.449 real 0m5.303s 00:43:08.449 user 0m9.984s 00:43:08.449 sys 0m1.668s 00:43:08.449 14:59:00 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:08.449 14:59:00 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:43:08.449 ************************************ 00:43:08.449 END TEST keyring_linux 00:43:08.449 ************************************ 00:43:08.449 14:59:00 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:43:08.449 14:59:00 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:43:08.449 14:59:00 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:43:08.449 14:59:00 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:43:08.449 14:59:00 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:43:08.449 14:59:00 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:43:08.449 14:59:00 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:43:08.449 14:59:00 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:43:08.449 14:59:00 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:43:08.449 14:59:00 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:43:08.449 14:59:00 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:43:08.449 14:59:00 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:43:08.449 14:59:00 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:43:08.449 14:59:00 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:43:08.449 14:59:00 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:43:08.449 14:59:00 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:43:08.449 14:59:00 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:43:08.449 14:59:00 -- common/autotest_common.sh@724 -- # xtrace_disable 00:43:08.449 14:59:00 -- common/autotest_common.sh@10 -- # set +x 00:43:08.449 14:59:00 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:43:08.449 14:59:00 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:43:08.449 14:59:00 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:43:08.449 14:59:00 -- common/autotest_common.sh@10 -- # set +x 00:43:10.414 INFO: APP EXITING 
00:43:10.414 INFO: killing all VMs 00:43:10.414 INFO: killing vhost app 00:43:10.414 INFO: EXIT DONE 00:43:11.350 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:43:11.350 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:43:11.350 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:43:11.350 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:43:11.350 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:43:11.350 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:43:11.350 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:43:11.350 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:43:11.350 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:43:11.350 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:43:11.350 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:43:11.350 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:43:11.350 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:43:11.350 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:43:11.350 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:43:11.350 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:43:11.350 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:43:12.725 Cleaning 00:43:12.725 Removing: /var/run/dpdk/spdk0/config 00:43:12.725 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:43:12.725 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:43:12.725 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:43:12.725 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:43:12.725 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:43:12.725 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:43:12.725 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:43:12.725 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:43:12.725 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:43:12.725 Removing: /var/run/dpdk/spdk0/hugepage_info 00:43:12.725 Removing: /var/run/dpdk/spdk1/config 00:43:12.725 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:43:12.725 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:43:12.725 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:43:12.725 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:43:12.725 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:43:12.725 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:43:12.725 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:43:12.725 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:43:12.725 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:43:12.725 Removing: /var/run/dpdk/spdk1/hugepage_info 00:43:12.725 Removing: /var/run/dpdk/spdk2/config 00:43:12.725 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:43:12.725 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:43:12.725 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:43:12.725 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:43:12.725 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:43:12.725 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:43:12.725 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:43:12.725 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:43:12.725 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:43:12.725 Removing: /var/run/dpdk/spdk2/hugepage_info 00:43:12.725 Removing: /var/run/dpdk/spdk3/config 00:43:12.726 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:43:12.726 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:43:12.726 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:43:12.726 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:43:12.726 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:43:12.726 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:43:12.726 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:43:12.726 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:43:12.726 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:43:12.726 Removing: /var/run/dpdk/spdk3/hugepage_info 00:43:12.726 Removing: /var/run/dpdk/spdk4/config 00:43:12.726 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:43:12.726 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:43:12.726 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:43:12.726 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:43:12.726 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:43:12.726 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:43:12.726 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:43:12.726 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:43:12.726 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:43:12.726 Removing: /var/run/dpdk/spdk4/hugepage_info 00:43:12.726 Removing: /dev/shm/bdev_svc_trace.1 00:43:12.726 Removing: /dev/shm/nvmf_trace.0 00:43:12.726 Removing: /dev/shm/spdk_tgt_trace.pid1233284 00:43:12.726 Removing: /var/run/dpdk/spdk0 00:43:12.726 Removing: /var/run/dpdk/spdk1 00:43:12.726 Removing: /var/run/dpdk/spdk2 00:43:12.726 Removing: /var/run/dpdk/spdk3 00:43:12.726 Removing: /var/run/dpdk/spdk4 00:43:12.726 Removing: /var/run/dpdk/spdk_pid1231601 00:43:12.726 Removing: /var/run/dpdk/spdk_pid1232344 00:43:12.726 Removing: /var/run/dpdk/spdk_pid1233284 00:43:12.726 Removing: /var/run/dpdk/spdk_pid1233660 00:43:12.726 Removing: /var/run/dpdk/spdk_pid1234306 00:43:12.726 Removing: /var/run/dpdk/spdk_pid1234450 00:43:12.726 Removing: /var/run/dpdk/spdk_pid1235169 00:43:12.726 Removing: /var/run/dpdk/spdk_pid1235291 00:43:12.726 Removing: /var/run/dpdk/spdk_pid1235559 00:43:12.726 Removing: /var/run/dpdk/spdk_pid1236835 00:43:12.726 Removing: /var/run/dpdk/spdk_pid1237684 00:43:12.726 Removing: /var/run/dpdk/spdk_pid1237995 00:43:12.726 Removing: /var/run/dpdk/spdk_pid1238201 00:43:12.726 Removing: /var/run/dpdk/spdk_pid1238530 00:43:12.726 Removing: /var/run/dpdk/spdk_pid1238733 00:43:12.726 Removing: /var/run/dpdk/spdk_pid1238890 00:43:12.726 Removing: /var/run/dpdk/spdk_pid1239042 00:43:12.726 Removing: /var/run/dpdk/spdk_pid1239239 00:43:12.726 Removing: /var/run/dpdk/spdk_pid1239814 00:43:12.726 Removing: /var/run/dpdk/spdk_pid1242305 00:43:12.726 Removing: /var/run/dpdk/spdk_pid1242476 00:43:12.726 Removing: /var/run/dpdk/spdk_pid1242636 00:43:12.726 Removing: /var/run/dpdk/spdk_pid1242765 00:43:12.726 Removing: /var/run/dpdk/spdk_pid1243070 00:43:12.726 Removing: /var/run/dpdk/spdk_pid1243199 00:43:12.726 Removing: /var/run/dpdk/spdk_pid1243527 00:43:12.726 Removing: /var/run/dpdk/spdk_pid1243652 00:43:12.726 Removing: /var/run/dpdk/spdk_pid1243825 00:43:12.726 Removing: /var/run/dpdk/spdk_pid1243949 00:43:12.726 Removing: /var/run/dpdk/spdk_pid1244120 00:43:12.726 Removing: /var/run/dpdk/spdk_pid1244130 00:43:12.726 Removing: /var/run/dpdk/spdk_pid1244624 00:43:12.726 Removing: /var/run/dpdk/spdk_pid1244783 00:43:12.726 Removing: /var/run/dpdk/spdk_pid1244991 00:43:12.726 Removing: 
/var/run/dpdk/spdk_pid1247210 00:43:12.726 Removing: /var/run/dpdk/spdk_pid1249734 00:43:12.726 Removing: /var/run/dpdk/spdk_pid1257479 00:43:12.726 Removing: /var/run/dpdk/spdk_pid1257888 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1260408 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1260685 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1263218 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1266940 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1269131 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1275664 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1280920 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1282128 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1282797 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1293524 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1296076 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1351259 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1355048 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1359005 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1362867 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1362873 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1363522 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1364171 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1364711 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1365241 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1365250 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1365500 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1365524 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1365640 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1366182 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1366829 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1367487 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1367883 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1367891 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1368032 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1369041 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1369773 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1375078 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1404411 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1407330 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1408463 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1409726 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1409855 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1409994 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1410135 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1410698 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1412013 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1412755 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1413186 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1414881 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1415244 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1415801 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1418201 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1421599 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1421600 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1421601 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1423767 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1425902 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1429426 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1452358 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1455112 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1458874 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1459817 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1461020 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1462502 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1465454 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1467698 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1471942 00:43:12.985 Removing: 
/var/run/dpdk/spdk_pid1472049 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1474830 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1474964 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1475099 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1475368 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1475493 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1476573 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1477749 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1478927 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1480102 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1481288 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1482523 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1486392 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1486733 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1488011 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1488754 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1492547 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1495053 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1498475 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1501795 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1508274 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1512740 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1512746 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1525409 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1525898 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1526336 00:43:12.985 Removing: /var/run/dpdk/spdk_pid1526866 00:43:12.986 Removing: /var/run/dpdk/spdk_pid1527798 00:43:12.986 Removing: /var/run/dpdk/spdk_pid1528360 00:43:12.986 Removing: /var/run/dpdk/spdk_pid1528875 00:43:12.986 Removing: /var/run/dpdk/spdk_pid1529284 00:43:12.986 Removing: /var/run/dpdk/spdk_pid1531789 00:43:12.986 Removing: /var/run/dpdk/spdk_pid1531943 00:43:12.986 Removing: /var/run/dpdk/spdk_pid1535735 00:43:12.986 Removing: /var/run/dpdk/spdk_pid1535906 00:43:13.244 Removing: /var/run/dpdk/spdk_pid1539261 00:43:13.244 Removing: /var/run/dpdk/spdk_pid1541758 00:43:13.244 Removing: /var/run/dpdk/spdk_pid1548662 00:43:13.244 Removing: /var/run/dpdk/spdk_pid1549068 00:43:13.244 Removing: /var/run/dpdk/spdk_pid1551567 00:43:13.244 Removing: /var/run/dpdk/spdk_pid1551835 00:43:13.244 Removing: /var/run/dpdk/spdk_pid1554346 00:43:13.244 Removing: /var/run/dpdk/spdk_pid1558060 00:43:13.244 Removing: /var/run/dpdk/spdk_pid1560304 00:43:13.244 Removing: /var/run/dpdk/spdk_pid1567177 00:43:13.244 Removing: /var/run/dpdk/spdk_pid1572373 00:43:13.244 Removing: /var/run/dpdk/spdk_pid1573670 00:43:13.244 Removing: /var/run/dpdk/spdk_pid1574330 00:43:13.244 Removing: /var/run/dpdk/spdk_pid1584400 00:43:13.244 Removing: /var/run/dpdk/spdk_pid1586652 00:43:13.244 Removing: /var/run/dpdk/spdk_pid1588641 00:43:13.244 Removing: /var/run/dpdk/spdk_pid1593687 00:43:13.244 Removing: /var/run/dpdk/spdk_pid1593726 00:43:13.244 Removing: /var/run/dpdk/spdk_pid1596663 00:43:13.244 Removing: /var/run/dpdk/spdk_pid1598700 00:43:13.244 Removing: /var/run/dpdk/spdk_pid1600127 00:43:13.244 Removing: /var/run/dpdk/spdk_pid1600872 00:43:13.244 Removing: /var/run/dpdk/spdk_pid1602394 00:43:13.244 Removing: /var/run/dpdk/spdk_pid1603148 00:43:13.244 Removing: /var/run/dpdk/spdk_pid1608447 00:43:13.244 Removing: /var/run/dpdk/spdk_pid1608807 00:43:13.244 Removing: /var/run/dpdk/spdk_pid1609201 00:43:13.244 Removing: /var/run/dpdk/spdk_pid1610755 00:43:13.244 Removing: /var/run/dpdk/spdk_pid1611034 00:43:13.244 Removing: /var/run/dpdk/spdk_pid1611431 00:43:13.244 Removing: /var/run/dpdk/spdk_pid1613886 00:43:13.244 Removing: /var/run/dpdk/spdk_pid1613896 00:43:13.245 Removing: 
/var/run/dpdk/spdk_pid1615387 00:43:13.245 Removing: /var/run/dpdk/spdk_pid1615860 00:43:13.245 Removing: /var/run/dpdk/spdk_pid1615989 00:43:13.245 Clean 00:43:13.245 14:59:05 -- common/autotest_common.sh@1451 -- # return 0 00:43:13.245 14:59:05 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:43:13.245 14:59:05 -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:13.245 14:59:05 -- common/autotest_common.sh@10 -- # set +x 00:43:13.245 14:59:05 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:43:13.245 14:59:05 -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:13.245 14:59:05 -- common/autotest_common.sh@10 -- # set +x 00:43:13.245 14:59:05 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:43:13.245 14:59:05 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:43:13.245 14:59:05 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:43:13.245 14:59:05 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:43:13.245 14:59:05 -- spdk/autotest.sh@394 -- # hostname 00:43:13.245 14:59:05 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:43:13.503 geninfo: WARNING: invalid characters removed from testname! 00:43:52.208 14:59:42 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:54.745 14:59:46 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:58.036 14:59:49 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:44:00.574 14:59:52 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:44:03.866 14:59:55 -- spdk/autotest.sh@402 -- # lcov --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:44:06.403 14:59:58 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:44:09.696 15:00:01 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:44:09.696 15:00:01 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:44:09.696 15:00:01 -- common/autotest_common.sh@1681 -- $ lcov --version 00:44:09.696 15:00:01 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:44:09.696 15:00:01 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:44:09.696 15:00:01 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:44:09.696 15:00:01 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:44:09.696 15:00:01 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:44:09.696 15:00:01 -- scripts/common.sh@336 -- $ IFS=.-: 00:44:09.696 15:00:01 -- scripts/common.sh@336 -- $ read -ra ver1 00:44:09.696 15:00:01 -- scripts/common.sh@337 -- $ IFS=.-: 00:44:09.696 15:00:01 -- scripts/common.sh@337 -- $ read -ra ver2 00:44:09.696 15:00:01 -- scripts/common.sh@338 -- $ local 'op=<' 00:44:09.696 15:00:01 -- scripts/common.sh@340 -- $ ver1_l=2 00:44:09.696 15:00:01 -- scripts/common.sh@341 -- $ ver2_l=1 00:44:09.696 15:00:01 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:44:09.696 15:00:01 -- scripts/common.sh@344 -- $ case "$op" in 00:44:09.696 15:00:01 -- scripts/common.sh@345 -- $ : 1 00:44:09.696 15:00:01 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:44:09.696 15:00:01 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:44:09.696 15:00:01 -- scripts/common.sh@365 -- $ decimal 1 00:44:09.696 15:00:01 -- scripts/common.sh@353 -- $ local d=1 00:44:09.696 15:00:01 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:44:09.696 15:00:01 -- scripts/common.sh@355 -- $ echo 1 00:44:09.696 15:00:01 -- scripts/common.sh@365 -- $ ver1[v]=1 00:44:09.696 15:00:01 -- scripts/common.sh@366 -- $ decimal 2 00:44:09.696 15:00:01 -- scripts/common.sh@353 -- $ local d=2 00:44:09.696 15:00:01 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:44:09.696 15:00:01 -- scripts/common.sh@355 -- $ echo 2 00:44:09.696 15:00:01 -- scripts/common.sh@366 -- $ ver2[v]=2 00:44:09.696 15:00:01 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:44:09.696 15:00:01 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:44:09.696 15:00:01 -- scripts/common.sh@368 -- $ return 0 00:44:09.696 15:00:01 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:09.696 15:00:01 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS= 00:44:09.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:09.696 --rc genhtml_branch_coverage=1 00:44:09.696 --rc genhtml_function_coverage=1 00:44:09.696 --rc genhtml_legend=1 00:44:09.696 --rc geninfo_all_blocks=1 00:44:09.696 --rc geninfo_unexecuted_blocks=1 00:44:09.696 00:44:09.696 ' 00:44:09.696 15:00:01 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS=' 00:44:09.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:09.696 --rc genhtml_branch_coverage=1 00:44:09.696 --rc genhtml_function_coverage=1 00:44:09.696 --rc genhtml_legend=1 00:44:09.696 --rc geninfo_all_blocks=1 00:44:09.696 --rc geninfo_unexecuted_blocks=1 00:44:09.696 00:44:09.696 ' 00:44:09.696 15:00:01 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov 00:44:09.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:09.696 --rc genhtml_branch_coverage=1 00:44:09.696 --rc genhtml_function_coverage=1 00:44:09.696 --rc genhtml_legend=1 00:44:09.696 --rc geninfo_all_blocks=1 00:44:09.696 --rc geninfo_unexecuted_blocks=1 00:44:09.696 00:44:09.696 ' 00:44:09.696 15:00:01 -- common/autotest_common.sh@1695 -- $ LCOV='lcov 00:44:09.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:09.696 --rc genhtml_branch_coverage=1 00:44:09.696 --rc genhtml_function_coverage=1 00:44:09.696 --rc genhtml_legend=1 00:44:09.696 --rc geninfo_all_blocks=1 00:44:09.696 --rc geninfo_unexecuted_blocks=1 00:44:09.696 00:44:09.696 ' 00:44:09.696 15:00:01 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:09.696 15:00:01 -- scripts/common.sh@15 -- $ shopt -s extglob 00:44:09.697 15:00:01 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:44:09.697 15:00:01 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:09.697 15:00:01 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:09.697 15:00:01 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:09.697 15:00:01 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:09.697 15:00:01 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:09.697 15:00:01 -- paths/export.sh@5 -- $ export PATH 00:44:09.697 15:00:01 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:09.697 15:00:01 -- common/autobuild_common.sh@478 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:44:09.697 15:00:01 -- common/autobuild_common.sh@479 -- $ date +%s 00:44:09.697 15:00:01 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1730556001.XXXXXX 00:44:09.697 15:00:01 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1730556001.t8wbzR 00:44:09.697 15:00:01 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:44:09.697 15:00:01 -- common/autobuild_common.sh@485 -- $ '[' -n v23.11 ']' 00:44:09.697 15:00:01 -- common/autobuild_common.sh@486 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:44:09.697 15:00:01 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:44:09.697 15:00:01 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:44:09.697 15:00:01 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:44:09.697 15:00:01 -- common/autobuild_common.sh@495 -- $ get_config_params 00:44:09.697 15:00:01 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:44:09.697 15:00:01 -- common/autotest_common.sh@10 -- $ set +x 00:44:09.697 15:00:01 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:44:09.697 15:00:01 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:44:09.697 15:00:01 -- pm/common@17 -- $ local monitor 00:44:09.697 15:00:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:44:09.697 15:00:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:44:09.697 15:00:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:44:09.697 
15:00:01 -- pm/common@21 -- $ date +%s 00:44:09.697 15:00:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:44:09.697 15:00:01 -- pm/common@21 -- $ date +%s 00:44:09.697 15:00:01 -- pm/common@25 -- $ sleep 1 00:44:09.697 15:00:01 -- pm/common@21 -- $ date +%s 00:44:09.697 15:00:01 -- pm/common@21 -- $ date +%s 00:44:09.697 15:00:01 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1730556001 00:44:09.697 15:00:01 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1730556001 00:44:09.697 15:00:01 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1730556001 00:44:09.697 15:00:01 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1730556001 00:44:09.697 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1730556001_collect-vmstat.pm.log 00:44:09.697 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1730556001_collect-cpu-load.pm.log 00:44:09.697 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1730556001_collect-cpu-temp.pm.log 00:44:09.697 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1730556001_collect-bmc-pm.bmc.pm.log 00:44:10.636 15:00:02 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:44:10.636 15:00:02 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:44:10.636 15:00:02 -- spdk/autopackage.sh@14 -- $ timing_finish 00:44:10.636 15:00:02 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:44:10.636 15:00:02 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:44:10.636 15:00:02 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:44:10.636 15:00:02 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:44:10.636 15:00:02 -- pm/common@29 -- $ signal_monitor_resources TERM 00:44:10.636 15:00:02 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:44:10.636 15:00:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:44:10.636 15:00:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:44:10.636 15:00:02 -- pm/common@44 -- $ pid=1628370 00:44:10.636 15:00:02 -- pm/common@50 -- $ kill -TERM 1628370 00:44:10.636 15:00:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:44:10.636 15:00:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:44:10.636 15:00:02 -- pm/common@44 -- $ pid=1628372 00:44:10.636 15:00:02 -- pm/common@50 -- $ kill -TERM 1628372 00:44:10.636 15:00:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:44:10.636 
15:00:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:44:10.636 15:00:02 -- pm/common@44 -- $ pid=1628374 00:44:10.636 15:00:02 -- pm/common@50 -- $ kill -TERM 1628374 00:44:10.636 15:00:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:44:10.636 15:00:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:44:10.636 15:00:02 -- pm/common@44 -- $ pid=1628402 00:44:10.636 15:00:02 -- pm/common@50 -- $ sudo -E kill -TERM 1628402 00:44:10.636 + [[ -n 1139180 ]] 00:44:10.636 + sudo kill 1139180 00:44:10.646 [Pipeline] } 00:44:10.662 [Pipeline] // stage 00:44:10.667 [Pipeline] } 00:44:10.683 [Pipeline] // timeout 00:44:10.688 [Pipeline] } 00:44:10.703 [Pipeline] // catchError 00:44:10.709 [Pipeline] } 00:44:10.725 [Pipeline] // wrap 00:44:10.731 [Pipeline] } 00:44:10.745 [Pipeline] // catchError 00:44:10.754 [Pipeline] stage 00:44:10.756 [Pipeline] { (Epilogue) 00:44:10.769 [Pipeline] catchError 00:44:10.770 [Pipeline] { 00:44:10.783 [Pipeline] echo 00:44:10.784 Cleanup processes 00:44:10.791 [Pipeline] sh 00:44:11.074 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:44:11.074 1628579 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:44:11.075 1628699 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:44:11.090 [Pipeline] sh 00:44:11.373 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:44:11.373 ++ grep -v 'sudo pgrep' 00:44:11.373 ++ awk '{print $1}' 00:44:11.373 + sudo kill -9 1628579 00:44:11.386 [Pipeline] sh 00:44:11.669 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:44:23.887 [Pipeline] sh 00:44:24.173 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:44:24.173 Artifacts sizes are good 00:44:24.187 [Pipeline] archiveArtifacts 00:44:24.195 Archiving artifacts 00:44:24.393 [Pipeline] sh 00:44:24.736 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:44:24.750 [Pipeline] cleanWs 00:44:24.760 [WS-CLEANUP] Deleting project workspace... 00:44:24.760 [WS-CLEANUP] Deferred wipeout is used... 00:44:24.767 [WS-CLEANUP] done 00:44:24.769 [Pipeline] } 00:44:24.787 [Pipeline] // catchError 00:44:24.801 [Pipeline] sh 00:44:25.081 + logger -p user.info -t JENKINS-CI 00:44:25.090 [Pipeline] } 00:44:25.104 [Pipeline] // stage 00:44:25.109 [Pipeline] } 00:44:25.124 [Pipeline] // node 00:44:25.130 [Pipeline] End of Pipeline 00:44:25.179 Finished: SUCCESS
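[Annotation] For reference, the coverage post-processing that runs between the hostname step and autopackage above boils down to one capture, one merge, and a series of filters. A condensed sketch with the long workspace paths shortened to $src/$out and the genhtml --rc options omitted; the flags shown are the ones visible in the trace, and cov_base.info is assumed to be the baseline captured earlier in the run:

    LCOV="lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"
    src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    out=$src/../output

    # Capture coverage from the instrumented tree for this test run.
    $LCOV -q -c --no-external -d "$src" -t spdk-gp-11 -o "$out/cov_test.info"
    # Merge with the baseline capture, then strip external and uninteresting code.
    $LCOV -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
    $LCOV -q -r "$out/cov_total.info" '*/dpdk/*' -o "$out/cov_total.info"
    $LCOV -q -r "$out/cov_total.info" --ignore-errors unused,unused '/usr/*' -o "$out/cov_total.info"
    $LCOV -q -r "$out/cov_total.info" '*/examples/vmd/*' -o "$out/cov_total.info"
    $LCOV -q -r "$out/cov_total.info" '*/app/spdk_lspci/*' -o "$out/cov_total.info"
    $LCOV -q -r "$out/cov_total.info" '*/app/spdk_top/*' -o "$out/cov_total.info"
    rm -f "$out/cov_base.info" "$out/cov_test.info"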